Nitrite
The nitrite ion has the chemical formula NO2−. Nitrite (mostly sodium nitrite) is widely used throughout the chemical and pharmaceutical industries. The nitrite anion is a pervasive intermediate in the nitrogen cycle in nature. The name nitrite also refers to organic compounds having the –ONO group, which are esters of nitrous acid.
Production
Sodium nitrite is made industrially by passing a mixture of nitrogen oxides into aqueous sodium hydroxide or sodium carbonate solution:
NO + NO2 + 2 NaOH → 2 NaNO2 + H2O
The product is purified by recrystallization. Alkali metal nitrites are thermally stable up to and beyond their melting point (441 °C for KNO2). Ammonium nitrite can be made from dinitrogen trioxide, N2O3, which is formally the anhydride of nitrous acid:
2 NH3 + H2O + N2O3 → 2 NH4NO2
Structure
The nitrite ion has a symmetrical structure (C2v symmetry), with both N–O bonds having equal length and a bond angle of about 115°. In valence bond theory, it is described as a resonance hybrid with equal contributions from two canonical forms that are mirror images of each other. In molecular orbital theory, there is a sigma bond between each oxygen atom and the nitrogen atom, and a delocalized pi bond made from the p orbitals on nitrogen and oxygen atoms which is perpendicular to the plane of the molecule. The negative charge of the ion is equally distributed on the two oxygen atoms. Both nitrogen and oxygen atoms carry a lone pair of electrons. Therefore, the nitrite ion is a Lewis base.
In the gas phase it exists predominantly as a trans-planar molecule.
Reactions
Acid-base properties
Nitrite is the conjugate base of the weak acid nitrous acid:
HNO2 ⇌ H+ + NO2−; pKa ≈ 3.3 at 18 °C
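As a worked illustration of this equilibrium, the nitrite/nitrous-acid speciation at any pH follows from the Henderson–Hasselbalch relation. The sketch below uses only the pKa quoted above; the function name and example pH values are illustrative, not from the source.

```python
# Henderson–Hasselbalch: pH = pKa + log10([NO2-]/[HNO2]),
# so the base/acid ratio at a given pH is 10**(pH - pKa).

PKA_HNO2 = 3.3  # at 18 °C, as quoted in the text

def nitrite_fraction(ph: float, pka: float = PKA_HNO2) -> float:
    """Fraction of total nitrite present as the free NO2- anion at a given pH."""
    ratio = 10 ** (ph - pka)  # [NO2-] / [HNO2]
    return ratio / (1.0 + ratio)

# In strongly acidic solution much of the nitrite is protonated to HNO2,
# while near neutral pH essentially all of it is the free anion.
print(f"pH 2: {nitrite_fraction(2.0):.3f}")  # ~0.048
print(f"pH 7: {nitrite_fraction(7.0):.4f}")  # ~0.9998
```

This is why acidifying a nitrite solution (as in the disproportionation below) generates nitrous acid in the first place.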
Nitrous acid is also highly unstable, tending to disproportionate:
3 HNO2 (aq) ⇌ H3O+ + NO3− + 2 NO
This reaction is slow at 0 °C. Addition of acid to a solution of a nitrite in the presence of a reducing agent, such as iron(II), is a way to make nitric oxide (NO) in the laboratory.
Oxidation and reduction
The formal oxidation state of the nitrogen atom in nitrite is +3. This means that it can be either oxidized to oxidation states +4 and +5, or reduced to oxidation states as low as −3. Standard reduction potentials for reactions directly involving nitrous acid are shown in the table below:
Half-reaction | E0 (V)
NO3− + 3 H+ + 2 e− ⇌ HNO2 + H2O | +0.94
2 HNO2 + 4 H+ + 4 e− ⇌ H2N2O2 + 2 H2O | +0.86
N2O4 + 2 H+ + 2 e− ⇌ 2 HNO2 | +1.065
2 HNO2 + 4 H+ + 4 e− ⇌ N2O + 3 H2O | +1.29
The data can be extended to include products in lower oxidation states. For example:
H2N2O2 + 2 H+ + 2 e− ⇌ N2 + 2 H2O; E0 = +2.65 V
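These standard potentials convert directly to standard free-energy changes via ΔG° = −nFE°. The short sketch below applies this to the nitrate/nitrous acid couple from the table above; the helper name is my own.

```python
# Standard free energy from a standard potential: ΔG° = −n·F·E°
FARADAY = 96485.0  # Faraday constant, C per mol of electrons

def delta_g_kj(n_electrons: int, e_standard_v: float) -> float:
    """Standard free-energy change (kJ/mol) for an n-electron half-reaction."""
    return -n_electrons * FARADAY * e_standard_v / 1000.0

# NO3- + 3 H+ + 2 e- -> HNO2 + H2O, E0 = +0.94 V:
print(f"{delta_g_kj(2, 0.94):.1f} kJ/mol")  # -181.4 (negative: reduction favourable)
```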
Oxidation reactions usually result in the formation of the nitrate ion, with nitrogen in oxidation state +5. For example, oxidation with permanganate ion can be used for quantitative analysis of nitrite (by titration):
5 NO2− + 2 MnO4− + 6 H+ → 5 NO3− + 2 Mn2+ + 3 H2O
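The 5:2 stoichiometry of this equation is what makes the titration quantitative: each mole of permanganate consumed corresponds to 2.5 moles of nitrite. A minimal worked example, with invented (purely illustrative) titration figures:

```python
# Permanganate titration of nitrite:
# 5 NO2- + 2 MnO4- + 6 H+ -> 5 NO3- + 2 Mn2+ + 3 H2O
# so moles NO2- = (5/2) * moles MnO4-.

def nitrite_molarity(v_mno4_ml: float, c_mno4: float, v_sample_ml: float) -> float:
    """Nitrite concentration (mol/L) from the permanganate titre."""
    mol_mno4 = c_mno4 * v_mno4_ml / 1000.0
    mol_no2 = 2.5 * mol_mno4  # 5:2 stoichiometric ratio
    return mol_no2 * 1000.0 / v_sample_ml

# e.g. 18.0 mL of 0.020 M KMnO4 to reach the endpoint for a 25.0 mL sample
print(f"{nitrite_molarity(18.0, 0.020, 25.0):.3f} M")  # 0.036 M
```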
The products of reduction reactions with the nitrite ion are varied, depending on the reducing agent used and its strength. With sulfur dioxide, the products are NO and N2O; with tin(II) (Sn2+) the product is hyponitrous acid (H2N2O2); reduction all the way to ammonia (NH3) occurs with hydrogen sulfide. With the hydrazinium cation (N2H5+) the product of nitrite reduction is hydrazoic acid (HN3), an unstable and explosive compound:
HNO2 + N2H5+ → HN3 + H2O + H3O+
which can also further react with nitrite:
HNO2 + HN3 → N2O + N2 + H2O
This reaction is unusual in that it involves compounds with nitrogen in four different oxidation states.
Analysis of nitrite
Nitrite is detected and analyzed by the Griess reaction, which forms a deep red azo dye upon treatment of a nitrite-containing sample with sulfanilic acid and 1-naphthylamine in the presence of acid.
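In practice the Griess assay is read colorimetrically: the dye's absorbance is linear in nitrite concentration (Beer–Lambert), so unknowns are read off a least-squares line fitted to standards. The sketch below uses hypothetical absorbance data to show the calibration step; none of the numbers come from the source.

```python
# Colorimetric calibration for the Griess assay: fit absorbance vs. standard
# nitrite concentration, then invert the line for an unknown sample.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Nitrite standards (µM) and their measured absorbances (hypothetical data)
conc = [0.0, 10.0, 20.0, 40.0]
absorbance = [0.00, 0.11, 0.22, 0.44]

slope, intercept = fit_line(conc, absorbance)
unknown = (0.33 - intercept) / slope  # invert the calibration line
print(f"{unknown:.1f} µM")  # 30.0 µM
```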
Coordination complexes
Nitrite is an ambidentate ligand and can form a wide variety of coordination complexes by binding to metal ions in several ways. One example is the red nitrito complex [Co(NH3)5(ONO)]2+, which is metastable and isomerizes to the yellow nitro complex [Co(NH3)5(NO2)]2+. Nitrite is processed by several enzymes, all of which utilize coordination complexes.
Biochemistry
In nitrification, ammonium is converted to nitrite. Important species include Nitrosomonas. Other bacterial species, such as Nitrobacter, are responsible for the oxidation of nitrite into nitrate.
Nitrite can be reduced to nitric oxide or ammonia by many species of bacteria. Under hypoxic conditions, nitrite may release nitric oxide, which causes potent vasodilation. Several mechanisms for nitrite conversion to NO have been described, including enzymatic reduction by xanthine oxidoreductase, nitrite reductase, and NO synthase (NOS), as well as nonenzymatic acidic disproportionation reactions.
Uses
Chemical precursor
Azo dyes and other colorants are prepared by the process called diazotization, which requires nitrite.
Nitrite in food preservation and biochemistry
The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of Clostridium botulinum.
Sodium nitrite is used to speed up the curing of meat and also to impart an attractive colour. On the other hand, a 2018 study by the British Meat Producers Association determined that legally permitted levels of nitrite do not affect the growth of C. botulinum. In the U.S., meat cannot be labeled as "cured" without the addition of nitrite. In some countries, cured-meat products are manufactured without nitrate or nitrite, and without nitrite from vegetable sources. Parma ham, produced without nitrite since 1993, was reported in 2018 to have caused no cases of botulism.
In mice, food rich in nitrites together with unsaturated fats can prevent hypertension by forming nitro fatty acids that inhibit soluble epoxide hydrolase, which is one explanation for the apparent health effect of the Mediterranean diet. Adding nitrites to meat has been shown to generate known carcinogens; the World Health Organization (WHO) advises that each 50 g portion of nitrite-processed meat eaten daily would raise the risk of bowel cancer by 18% over a lifetime.
The recommended maximum limits by the World Health Organization in drinking water are 3 mg L−1 and 50 mg L−1 for nitrite and nitrate ions, respectively. Ingesting too much nitrite and/or nitrate through well water is suspected to cause methemoglobinemia.
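These guideline values lend themselves to a simple screening check of a water analysis. The function below is an illustrative sketch: the limit values are taken from the text, while the function name and sample figures are invented.

```python
# WHO drinking-water guideline maxima, in mg/L (values from the text)
WHO_LIMITS_MG_L = {"nitrite": 3.0, "nitrate": 50.0}

def exceeds_who_limits(nitrite_mg_l: float, nitrate_mg_l: float) -> list:
    """Return the names of the ions whose measured level exceeds the WHO limit."""
    measured = {"nitrite": nitrite_mg_l, "nitrate": nitrate_mg_l}
    return [ion for ion, value in measured.items() if value > WHO_LIMITS_MG_L[ion]]

# e.g. a hypothetical well-water sample: 1.0 mg/L nitrite, 60.0 mg/L nitrate
print(exceeds_who_limits(1.0, 60.0))  # ['nitrate']
```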
95% of the nitrite ingested in modern diets comes from bacterial conversion of nitrates naturally found in vegetables. However, potentially cancer-causing nitroso compounds are not made in the pH-neutral colon. They are mostly made in the acidic stomach.
Curing of meat
Nitrite reacts with the meat's myoglobin by attaching to the heme iron atom, forming reddish-brown nitrosomyoglobin and the characteristic pink "fresh" color of nitrosohemochrome or nitrosyl-heme upon cooking. In the US, nitrite has been formally used since 1925. According to scientists working for the industry group American Meat Institute, this use of nitrite started in the Middle Ages. Historians and epidemiologists argue that the widespread use of nitrite in meat-curing is closely linked to the development of industrial meat-processing. A French investigative journalist has asserted that the meat industry chooses to cure its meats with nitrite even though it is established that this chemical gives rise to cancer-causing nitroso-compounds. Some traditional and artisanal producers avoid nitrites.
Addition of ascorbic acid, erythorbic acid, or one of their salts enhances the binding of nitrite to the iron atom in myoglobin. These chemicals also reduce the formation of nitrosamine in the stomach, but only when the fat content of a meal is less than 10%, beyond which they instead increase the formation of nitrosamine.
Antidote for cyanide poisoning
Nitrites in the form of sodium nitrite and amyl nitrite are components of many cyanide antidote kits. Both of these compounds bind to hemoglobin and oxidize the Fe2+ ions to Fe3+ ions forming methemoglobin. Methemoglobin, in turn, binds to cyanide (CN), creating cyanmethemoglobin, effectively removing cyanide from the complex IV of the electron transport chain (ETC) in mitochondria, which is the primary site of disruption caused by cyanide. Another mechanism by which nitrites help treat cyanide toxicity is the generation of nitric oxide (NO). NO displaces the CN from the cytochrome c oxidase (ETC complex IV), making it available for methemoglobin to bind.
Organic nitrites
In organic chemistry, alkyl nitrites are esters of nitrous acid and contain the nitrosoxy functional group. Nitro compounds contain the C–NO2 group. Nitrites have the general formula RONO, where R is an aryl or alkyl group. Amyl nitrite and other alkyl nitrites have a vasodilating action and must be handled in the laboratory with caution. They are sometimes used in medicine for the treatment of heart diseases. A classic named reaction for the synthesis of alkyl nitrites is the Meyer synthesis, in which alkyl halides react with metallic nitrites to give a mixture of nitroalkanes and nitrites.
Safety
Nitrite salts can react with secondary amines to produce N-nitrosamines, which are suspected of causing stomach cancer. The World Health Organization (WHO) advises that each 50 g portion of processed meat eaten daily would raise the risk of bowel cancer by 18% over a lifetime; processed meat refers to meat that has been transformed through fermentation, nitrite curing, salting, smoking, or other processes to enhance flavor or improve preservation. The World Health Organization's review of more than 400 studies concluded in 2015 that there was sufficient evidence that processed meats caused cancer, particularly colon cancer; the WHO's International Agency for Research on Cancer (IARC) classified processed meats as carcinogenic to humans (Group 1).
Nitrite (ingested) under conditions that result in endogenous nitrosation, specifically the production of nitrosamine, has been classified as "probably carcinogenic to humans" (Group 2A) by the IARC.
Fighter-bomber
A fighter-bomber is a fighter aircraft that has been modified, or used primarily, as a light bomber or attack aircraft. It differs from bomber and attack aircraft primarily in its origins, as a fighter that has been adapted into other roles, whereas bombers and attack aircraft are developed specifically for bombing and attack roles.
Although still used, the term fighter-bomber has less significance since the introduction of rockets and guided missiles into aerial warfare. Modern aircraft with similar duties are now typically called multirole combat aircraft or strike fighters.
Development
Prior to World War II, general limitations in available engine and aeronautical technology required that each proposed military aircraft have its design tailored to a specific prescribed role. Engine power grew dramatically during the early period of the war, roughly doubling between 1939 and 1943. The Bristol Blenheim, a typical light bomber of the opening stages of the war, was originally designed in 1934 as a fast civil transport to meet a challenge by Lord Rothermere, owner of the Daily Mail. It had two Bristol Mercury XV radial engines, a crew of three, and only a modest bomb load. The Blenheim suffered disastrous losses over France in 1939 when it encountered Messerschmitt Bf 109s, and light bombers were quickly withdrawn.
In contrast, the Vought F4U Corsair fighter—which entered service in December 1942—shared with its eventual U.S. Navy stablemate, the Grumman F6F Hellcat, and the massive, seven-ton USAAF Republic P-47 Thunderbolt a single Pratt & Whitney R-2800 Double Wasp radial engine in a much smaller, simpler and less expensive single-seat aircraft, and was the first aircraft design ever to fly with the Double Wasp engine, in May 1940. With less airframe and crew to lift, the Corsair's ordnance load was either four High Velocity Aircraft Rockets or a load of bombs; a later version could carry eight rockets or a heavier bomb load. The massive, powerful 18-cylinder Double Wasp engine weighed almost a ton—half as much again as the V12 Rolls-Royce Merlin and twice as much as the 9-cylinder Bristol Mercury that powered some heavy fighters.
Increased engine power meant that many existing fighter designs could carry useful bomb loads, and adapt to the fighter-bomber role. Notable examples include the Focke-Wulf Fw 190, Hawker Typhoon and Republic P-47 Thunderbolt. Various bombing tactics and techniques could also be used: some designs were intended for high-level bombing, others for the low-level semi-horizontal bombing, or even for low-level steep dive bombing as exemplified by the Blackburn Skua and North American A-36 Apache.
Larger twin-engined aircraft were also used in the fighter-bomber role, especially where longer ranges were needed for naval strikes. Examples include the Lockheed P-38 Lightning, the Bristol Beaufighter (developed from a torpedo bomber), and de Havilland Mosquito (developed from an unarmed fast bomber). The Beaufighter MkV had a Boulton-Paul turret with four 0.303 in (7.7 mm) machine guns mounted aft of the cockpit but only two were built. Bristol's Blenheim was even pushed into service as a fighter during the Battle of Britain but it was not fast enough. Equipped with an early Airborne Interception (AI) radar set, however, it proved to be an effective night fighter.
First World War
The first single-seat fighters to drop bombs were on the Western Front, when fighter patrols were issued with bombs and ordered to drop them at random if they met no German fighters. The Sopwith Camel, the most successful Allied aircraft of the First World War with 1,294 enemy aircraft downed, was losing its edge by 1918. During the final German offensive in March 1918, it dropped Cooper bombs on advancing columns: whilst puny by later standards, the four fragmentation bombs carried by a Camel could cause serious injuries to exposed troops. Pilot casualties were also high. The Royal Aircraft Factory S.E.5 was used in the same role.
The Royal Flying Corps received the first purpose-built fighter-bomber just as the war was ending. It was not called a fighter-bomber at the time, but a Trench Fighter, as trenches were what it was designed to attack. The Sopwith Salamander was based on the Sopwith Snipe fighter but had armour plating in the nose to protect the pilot and fuel system from ground fire. Originally it was intended to have two machine guns jutting through the cockpit floor so as to spray trenches with bullets as it passed low overhead, but this did not work, and it was fitted with four Cooper bombs instead. It was ordered in very large numbers, but most were cancelled after the Armistice.
In February and April 1918 the Royal Flying Corps conducted bombing tests at Orfordness, Suffolk, dropping dummy bombs at various dive angles at a flag stuck into a shingle beach. Both WW1 fighter-bombers were flown, by novice and experienced pilots alike. The best results were achieved with a vertical dive into the wind, using the Aldis sight to align the aircraft, but they were not considered good enough to justify the expected casualty rate.
Second World War
When war broke out in Europe, Western Allied air forces employed light twin-engined bombers in the tactical role for low-level attacks. These were found to be extremely vulnerable both to ground fire and to single-engine fighters. The German and Japanese air forces had chosen dive bombers, which were similarly vulnerable. The Soviet Ilyushin Il-2 was a heavily armoured two-seat single-engine ground-attack aircraft, but few had reached the Soviet Air Force in time for Operation Barbarossa. Naval forces chose both torpedo and dive bombers. None of these could be considered fighter-bombers, as they could not combat fighters.
Germany
During the Battle of Britain, the Luftwaffe conducted fighter-bomber attacks on the United Kingdom from September to December 1940. A larger fighter-bomber campaign was conducted against the UK from March 1942 until June 1943. These operations were successful in tying down Allied resources at a relatively low cost to the Luftwaffe, but the British Government regarded the campaign as a nuisance given the small scale of the individual raids.
In August 1941, RAF pilots reported encountering a very fast radial engine fighter over France. First thought to be captured French Curtiss 75 Mohawks, they turned out to be Focke-Wulf Fw 190s, slightly faster and more heavily armed than the current Spitfire V. Kurt Tank had designed the aircraft when the Spitfire and Bf 109 were the fastest fighters flying; he called them racehorses, fast but fragile. As a former World War I cavalryman, Tank chose to design a warhorse. With a BMW 801 radial engine, wide-set undercarriage, and two 20mm cannons as well as machine guns it became a better fighter-bomber than either of the pure fighters.
By mid-1942, the first of these Jagdbombers (literally "hunter-bombers", known for short as "Jabos") was operating over Kent. On October 31, 60 Fw 190s bombed Canterbury with only one aircraft lost, killing 32 civilians and injuring 116, in the largest raid since the Blitz. Flying at sea level, under the radar, these raids were hard to intercept. The Jabos reached the Eastern Front in time to bomb Russian positions in Stalingrad. By July 1943 Fw 190s were replacing the vulnerable Stukas over the Battle of Kursk: although winning the air war, they were unable to prevent subsequent Red Army advances.
On New Year's Day 1945 in Operation Bodenplatte, over 1,000 aircraft (including more than 600 Fw 190s) launched a last-ditch attempt to destroy Allied planes on the ground in support of the Battle of the Bulge. Allied fighter and fighter-bomber losses were downplayed at the time. Seventeen airfields were targeted, of which seven lost many aircraft. The surprise was complete, as the few Ultra intercepts had not been understood. At the worst hit, the Canadian base at Eindhoven, 26 Typhoons and 6 Spitfires were destroyed and another 30 Typhoons damaged. In total, 305 aircraft, mostly fighters and fighter-bombers, were destroyed and another 190 damaged. The Luftwaffe lost 143 pilots killed, 71 captured and 20 wounded, making it the worst one-day loss in its history; it never recovered.
United Kingdom
The Bristol Blenheim and Douglas A-20 Havoc (which the RAF called Boston) were used as night fighters during the Blitz, as they could carry the heavy early airborne radars.
The Hawker Henley, a two-seat version of the Battle of Britain-winning Hawker Hurricane, was designed as a dive bomber. It might have proved to be a capable fighter-bomber but overheating of its Rolls-Royce Merlin engine in this installation led to its relegation to a target tug role, where it could match the speed of the German bombers whilst towing a drone.
In 1934, the British Air Ministry called for a carrier aircraft that could combine the roles of the dive bomber and fighter, to save limited space on small carriers. The Blackburn Skua was not expected to encounter land-based fighters but was to intercept long-range bombers attacking the fleet and also to sink ships. As a two-seater, it could not fight the Messerschmitt Bf 109 on equal terms. But the second seat carried a radio operator with a homing device that could find the carrier even when it had moved, in foul North Sea weather. It achieved one of the first kills of the war, when three from HMS Ark Royal downed a German Dornier Do 18 flying boat over the North Sea.
On April 10, 1940, 16 Skuas operating from RNAS Hatston in Orkney under Commander William Lucy sank the German cruiser Königsberg which was tied to a mole in Bergen harbour. The Germans recorded five hits or near misses and as the ship started to sink, electric power failed, dooming the ship. The German cruiser Köln had departed during the night.
With the failure of the Hawker Henley and the gradual fading of the Hawker Hurricane's performance compared to the latest German fighters, the Hurricane was modified to carry four 20mm cannon and two bombs; once the bombs were jettisoned the aircraft could put up a reasonable fight. Inevitably the type became known in the RAF as the "Hurribomber", reaching squadrons in June 1941.
It was soon found that it was hardly possible to hit fast-moving Panzers in the Western Desert, with bombs and cannon fire making little impact on their armour. Daylight bombing raids were made on the French and Belgian coasts, targeting mostly oil and gas works. Losses were heavy, often more than the numbers of enemy fighters destroyed. By May 1942 Hurricane IICs with drop tanks were intruding at night over France. On the night of May 4–5, Czech pilot Karel Kuttelwascher, flying from RAF Tangmere with No 1 Squadron, shot down three Dornier Do 17s as they slowed to land at Saint-André-de-Bohon after raiding England.
On September 25, 1942, the Gestapo HQ in Oslo was attacked by four de Havilland Mosquitoes, which had flown low over the North Sea by dead-reckoning navigation from RAF Leuchars, Scotland, carrying four bombs each. The next day the RAF unveiled its new fast bomber. On December 31, 1944, the same type was used against the same target, this time from RAF Peterhead in Scotland, flying high and diving onto the building. In February 1941 the Mosquito, with two Rolls-Royce Merlin engines and a streamlined wooden fuselage, proved faster than the current Spitfire. It was used on all kinds of missions, including silencing Hermann Göring's Berlin Nazi anniversary broadcast on January 20, 1943, leading him to tell Erhard Milch, Air Inspector General, that "when I see the Mosquito I am yellow and green with envy. (The British) have the geniuses and we have the nincompoops."
Initially used for high-level photo-reconnaissance, the Mosquito was adapted to precision bombing, night fighter, and fighter bomber roles. It was built in Canada and Australia as well as the UK. Fitted with a British Army Ordnance QF 6 pounder (57 mm) gun it could sink U-boats found on the surface. On April 9, 1945, three were sunk en route to Norway, and in the following month, Mosquitos sank two more.
The Hawker Typhoon was being designed as a replacement for the Hurricane in March 1937 before production had even started. The reason was to take advantage of the new engines then being planned, either the Napier Sabre or Rolls-Royce Vulture which required a larger airframe than the nimble Hurricane. At the prototype stage, there were problems with the new engines and stability of the aircraft itself, which led the Minister of Aircraft Production, Lord Beaverbrook to decree that production must focus on Spitfires and Hurricanes.
The Typhoon disappointed as a fighter, especially at altitude, but found its true niche as a fighter-bomber from September 1942. It was fitted with racks to carry two bombs. By September 1943 it was fitted with eight RP-3 rockets, a salvo likened to the power of a naval destroyer's broadside.
Claims of German tanks destroyed by rocket-armed Typhoons in Normandy after D-Day were exaggerated. In Operation Goodwood, the attempt by British and Canadian forces to surround Caen, of 75 tanks recorded as lost by the Germans only 10 were found to be due to rocket-firing Typhoons.
At Mortain, where the German counter-offensive Operation Lüttich came close to cutting through US forces to Avranches, Typhoons destroyed 9 of 46 tanks lost, but were more effective against unarmoured vehicles and troops and caused armoured vehicles to seek cover. General Dwight D. Eisenhower, the Supreme Allied Commander, said "The chief credit in smashing the enemy's spearhead, however, must go to the rocket-firing Typhoon aircraft of the Second Tactical Air Force. The result of the strafing was that the enemy attack was effectively brought to a halt, and a threat was turned into a great victory".
The disparity between claims and actual destruction, at about 25 to 1, owed much to the difficulty of hitting a fast-moving tank with an unguided rocket, even from a stable aircraft like the Typhoon. But soft targets were simpler. When the 51st Highland Division moved to block German panzers reaching Antwerp in the Battle of the Bulge, Tommy Macpherson saw a half-track full of SS soldiers, powerful men, all apparently uninjured. All were dead, killed by the air blast from a Typhoon rocket.
The Bristol Beaufighter was a long-range twin-engine heavy fighter derived from the Bristol Beaufort torpedo bomber, but with Bristol Hercules radial engines to give it a higher top speed. By late 1942 the Beaufighter was also capable of carrying torpedoes or rockets. The main user was RAF Coastal Command, although it was also used by the Royal Australian Air Force, with some aircraft assembled in Australia, and by the USAAF.
Over 30 Beaufighters flying from RAF Dallachy in Scotland, from Australian, British, Canadian, and New Zealand squadrons, attacked the German destroyer Z33 sheltering in Førde Fjord, Norway. They were escorted by only 10 to 12 North American P-51 Mustangs. German destroyers escorted convoys of Swedish iron ore, which in winter were forced to creep along the Atlantic coast by night, hiding deep inside fjords by day. Z33 was moored close to the vertical cliffside of the fjord, so Beaufighters had to attack singly with rockets, without the normal tactic of simultaneous attacks by other Beaufighters firing cannon at the numerous flak gunners. Twelve Focke-Wulf Fw 190s surprised the Mustangs and Norway's biggest ever air battle was soon raging. Nine Beaufighters and one Mustang were lost, as were five Fw 190s. The destroyer was damaged, and February 9, 1945, became known as Black Friday.
Typhoons were involved in one of the worst tragedies at the end of the war, when four squadrons attacked the luxury liners SS Deutschland and SS Cap Arcona and two smaller ships, SS Athen and SS Thielbek, moored off Neustadt in Lübeck Bay. The Cap Arcona had 4,500 concentration camp inmates aboard and the Thielbek another 2,800, as well as SS guards. The Deutschland had a Red Cross flag painted on at least one funnel. The previous day the captain of the Cap Arcona had refused to take any more inmates on board. Inmates who returned to shore in longboats were gunned down by Hitler Jugend, SS guards and German marines. Of an estimated 14,500 prisoners in the area two days earlier, only 1,450 survived.
The Hawker Tempest was a development of the Typhoon using a thin wing with an aerofoil developed by NACA and a more powerful version of the Napier Sabre engine. At low level it was faster than any other Allied or German aircraft, but slower than the Spitfire at altitude. Fitted with four 20mm cannon it was a formidable fighter, respected even by Messerschmitt Me 262 jet fighter pilots as their most dangerous opponent. At its debut over the Normandy beaches on D-Day +2, Tempests shot down three German fighters without loss. Tempests supported the ambitious attempt to capture the bridge at Arnhem in Operation Market Garden in mid-September 1944. David C. Fairbanks, an American who joined the Royal Canadian Air Force, was the top Tempest ace with 12 victories, including an Arado Ar 234 jet bomber.
United States
General Henry H. Arnold, Chief of the United States Army Air Forces, urged the adoption of the Mosquito by the U.S. but was overruled by those who felt that the as yet untried Lockheed P-38 Lightning, also twin-engined, could fulfill the same role. Although the Lightning got its name from the RAF, the British eventually rejected it. Too slow and cumbersome to match Bf 109s as an escort fighter over Germany, it did fly over Normandy as a fighter-bomber, where one tried skip-bombing a bomb through the door of Field Marshal Günther von Kluge's OB West HQ. A Lightning squadron also killed Admiral Isoroku Yamamoto over Bougainville in the Pacific, acting on an Ultra intercept.
The Republic P-47 Thunderbolt was a larger, evolutionary development of the P-43/P-44 fighter undertaken after the United States Army Air Forces observed Messerschmitt Bf 109s performing in the Battle of Britain. It was a massive aircraft built around the powerful Pratt & Whitney R-2800 Double Wasp engine and weighed up to eight tons with ordnance. The P-47 was twice as heavy and had four times the fuselage size of a Spitfire. Armed with eight .50 in (12.7 mm) M2 Browning machine guns it could outshoot any enemy fighter, and as a fighter-bomber, it could carry half the bomb load of a Boeing B-17 Flying Fortress or 10 five-inch (127 mm) High Velocity Aircraft Rockets.
The first pilots to fly the Thunderbolt from England were Americans who had been flying Spitfires in the RAF before the U.S. joined the war. They were not impressed initially; the Thunderbolt lost out to the more nimble Spitfire so consistently in mock dogfights that these encounters were eventually banned. But by November 25, 1943 Thunderbolts had found their true niche, attacking a Luftwaffe airfield at Saint-Omer near Calais, France. On October 13, 1944, a Thunderbolt from 9th Air Force damaged the German Torpedoboot Ausland 38 (formerly the Italian 750 ton torpedo boat Spada) so badly near Trieste with gunfire alone that the ship was scuttled.
The Vought F4U Corsair was built around the same Pratt & Whitney R-2800 Double Wasp engine as the Thunderbolt, but for the U.S. Navy. Difficulties with carrier landings meant that the first aircraft were used by the United States Marine Corps from Henderson Field, Guadalcanal from February 12, 1943. In its first combat action, the following day over Kahili airfield two Corsairs and eight other aircraft were lost when attacked by 50 Mitsubishi A6M Zeros. This became known as the St Valentine's Day massacre. Despite this initiation the Corsair soon proved to be an effective fighter bomber, mostly flown by the Marine Corps, but also by the United States Navy, Fleet Air Arm and Royal New Zealand Air Force in the Pacific theater.
When the British Purchasing Commission invited James H. Kindelberger, President of North American Aviation, to assemble the Curtiss P-40 Warhawk in an underutilized plant, he promised a better fighter on the same timing. The resulting North American P-51 Mustang powered by a Packard-built Rolls-Royce Merlin engine became the outstanding long-range fighter of the war. When Lend-lease funding for the RAF Mustangs was exhausted, Kindleberger tried to interest the USAAC but no funds were available for a fighter; instead, the Mustang was fitted with dive brakes and emerged as the North American A-36 Apache, a dive bomber almost as fast as the Mustang itself. By April 1943 USAAF Apaches were in Morocco supporting Operation Torch, and they continued bombing trains and gun emplacements northwards through Italy.
Korean War
When Soviet-backed North Korea attacked South Korea on June 25, 1950, its forces quickly routed the South Korean army, which lacked tanks, anti-tank weapons and heavy artillery. The South's air force had 22 planes, none of which were fighters or jets. During a Soviet boycott of the United Nations, a vote was carried, without Soviet veto, to intervene in support of the South. Most readily available were U.S. and British Commonwealth forces occupying Japan and the Pacific fleets. The first arrivals were fighter-bombers, which helped to repulse the Northern attack on the vital port of Pusan, the last small territory held by the South. Some strategists felt that air and battleship strikes alone could halt the invasion.
USAF North American F-82 Twin Mustangs had the range to reach the front line from Japanese bases. The last piston-engined fighter produced in the U.S., it looked like two Mustangs bolted together, with two pilots in separate fuselages. Initially intended to escort bombers over Japan from remote Pacific island bases, hence its long range, it missed WWII and first saw action in Korea. Plain North American P-51 Mustangs of the Royal Australian Air Force soon also flew across from Japan.
Vought F4U Corsairs and Hawker Sea Furies from U.S., British and Australian carriers in the Yellow Sea, and later from Korean airfields, also attacked around the Pusan perimeter. The Sea Fury, a development of the Hawker Tempest with a Bristol Centaurus engine, was one of the fastest piston-engined aircraft ever built. Initially, United Nations air forces using piston-engined fighter-bombers and straight-wing jet fighters easily drove the North Koreans out of the sky, disrupting their logistics and hence the attack on Pusan.
All changed when the Soviet Air Force intervened on November 1 with swept-wing Mikoyan-Gurevich MiG-15s flown by Russian pilots. The planes had Korean markings and the pilots had been taught a few Korean words, in a thin pretence that the USSR was not fighting. The MiG-15 used captured German swept-wing technology and tooling, and copied British jet engines, 25 of which had been a gift arranged by Stafford Cripps, the President of the Board of Trade. Josef Stalin remarked, "What fool will sell us his secrets?" The MiG's engine, derived from the Rolls-Royce Nene, had twice the thrust of the jets of its main British and US opponents, which used the older Rolls-Royce Derwent design. Only the Navy's Grumman F9F Panther, which used a version of the Nene, could match the MiG-15, accounting for seven during November.
Daylight heavy bomber raids over North Korea ceased, and the Lockheed F-80 Shooting Star and its all-weather variant, the Lockheed F-94 Starfire, were shifted to bombing missions while the North American F-86 Sabre was rushed to Korea to combat the MiG-15s. There is much debate as to which was the better fighter. Recent research suggests a 13–10 advantage to the Sabre against Russian pilots, but the US pilots were mostly WWII veterans while the Russians were often "volunteers" with only a few hours aloft. The Australians converted from Mustangs to Gloster Meteor fighter-bombers, the first Allied jet fighter of WWII but no match for a MiG-15. It was pressed into combat, but after four were lost when the squadron was bounced by 40 MiG-15s, it reverted to ground attack, carrying 16 rockets. Meteors shot down six MiG-15s, but 30 were lost, mainly to ground fire. Both Corsairs and Sea Furies also shot down MiG-15s, but were vulnerable to the faster jet.
Post-war
Fighter-bombers became increasingly important in the 1950s and 1960s, as new jet engines dramatically improved the power of even the smallest fighter designs. Many aircraft initially designed as fighters or interceptors found themselves in the fighter-bomber role at some point in their careers. Notable among these is the Lockheed F-104 Starfighter, first designed as a high-performance day fighter and then adapted to the nuclear strike role for European use. Other U.S. examples include the North American F-100 Super Sabre and the McDonnell Douglas F-4 Phantom II, each of which was widely used during the Vietnam War. An example of a modern purpose-designed fighter-bomber is the Sukhoi Su-34.
White phosphorus munition

White phosphorus munitions are weapons that use one of the common allotropes of the chemical element phosphorus. White phosphorus is used in smoke, illumination, and incendiary munitions, and is commonly the burning element of tracer ammunition. Other common names for white phosphorus munitions include WP and the slang terms Willie Pete and Willie Peter, which are derived from William Peter, the World War II phonetic alphabet rendering of the letters WP. White phosphorus is pyrophoric (it is ignited by contact with air); burns fiercely; and can ignite cloth, fuel, ammunition, and other combustibles.
White phosphorus is a highly efficient smoke-producing agent, reacting with air to produce an immediate blanket of phosphorus pentoxide vapour. Smoke-producing white phosphorus munitions are very common, particularly as smoke grenades for infantry, as loads for defensive grenade launchers on tanks and other armoured vehicles, and in the ammunition allotment for artillery and mortars. These create smoke screens to mask friendly forces' movement, position, infrared signatures, and shooting positions. They are often called smoke/marker rounds because they are also used to mark points of interest, for example firing a light mortar round to designate a target for artillery spotters.
History
Early use
White phosphorus was used by Fenian (Irish nationalist) arsonists in the 19th century in a formulation that became known as "Fenian fire". The phosphorus would be in a solution of carbon disulfide; when the carbon disulfide evaporates, the phosphorus bursts into flames. The same formula was also used in arson in Australia.
World War I, the inter-war period and World War II
The British Army introduced the first factory-built white phosphorus grenades in late 1916 during the First World War. During the war, white phosphorus mortar bombs, shells, rockets, and grenades were used extensively by American, Commonwealth, and, to a lesser extent, Japanese forces, in both smoke-generating and antipersonnel roles. The Royal Air Force based in Iraq also used white phosphorus bombs in Anbar Province during the Iraqi revolt of 1920.
Among the many social groups protesting the war and conscription at the time, at least one, the Industrial Workers of the World in Australia, used Fenian fire.
In the interwar years, the US Army trained with white phosphorus, delivered by artillery shell and air bombardment.
In 1940, when the German invasion of Great Britain seemed imminent, the phosphorus firm of Albright and Wilson suggested that the British government use a material similar to Fenian fire in several expedient incendiary weapons. The only one fielded was the Grenade, No. 76 or Special Incendiary Phosphorus grenade, which consisted of a glass bottle filled with a mixture similar to Fenian fire, plus some latex. It came in two versions, one with a red cap intended to be thrown by hand, and a slightly stronger bottle with a green cap, intended to be launched from the Northover projector, a crude launcher using black powder as a propellant. These were improvised anti-tank weapons, hastily fielded in 1940 when the British were awaiting a potential German invasion after losing the bulk of their modern armaments in the Dunkirk evacuation.
At the start of the Normandy campaign, 20% of American 81 mm mortar ammunition consisted of M57 point-detonating bursting smoke rounds using WP filler. At least five American Medal of Honor citations mention their recipients using M15 white phosphorus hand grenades to clear enemy positions, and in the 1944 liberation of Cherbourg alone, a single US mortar battalion, the 87th, fired 11,899 white phosphorus rounds into the city. The US Army and Marines used M2 and M328 WP shells in mortars. White phosphorus was widely used by Allied soldiers for breaking up German attacks and creating havoc among enemy troop concentrations during the latter part of the war.
US Sherman tanks carried the M64, a 75mm white phosphorus round intended for screening and artillery spotting, but tank crews found it useful against German tanks such as the Panther that their APC ammunition could not penetrate at long range. Smoke from rounds fired directly at German tanks would be used to blind them, allowing the Shermans to close to a range where their armour-piercing rounds were effective. In addition, due to the turret ventilation systems sucking in fumes, German crews would sometimes be forced to abandon their vehicle: this proved particularly effective against inexperienced crews who, on seeing smoke inside the turret, would assume their tank had caught fire. Smoke was also used for "silhouetting" enemy vehicles, with rounds dropped behind them to produce a better contrast for gunnery.
Later 20th century uses
White phosphorus munitions were used extensively by US forces in Vietnam and by Russian forces in the First Chechen War and Second Chechen War. White phosphorus grenades were used by the US in Vietnam to destroy Viet Cong tunnel complexes as they would burn up all oxygen and suffocate the enemy soldiers sheltering inside. British soldiers also made extensive use of white phosphorus grenades during the Falklands War to clear out Argentine positions as the peaty soil they were constructed on tended to lessen the impact of fragmentation grenades.
Use by US forces in Iraq
In November 2004, during the Second Battle of Fallujah, Washington Post reporters embedded with Task Force 2-2, Regimental Combat Team 7, stated that they witnessed artillery guns firing white phosphorus projectiles, which "create a screen of fire that cannot be extinguished with water". Insurgents reported being attacked with a substance that melted their skin, a reaction consistent with white phosphorus burns. The same article also reported: "The corpses of the mujaheddin which we received were burned, and some corpses were melted." The March/April 2005 issue of an official Army publication called Field Artillery Magazine reported that "White phosphorus proved to be an effective and versatile munition and a potent psychological weapon against the insurgents in trench lines and spider holes. ... We fired 'shake and bake' missions at the insurgents using W.P. [white phosphorus] to flush them out and H.E. [high explosives] to take them out".
The documentary Fallujah, The Hidden Massacre, produced by RAI TV and released 8 November 2005, showed video and photos that they claimed to be of Fallujah combatants and also civilians, including women and children, who had died of burns caused by white phosphorus during the Second Battle of Fallujah.
On 15 November 2005, following denials to the press from the US ambassadors in London and Rome, the US Department of Defense confirmed that US forces had used white phosphorus as an incendiary weapon in Fallujah, in order to drive combatants out of dug-in positions. On 22 November 2005, the Iraqi government stated it would investigate the use of white phosphorus in the battle of Fallujah. On 30 November 2005, the BBC quoted US General Peter Pace saying "It [WP munitions] is not a chemical weapon. It is an incendiary. And it is well within the law of war to use those weapons as they're being used, for marking and for screening." Professor Paul Rogers from the University of Bradford department of peace and conflict studies said that white phosphorus would probably fall into the category of chemical weapons if it was used directly against people.
Use by Israeli forces in Lebanon
2006 Lebanon War
During the 2006 Lebanon War, Israel said that it had used phosphorus shells "against military targets in open ground" in Southern Lebanon. Israel said that its use of these munitions was permitted under international conventions.
However, President of Lebanon Émile Lahoud said that phosphorus shells were used against civilians. The first Lebanese official complaint about the use of phosphorus came from Information Minister Ghazi Aridi.
2023 Israel–Lebanon border clashes and 2024 invasion
Amnesty International and Human Rights Watch accused Israel of using white phosphorus artillery shells indiscriminately in its October 16 attack in Dhayra, Lebanon, which injured at least nine civilians, and said the attack was unlawful. Amnesty said it was investigating this and other potential violations of international humanitarian law by all parties in the region. The claim was corroborated by The Washington Post, which identified two white phosphorus shell casings made in the United States.
By March 6, the National Council for Scientific Research in Lebanon said 117 white phosphorus bombs had been dropped on southern Lebanon. Israel says it has been using the substance to create a smokescreen on the battlefield; however, it has been alleged that its use was an attempt by Israel to make the land uninhabitable in the future.
According to a confidential report prepared by the government of one of United Nations Interim Force in Lebanon's contributing countries that was reviewed by the Financial Times, on 13 October multiple white phosphorus munitions were fired within 100 metres of a UNIFIL base, injuring 15 peacekeepers, after an incident where Israeli Merkava tanks had broken into the base and stayed for 45 minutes.
Use by Israeli forces in Gaza
In its early statements regarding the Gaza War of 2008–2009, the Israeli military denied using WP entirely, saying "The IDF acts only in accordance with what is permitted by international law and does not use white phosphorus." However, numerous reports from human rights groups during the war indicated that WP shells were being used by Israeli forces in populated areas.
On 5 January 2009, The Times of London reported that telltale smoke associated with white phosphorus had been seen in the vicinity of Israeli shelling. On 12 January, it was reported that more than 50 patients in Nasser Hospital were being treated for phosphorus burns.
On 15 January, the headquarters of the United Nations Relief and Works Agency in Gaza City was struck by IDF white phosphorus artillery shells, setting fire to pallets of relief materials and igniting several large fuel storage tanks. Senior Israeli defense officials maintained that the shelling was in response to Israeli military personnel being fired upon by Hamas fighters in proximity to the UN headquarters, and that the shells were used for smoke. The soldiers who ordered the attack were later reprimanded for violating the IDF rules of engagement. The IDF further investigated improper use of WP in the conflict, particularly one incident in which 20 WP shells were fired into a built-up area of Beit Lahiya.
After the Israel Defense Forces had officially denied for months having used white phosphorus during the war, the Israeli government released a report in July 2009 that confirmed that the IDF had used white phosphorus in both exploding munitions and smoke projectiles. The report argues that the use of these munitions was limited to unpopulated areas for marking and signaling and not as an anti-personnel weapon. The Israeli government report further stated that smoke screening projectiles were the majority of the munitions containing white phosphorus employed by the IDF and that these were very effective in that role. The report states that at no time did IDF forces have the objective of inflicting any harm on the civilian population.
Head of the UN Fact Finding Mission Justice Richard Goldstone presented the report of the Mission to the Human Rights Council in Geneva on 29 September 2009. The Goldstone report accepted that white phosphorus is not illegal under international law but did find that the Israelis were "systematically reckless in determining its use in built-up areas". It also called for serious consideration to be given to the banning of its use in built-up areas. The Government of Israel issued an initial response rejecting the findings of the Goldstone report.
The 155mm WP artillery shells used by Israel are typically the American M825A1, a base-ejection shell which deploys an airbursting submunition canister. On detonation of the bursting charge, the canister deploys 116 quarter-circle wedges of felt impregnated with WP, producing a smokescreen lasting 5–10 minutes depending on weather conditions. These submunitions typically land in an elliptical pattern 125–250 meters in diameter, with the size of the effect area depending on the burst height, and produce a smokescreen 10 metres in height.
Afghanistan (2009)
There are confirmed cases of white phosphorus burns on bodies of civilians wounded during US–Taliban clashes near Bagram. The United States has accused Taliban militants of using white phosphorus weapons illegally on at least 44 occasions. On the other hand, in May 2009, Colonel Gregory Julian, a spokesman for General David McKiernan, the overall commander of US and NATO forces in Afghanistan, confirmed that Western military forces in Afghanistan use white phosphorus in order to illuminate targets or as an incendiary to destroy bunkers and enemy equipment. The Afghan government later launched an investigation into the use of white phosphorus munitions.
Syrian Civil War
The Syrian government, the United States, the Russian Federation, and Turkey reportedly deployed white phosphorus munitions via airstrikes and artillery on different occasions during the Syrian Civil War.
Second Nagorno-Karabakh War
During the Second Nagorno-Karabakh War, on 31 October 2020 the Ministry of Defence of the unrecognised Republic of Artsakh stated that the Azerbaijani side had used phosphorus weapons to burn forests near Shusha (Shushi). Atlantic Council's Digital Forensic Research Lab (DFRLab) found OSINT evidence supporting these claims.
The Azerbaijani authorities, in turn, accused the Armenian forces of using white phosphorus on civilian areas. On 20 November, the Prosecutor General's Office of Azerbaijan filed a lawsuit, accusing the Armenian Armed Forces of using phosphorus ammunition in Nagorno-Karabakh, as well as in Tartar District.
Russo-Ukrainian War (since 2014)
Regulation and application
Uses
White phosphorus ignites on contact with oxygen, releasing a large amount of smoke during combustion. Militaries can use this smoke curtain to mask troop movements. However, the chemical characteristics of the substance make phosphorus bombs especially dangerous: phosphorus burns at 800–2500 °C; it sticks to various surfaces, including skin and clothing; and the burning substance is difficult to extinguish. White phosphorus can cause deep burns down to the bone, and remnants of the substance in tissue can reignite after initial treatment. It is difficult for military doctors, usually limited in medical resources, to provide timely and full assistance to the victims. Even burn survivors can die from organ failure due to the toxicity of white phosphorus. In addition, fires caused by incendiary projectiles can destroy civilian buildings and property and damage crops and livestock. Humanitarian organizations such as Human Rights Watch are calling on governments to bring phosphorus warheads under the UN Convention on Certain Conventional Weapons.
Non-governmental international organizations have recorded the military use of white phosphorus in Syria, Afghanistan, the Gaza Strip, and other war zones. Militaries worldwide, including the US military, use white phosphorus for incendiary purposes.
International law
White phosphorus munitions are not banned under international law, but because of their incendiary effects their use is supposed to be tightly regulated. Because white phosphorus has legal uses, shells filled with it are not directly prohibited by international humanitarian law. Experts generally class such shells not as incendiary weapons but as masking munitions, since their main purpose is to create a smoke screen.
While in general white phosphorus is not subject to restriction, certain uses in weaponry are banned or restricted by general international laws: in particular, those related to incendiary devices. Article 1 of Protocol III of the Convention on Certain Conventional Weapons defines an incendiary weapon as "any weapon or munition which is primarily designed to set fire to objects or to cause burn injury to persons through the action of flame, heat, or combination thereof, produced by a chemical reaction of a substance delivered on the target". Article 2 of the same protocol prohibits the deliberate use of incendiary weapons against civilian targets (already forbidden by the Geneva Conventions), the use of air-delivered incendiary weapons against military targets in civilian areas, and the general use of other types of incendiary weapons against military targets located within "concentrations of civilians" without taking all possible means to minimise casualties. Incendiary phosphorus bombs may also not be used near civilians in a way that can lead to indiscriminate civilian casualties.
The convention also exempts certain categories of munitions from its definition of incendiary weapons: specifically, these are munitions which "may have incidental incendiary effects, such as illuminants, tracers, smoke or signalling systems" and those "designed to combine penetration, blast or fragmentation effects with an additional incendiary effect."
The use of incendiary and other flame weapons against matériel, including enemy military personnel, is not directly forbidden by any treaty. The United States Military mandates that incendiary weapons, where deployed, not be used "in such a way as to cause unnecessary suffering." The term "unnecessary suffering" is defined through use of a proportionality test, comparing the anticipated military advantage of the weapon's use to the amount of suffering potentially caused.
Chemical weaponry
Despite their danger, the Chemical Weapons Convention does not classify phosphorus bombs as chemical weapons. This convention is meant to prohibit weapons that are "dependent on the use of the toxic properties of chemicals as a method of warfare", and defines a "toxic chemical" as a substance "which through its chemical action on life processes can cause death, temporary incapacitation or permanent harm to humans or animals". An annex lists chemicals that are restricted under the convention, and WP is not listed in the Schedules of chemical weapons or precursors.
In a 2005 interview with RAI, Peter Kaiser, spokesman for the Organisation for the Prohibition of Chemical Weapons (an organisation overseeing the CWC and reporting directly to the UN General Assembly), discussed cases where use of WP would potentially fall under the auspices of the CWC.
Smoke-screening properties
Weight-for-weight, phosphorus is the most effective smoke-screening agent known, for two reasons: first, it absorbs most of the screening mass from the surrounding atmosphere, and second, the smoke particles form an aerosol, a mist of liquid droplets close to the ideal range of sizes for Mie scattering of visible light. This effect has been likened to three-dimensional textured privacy glass: the smoke cloud does not obstruct an image, but thoroughly scrambles it. The smoke also absorbs infrared radiation, allowing it to defeat thermal imaging systems.
When phosphorus burns in air, it first forms phosphorus pentoxide (which exists as tetraphosphorus decoxide except at very high temperatures):
P4 + 5 O2 → P4O10
However phosphorus pentoxide is extremely hygroscopic and quickly absorbs even minute traces of moisture to form liquid droplets of phosphoric acid:
P4O10 + 6 H2O → 4 H3PO4 (also forms polyphosphoric acids such as pyrophosphoric acid, H4P2O7)
Since an atom of phosphorus has an atomic mass of 31 but a molecule of phosphoric acid has a molecular mass of 98, the cloud is already 68% by mass derived from the atmosphere (i.e., 3.2 kilograms of smoke for every kilogram of WP); however, it may absorb more because phosphoric acid and its variants are hygroscopic. Given time, the droplets will continue to absorb more water, growing larger and more dilute until they reach equilibrium with the local water vapour pressure. In practice, the droplets quickly reach a range of sizes suitable for scattering visible light and then start to dissipate from wind or convection.
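The mass arithmetic above can be checked directly. A minimal sketch (using exact molar masses, which the text rounds to 31 and 98):

```python
# Worked check of the smoke-yield figures quoted in the text.
# Molar masses in g/mol; the text rounds these to 31 and 98.
M_P = 30.974       # phosphorus
M_H3PO4 = 97.994   # phosphoric acid

# Each phosphorus atom ends up in one H3PO4 molecule, so per kilogram
# of WP burned the cloud contains M_H3PO4 / M_P kilograms of acid.
smoke_per_kg_wp = M_H3PO4 / M_P           # ≈ 3.16 kg (text: "3.2 kg")
atmospheric_fraction = 1 - M_P / M_H3PO4  # ≈ 0.68 (text: "68% by mass")

print(f"{smoke_per_kg_wp:.2f} kg smoke per kg WP")
print(f"{atmospheric_fraction:.0%} of cloud mass drawn from the atmosphere")
```

As the text notes, this is a lower bound: the hygroscopic droplets keep absorbing water and grow heavier still.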
Because of the great weight efficiency of WP smoke, it is particularly suited for applications where weight is highly restricted, such as hand grenades and mortar bombs. An additional advantage for hand smoke grenades—which are more likely to be used in an emergency—is that the WP smoke clouds form in a fraction of a second. Because WP is also pyrophoric, most munitions of this type have a simple burster charge to split open the casing and spray fragments of WP through the air, where they ignite spontaneously and leave a trail of rapidly thickening smoke behind each particle. The appearance of this cloud forming is easily recognised; one sees a shower of burning particles spraying outward, followed closely by distinctive streamers of white smoke, which rapidly coalesce into a fluffy, very pure white cloud (unless illuminated by a coloured light source).
Various disadvantages of WP are discussed below, but one particular to smoke-screening is "pillaring". Because WP smoke is formed by fairly hot combustion, the gases in the cloud are hot and tend to rise. Consequently, the smoke screen tends to lift off the ground relatively quickly and form aerial "pillars" of smoke of little use for screening. Tactically this may be counteracted by using WP to establish a screen quickly, then following up with emission-type screening agents for a more persistent screen. Some countries have begun using red phosphorus instead. Red phosphorus ("RP") burns cooler than WP and eliminates a few other disadvantages as well, while offering exactly the same weight efficiency. Other approaches include WP-soaked felt pads (which also burn more slowly and pose a reduced fire risk) and PWP, or plasticised white phosphorus.
Plasticized white phosphorus (PWP)
White phosphorus, when dispersed by the bursting charge, tends to become too finely divided. The reaction then proceeds too fast, releases too much heat at once, and the smoke cloud rises up. After a series of experiments, in 1944 the NDRC Munitions Development Laboratory at the University of Illinois developed a plasticization method. White phosphorus granules, about the size of grains of sand, are coated with GR-S (Government Rubber-Styrene) rubber gelled with xylene. The resulting rubbery mass does not atomize so readily; it breaks up into pieces several millimetres in size and burns for several minutes, reducing the pillaring. However, the incendiary effect is also reduced, although the larger pieces are more effective against enemy troops. One disadvantage of PWP is the tendency of the phosphorus to separate from the rubber matrix when stored in hot weather.
Physiological effects
In addition to direct injuries caused by fragments of their casings, white phosphorus munitions can cause injuries in two main ways: burn injuries and vapour inhalation.
Burning
In munitions, white phosphorus burns readily with flames of 800 °C (1,472 °F). Incandescent particles from weapons using powdered white phosphorus as their payload produce extensive partial- and full-thickness burns, as will any attempt to handle burning submunitions without protective equipment. Phosphorus burns carry an increased risk of mortality due to the absorption of phosphorus into the body through the burned area with prolonged contact, which can result in liver, heart and kidney damage, and in some cases multiple organ failure. White phosphorus particles continue to burn until completely consumed or starved of oxygen. In the case of weapons using felt-impregnated submunitions, incomplete combustion may occur resulting in up to 15% of the WP content remaining unburned. Such submunitions can prove hazardous as they are capable of spontaneous re-ignition if crushed by personnel or vehicles. In some cases, injury is limited to areas of exposed skin because the smaller WP particles do not burn completely through personal clothing before being consumed.
Due to the pyrophoric nature of WP, penetrating injuries are immediately treated by smothering the wound using water, damp cloth or mud, isolating it from oxygen until fragments can be removed: military forces will typically do so using a bayonet or knife where able. Bicarbonate solution is applied to the wound to neutralise any build-up of phosphoric acid, followed by removal of any remaining visible fragments: these are easily observed as they are luminescent in dark surroundings. Surgical debridement around the wound is used to avoid fragments too small to detect causing later systemic failure, with further treatment proceeding as with a thermal burn.
Smoke inhalation
Burning white phosphorus produces a hot, dense, white smoke consisting mostly of phosphorus pentoxide in aerosol form. Field concentrations are usually harmless, but at high concentrations the smoke can cause temporary irritation to the eyes, mucous membranes of the nose, and respiratory tract. The smoke is more dangerous in enclosed spaces, where it can cause asphyxiation and permanent respiratory damage. The US Agency for Toxic Substances and Disease Registry has set an acute inhalation Minimum Risk Level (MRL) for white phosphorus smoke of 0.02 mg/m3, the same as fuel-oil fumes. By contrast, the chemical weapon mustard gas is 30 times more potent: 0.0007 mg/m3. The agency cautioned that studies used to determine the MRL were based on extrapolations from animal testing and may not accurately reflect the health risk to humans.
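The "30 times more potent" comparison follows directly from the two MRL values quoted above (illustrative arithmetic only; the limits themselves are taken from the text, not computed):

```python
# Ratio of the two acute inhalation limits quoted in the text (mg/m3).
wp_smoke_mrl = 0.02    # WP smoke (same as fuel-oil fumes)
mustard_mrl = 0.0007   # mustard gas

ratio = wp_smoke_mrl / mustard_mrl
print(f"mustard gas limit is ~{ratio:.0f} times lower")  # ≈ 29, i.e. roughly 30x
```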
Accretion (astrophysics)

In astrophysics, accretion is the accumulation of particles into a massive object by gravitationally attracting more matter, typically gaseous matter, into an accretion disk. Most astronomical objects, such as galaxies, stars, and planets, are formed by accretion processes.
Overview
The accretion model that Earth and the other terrestrial planets formed from meteoric material was proposed in 1944 by Otto Schmidt, followed by the protoplanet theory of William McCrea (1960) and finally the capture theory of Michael Woolfson. In 1978, Andrew Prentice resurrected the initial Laplacian ideas about planet formation and developed the modern Laplacian theory. None of these models proved completely successful, and many of the proposed theories were descriptive.
The 1944 accretion model by Otto Schmidt was further developed in a quantitative way in 1969 by Viktor Safronov. He calculated, in detail, the different stages of terrestrial planet formation. Since then, the model has been further developed using intensive numerical simulations to study planetesimal accumulation. It is now accepted that stars form by the gravitational collapse of interstellar gas. Prior to collapse, this gas is mostly in the form of molecular clouds, such as the Orion Nebula. As the cloud collapses, losing potential energy, it heats up, gaining kinetic energy, and the conservation of angular momentum ensures that the cloud forms a flattened disk—the accretion disk.
Accretion of galaxies
A few hundred thousand years after the Big Bang, the Universe cooled to the point where atoms could form. As the Universe continued to expand and cool, the atoms lost enough kinetic energy, and dark matter coalesced sufficiently, to form protogalaxies. As further accretion occurred, galaxies formed. Indirect evidence is widespread. Galaxies grow through mergers and smooth gas accretion. Accretion also occurs inside galaxies, forming stars.
Accretion of stars
Stars are thought to form inside giant clouds of cold molecular hydrogen—giant molecular clouds of roughly and in diameter. Over millions of years, giant molecular clouds are prone to collapse and fragmentation. These fragments then form small, dense cores, which in turn collapse into stars. The cores range in mass from a fraction to several times that of the Sun and are called protostellar (protosolar) nebulae. They possess diameters of and a particle number density of roughly . Compare this with the particle number density of air at sea level—.
The initial collapse of a solar-mass protostellar nebula takes around 100,000 years. Every nebula begins with a certain amount of angular momentum. Gas in the central part of the nebula, with relatively low angular momentum, undergoes fast compression and forms a hot hydrostatic (non-contracting) core containing a small fraction of the mass of the original nebula. This core forms the seed of what will become a star. As the collapse continues, conservation of angular momentum dictates that the rotation of the infalling envelope accelerates, which eventually forms a disk.
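The role of angular momentum here can be made concrete with a standard textbook estimate (a generic sketch, not specific to any model cited above): a gas parcel with specific angular momentum j infalling toward a central mass M cannot settle closer than its centrifugal radius.

```latex
j = r\,v_\phi = \mathrm{const.}
\qquad
\frac{v_\phi^2}{r_c} = \frac{GM}{r_c^2}
\;\Longrightarrow\;
r_c = \frac{j^2}{GM}
```

Material with low j falls nearly freely onto the core, while higher-j material stalls near r_c in the equatorial plane, which is why the envelope flattens into a disk rather than collapsing spherically.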
As the infall of material from the disk continues, the envelope eventually becomes thin and transparent and the young stellar object (YSO) becomes observable, initially in far-infrared light and later in the visible. Around this time the protostar begins to fuse deuterium. If the protostar is sufficiently massive (above ), hydrogen fusion follows. Otherwise, if its mass is too low, the object becomes a brown dwarf. This birth of a new star occurs approximately 100,000 years after the collapse begins. Objects at this stage are known as Class I protostars, which are also called young T Tauri stars, evolved protostars, or young stellar objects. By this time, the forming star has already accreted much of its mass; the total mass of the disk and remaining envelope does not exceed 10–20% of the mass of the central YSO.
At the next stage, the envelope completely disappears, having been gathered up by the disk, and the protostar becomes a classical T Tauri star. Classical T Tauri stars have accretion disks and continue to accrete hot gas, which manifests itself as strong emission lines in their spectrum; weakly lined T Tauri stars, into which they evolve after about 1 million years, do not possess accretion disks. The mass of the disk around a classical T Tauri star is about 1–3% of the stellar mass, and it is accreted at a rate of 10−7 to per year. A pair of bipolar jets is usually present as well. The accretion explains all the peculiar properties of classical T Tauri stars: strong flux in the emission lines (up to 100% of the intrinsic luminosity of the star), magnetic activity, photometric variability and jets. The emission lines actually form as the accreted gas hits the "surface" of the star, which happens around its magnetic poles. The jets are byproducts of accretion: they carry away excess angular momentum. The classical T Tauri stage lasts about 10 million years (there are only a few examples of so-called Peter Pan disks, where accretion persists for much longer, sometimes more than 40 million years). The disk eventually disappears due to accretion onto the central star, planet formation, ejection by jets, and photoevaporation by ultraviolet radiation from the central star and nearby stars. As a result, the young star becomes a weakly lined T Tauri star, which, over hundreds of millions of years, evolves into an ordinary Sun-like star, depending on its initial mass.
Accretion of planets
Self-accretion of cosmic dust accelerates the growth of the particles into boulder-sized planetesimals. The more massive planetesimals accrete some smaller ones, while others shatter in collisions. Accretion disks are common around smaller stars, stellar remnants in a close binary, or black holes surrounded by material (such as those at the centers of galaxies). Some dynamics in the disk, such as dynamical friction, are necessary to allow orbiting gas to lose angular momentum and fall onto the central massive object. Occasionally, this can result in stellar surface fusion (see Bondi accretion).
In the formation of terrestrial planets or planetary cores, several stages can be considered. First, when gas and dust grains collide, they agglomerate by microphysical processes like van der Waals forces and electromagnetic forces, forming micrometer-sized particles. During this stage, accumulation mechanisms are largely non-gravitational in nature. However, planetesimal formation in the centimeter-to-meter range is not well understood, and no convincing explanation is offered as to why such grains would accumulate rather than simply rebound. In particular, it is still not clear how these objects grow to become sized planetesimals; this problem is known as the "meter size barrier": as dust particles grow by coagulation, they acquire increasingly large relative velocities with respect to other particles in their vicinity, as well as a systematic inward drift velocity, which leads to destructive collisions and thereby limits the growth of the aggregates to some maximum size. Ward (1996) suggests that when slow-moving grains collide, the very low, yet non-zero, gravity of the colliding grains impedes their escape. Grain fragmentation is also thought to play an important role in replenishing small grains and keeping the disk thick, as well as in maintaining a relatively high abundance of solids of all sizes.
A number of mechanisms have been proposed for crossing the 'meter-sized' barrier. Local concentrations of pebbles may form, which then gravitationally collapse into planetesimals the size of large asteroids. These concentrations can occur passively due to the structure of the gas disk, for example, between eddies, at pressure bumps, at the edge of a gap created by a giant planet, or at the boundaries of turbulent regions of the disk. Or, the particles may take an active role in their concentration via a feedback mechanism referred to as a streaming instability. In a streaming instability the interaction between the solids and the gas in the protoplanetary disk results in the growth of local concentrations, as new particles accumulate in the wake of small concentrations, causing them to grow into massive filaments. Alternatively, if the grains that form due to the agglomeration of dust are highly porous their growth may continue until they become large enough to collapse due to their own gravity. The low density of these objects allows them to remain strongly coupled with the gas, thereby avoiding high velocity collisions which could result in their erosion or fragmentation.
Grains eventually stick together to form mountain-size (or larger) bodies called planetesimals. Collisions and gravitational interactions between planetesimals combine to produce Moon-size planetary embryos (protoplanets) over roughly 0.1–1 million years. Finally, the planetary embryos collide to form planets over 10–100 million years. The planetesimals are massive enough that mutual gravitational interactions must be taken into account when computing their evolution. Growth is aided by orbital decay of smaller bodies due to gas drag, which prevents them from being stranded between the orbits of the embryos. Further collisions and accumulation lead to terrestrial planets or the cores of giant planets.
If the planetesimals formed via the gravitational collapse of local concentrations of pebbles, their growth into planetary embryos and the cores of giant planets is dominated by the further accretion of pebbles. Pebble accretion is aided by the gas drag felt by objects as they accelerate toward a massive body. Gas drag slows the pebbles below the escape velocity of the massive body, causing them to spiral toward it and be accreted by it. Pebble accretion may accelerate the formation of planets by a factor of 1000 compared to the accretion of planetesimals, allowing giant planets to form before the dissipation of the gas disk. However, core growth via pebble accretion appears incompatible with the final masses and compositions of Uranus and Neptune. Direct calculations indicate that, in a typical protoplanetary disk, the formation time of a giant planet via pebble accretion is comparable to the formation times resulting from planetesimal accretion.
The formation of terrestrial planets differs from that of giant gas planets, also called Jovian planets. The particles that make up the terrestrial planets are made from metal and rock that condensed in the inner Solar System. However, Jovian planets began as large, icy planetesimals, which then captured hydrogen and helium gas from the solar nebula. Differentiation between these two classes of planetesimals arises due to the frost line of the solar nebula.
Accretion of asteroids
Meteorites contain a record of accretion and impacts during all stages of asteroid origin and evolution; however, the mechanism of asteroid accretion and growth is not well understood. Evidence suggests the main growth of asteroids can result from gas-assisted accretion of chondrules, which are millimeter-sized spherules that form as molten (or partially molten) droplets in space before being accreted to their parent asteroids. In the inner Solar System, chondrules appear to have been crucial for initiating accretion. The tiny mass of asteroids may be partly due to inefficient chondrule formation beyond 2 AU, or less-efficient delivery of chondrules from near the protostar. Also, impacts controlled the formation and destruction of asteroids, and are thought to be a major factor in their geological evolution.
Chondrules, metal grains, and other components likely formed in the solar nebula. These accreted together to form parent asteroids. Some of these bodies subsequently melted, forming metallic cores and olivine-rich mantles; others were aqueously altered. After the asteroids had cooled, they were eroded by impacts for 4.5 billion years, or disrupted.
For accretion to occur, impact velocities must be less than about twice the escape velocity, which is about for a radius asteroid. Simple models for accretion in the asteroid belt generally assume micrometer-sized dust grains sticking together and settling to the midplane of the nebula to form a dense layer of dust, which, because of gravitational forces, was converted into a disk of kilometer-sized planetesimals. However, several arguments suggest that asteroids may not have accreted this way.
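For scale, the escape velocity of a uniform-density body follows from v_esc = sqrt(2GM/R), which for a sphere grows linearly with radius. A minimal sketch, using a hypothetical 100 km-radius asteroid with a rock-like density of 2000 kg/m³ (illustrative assumed values, not the figures elided in the text above):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m, density_kg_m3):
    """Escape velocity of a uniform-density sphere: v = sqrt(2*G*M/R)."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    return math.sqrt(2 * G * mass / radius_m)

# Assumed values: 100 km radius, density 2000 kg/m^3
v_esc = escape_velocity(100e3, 2000)
print(f"escape velocity ≈ {v_esc:.0f} m/s")
print(f"accretion threshold (~2 × v_esc) ≈ {2 * v_esc:.0f} m/s")
```

Under these assumptions the escape velocity is on the order of 100 m/s, so accretion would require impacts slower than roughly twice that.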
Accretion of comets
Comets, or their precursors, formed in the outer Solar System, possibly millions of years before planet formation. How and when comets formed is debated, with distinct implications for Solar System formation, dynamics, and geology. Three-dimensional computer simulations indicate the major structural features observed on cometary nuclei can be explained by pairwise low velocity accretion of weak cometesimals. The currently favored formation mechanism is that of the nebular hypothesis, which states that comets are probably a remnant of the original planetesimal "building blocks" from which the planets grew.
Astronomers think that comets originate in both the Oort cloud and the scattered disk. The scattered disk was created when Neptune migrated outward into the proto-Kuiper belt, which at the time was much closer to the Sun, and left in its wake a population of dynamically stable objects that could never be affected by its orbit (the Kuiper belt proper), and a population whose perihelia are close enough that Neptune can still disturb them as it travels around the Sun (the scattered disk). Because the scattered disk is dynamically active and the Kuiper belt relatively dynamically stable, the scattered disk is now seen as the most likely point of origin for periodic comets. The classic Oort cloud theory states that the Oort cloud, a sphere measuring about in radius, formed at the same time as the solar nebula and occasionally releases comets into the inner Solar System as a giant planet or star passes nearby and causes gravitational disruptions. Examples of such comet clouds may already have been seen in the Helix Nebula.
The Rosetta mission to comet 67P/Churyumov–Gerasimenko determined in 2015 that when the Sun's heat penetrates the surface, it triggers evaporation (sublimation) of buried ice. While some of the resulting water vapour may escape from the nucleus, 80% of it recondenses in layers beneath the surface. This observation implies that the thin ice-rich layers exposed close to the surface may be a consequence of cometary activity and evolution, and that global layering does not necessarily occur early in the comet's formation history. While most scientists had thought the evidence indicated that comet nuclei are processed rubble piles of smaller ice planetesimals of a previous generation, the Rosetta mission confirmed the idea that comets are "rubble piles" of disparate material. Comets appear to have formed as roughly 100 km bodies, which were then overwhelmingly ground down and re-accreted into their present states.
| Physical sciences | Celestial mechanics | Astronomy |
1661334 | https://en.wikipedia.org/wiki/Homo%20rhodesiensis | Homo rhodesiensis | Homo rhodesiensis is the species name proposed by Arthur Smith Woodward (1921) to classify Kabwe 1 (the "Kabwe skull" or "Broken Hill skull", also "Rhodesian Man"), a Middle Stone Age fossil recovered from Broken Hill mine in Kabwe, Northern Rhodesia (now Zambia). In 2020, the skull was dated to 324,000 to 274,000 years ago. Other similar older specimens also exist.
H. rhodesiensis is now widely considered a synonym of H. heidelbergensis. Other designations such as Homo sapiens arcaicus and Homo sapiens rhodesiensis have also been proposed.
Fossils
A number of morphologically comparable fossil remains came to light in East Africa (Bodo, Ndutu, Eyasi, Ileret) and North Africa (Salé, Rabat, Dar-es-Soltane, Djebel Irhoud, Sidi Abderrahman, Tighenif) during the 20th century.
Kabwe 1, also called the Broken Hill skull, or "Rhodesian Man", was assigned by Arthur Smith Woodward in 1921 as the type specimen for Homo rhodesiensis; most contemporary scientists forego the taxon "rhodesiensis" altogether and assign it to Homo heidelbergensis. The cranium was discovered in Broken Hill lead mine in Mutwe Wa Nsofu Area of Northern Rhodesia (now Kabwe, Zambia) on June 17, 1921 by two miners. In addition to the cranium, an upper jaw from another individual, a sacrum, a tibia, and two femur fragments were also found.
Bodo cranium: The 600,000 year old fossil was found in 1976 by members of an expedition led by Jon Kalb at Bodo D'ar in the Awash River valley of Ethiopia. Although the skull is most similar to those of Kabwe, Woodward's nomenclature was discontinued and its discoverers attributed it to H. heidelbergensis. It has features that represent a transition between Homo ergaster/erectus and Homo sapiens.
Ndutu cranium, "the hominid from Lake Ndutu" in northern Tanzania, around 600–500,000 years old or 400,000 years old. In 1976 R. J. Clarke classified it as Homo erectus, and it has generally been viewed that way, although points of similarity to H. sapiens have also been recognized. After comparative studies with similar finds in Africa, allocation to an African subspecies of H. sapiens was considered most appropriate by Phillip Rightmire. An indirect cranial capacity estimate suggests 1100 ml. Its supratoral sulcus morphology and the presence of a protuberance, as suggested by Rightmire, "give the Ndutu occiput an appearance which is also unlike that of Homo erectus". And in a 1989 publication Clarke concluded: "It is assigned to archaic Homo sapiens on the basis of its expanded parietal and occipital regions of the brain". But Stringer (1986) pointed out that a thickened iliac pillar is typical for Homo erectus. In 2016, Chris Stringer classified the cranium as belonging to Homo heidelbergensis/Homo rhodesiensis (a species considered to be intermediate between Homo erectus and Homo sapiens) rather than as early H. sapiens, but considers it to display a "more sapiens-like zygomaxillary morphology" than certain other examples of Homo rhodesiensis.
The Saldanha cranium found in 1953 in South Africa, and estimated at around 500,000 years old, was subject to at least three taxonomic revisions from 1955 to 1996.
Bodo cranium
The Bodo cranium is a fossil of an extinct type of hominin species. It was found by members of an expedition led by Jon Kalb in 1976. The Rift Valley Research Mission conducted a number of surveys that led to the findings of Acheulean tools and animal fossils, as well as the Bodo Cranium. The initial discovery was by Alemayhew Asfaw and Charles Smart, who found a lower face. Two weeks later, Paul Whitehead and Craig Wood found the upper portion of the face. Pieces of the cranium were discovered along the surface of one of the dry branches of the Awash River in Ethiopia. The cranium, artifacts, and other animal fossils were found over a relatively large area of medium sand, and only a few of the tools were found near the cranium. The skull is 600,000 years old.
Observation
This specimen has an unusually large cranial capacity for its age, estimated at around 1,250 cc (in the range of ~1,200–1,325 cc), within the lower range of modern Homo sapiens.
The cranium includes the face, much of the frontal bone, parts of the midvault and the base anterior to the foramen magnum. The cranial length, width and height are 21 cm (8.3 in), 15.87 cm (6.2 in) and 19.05 cm (7.5 in) respectively. Researchers have suggested that Bodo butchered animals, because Acheulean hand axes and cleavers were found at the site along with animal bones. Cut marks on the Bodo cranium provide the earliest evidence of flesh being removed with a stone tool immediately after an individual's death. The symmetrical cut marks, with specific patterns and directionality, serve as strong evidence that de-fleshing was done purposefully for mortuary practices and represent the earliest evidence of non-utilitarian mortuary practices. The cut marks were located "laterally among the maxilla", causing speculation among researchers that the specific reason for de-fleshing was to remove the mandible.
Morphology
The front of the Bodo cranium is very broad and supports large supraorbital structures. The supraorbital torus projects and is heavily constructed, especially in the central parts of the cranium. The glabella is rounded and projects strongly. Like Homo erectus, the braincase is low and archaic in appearance. The vault bones are also thick, as in Homo erectus specimens. Due to the large cranial capacity, there is a wider midvault, which shows signs of parietal bossing as well as a high contour of the temporal squama. The parietal length cannot be accurately determined because that section of the specimen is incomplete. Though the mastoid is missing, insights regarding the specimen can be gained from fragments of the individual collected at the site in 1981. The cranium's parietal walls expand relative to the bitemporal width in a way that is characteristic of modern humans. The squamosal suture has a high arch, which is present in modern human crania as well.
Evolutionary significance
The cranium has an unusual appearance, which has led to debates over its taxonomy. It displays both primitive and derived features, such as a cranial capacity more similar to modern humans and a projecting supraorbital torus more like Homo erectus. Bodo and other Mid-Pleistocene hominin fossils appear to represent a lineage between Homo erectus and anatomically modern humans, although its exact location in the human evolutionary tree is still uncertain. Due to the similarities to both Homo erectus and modern humans, it has been postulated that the Bodo cranium, as well as other members of Homo heidelbergensis were part of a group of hominins that evolved distinct from Homo erectus early in the Middle Pleistocene. Despite the similarities, there is still a question of where exactly Homo heidelbergensis evolved. The increased encephalization seen in fossils like the Bodo cranium is thought to have been a driving force in the speciation of anatomically modern humans.
Similarities between the Bodo cranium and Kabwe cranium
The Bodo and Kabwe crania share a number of similarities. Both have cranial capacities similar to, but at the low end of, the range of modern humans (1,250 cc vs. 1,230 cc). Both have a very large supraorbital torus. Together, these two features suggest that they are a link between Homo erectus and Homo sapiens. Their morphology and taxonomy are most similar to other specimens of Homo heidelbergensis. Both the Bodo and Kabwe specimens can be described as archaic because they retain certain features in common with Homo erectus. However, both exhibit important differences from Homo erectus in their anatomy, such as the contour of their parietals, the shape of their temporal bones, the cranial base, and the morphology of the nose and palate. Alongside the many similarities, there are a few differences between the specimens: the entire brow of the Bodo cranium, particularly the lateral segments, is less thick than that of the Kabwe specimen.
"Homo bodoensis"
In 2021, Canadian anthropologist Mirjana Roksandic and colleagues recommended the complete dissolution of H. heidelbergensis and "H. rhodesiensis", as the name rhodesiensis honours English diamond magnate Cecil Rhodes who disenfranchised the black population in southern Africa. They classified all European H. heidelbergensis as H. neanderthalensis, and synonymised H. rhodesiensis with a new species they named "H. bodoensis" which includes all African specimens, and potentially some from the Levant and the Balkans which have no Neanderthal-derived traits (namely Ceprano, Mala Balanica, HaZore'a and Nadaouiyeh Aïn Askar). "H. bodoensis" is supposed to represent the immediate ancestor of modern humans, but does not include the LCA of modern humans and Neanderthals. They suggested the confusing morphology of the Middle Pleistocene was caused by periodic "H. bodoensis" migration events into Europe following population collapses after glacial cycles, interbreeding with surviving indigenous populations. Their taxonomic recommendations were rejected by Stringer and others as they failed to explain how exactly their proposals would resolve anything, in addition to violating nomenclatural rules.
| Biology and health sciences | Homo | Biology |
1661402 | https://en.wikipedia.org/wiki/Australopithecus%20anamensis | Australopithecus anamensis | Australopithecus anamensis is a hominin species that lived approximately between 4.3 and 3.8 million years ago, during the Pliocene epoch, and is the oldest known Australopithecus species.
Nearly one hundred fossil specimens of A. anamensis are known from Kenya and Ethiopia, representing over twenty individuals. The first fossils of A. anamensis discovered are dated to between 3.8 and 4.2 million years ago and were found at Kanapoi and Allia Bay in northern Kenya.
It is usually accepted that A. afarensis emerged within this lineage. However, A. anamensis and A. afarensis appear to have lived side by side for at least some period of time, and it is not fully settled whether the lineage that led to extant humans emerged in A. afarensis, or directly in A. anamensis.
Fossil evidence indicates that Australopithecus anamensis is the earliest hominin species in the Turkana Basin, but it likely co-existed with A. afarensis towards the end of its existence. A. anamensis and A. afarensis may be treated as a single grouping.
Preliminary analysis of the sole upper cranial fossil indicates A. anamensis had a smaller cranial capacity (estimated 365–370 cc) than A. afarensis.
Discovery
The first fossilized specimen of the species, although not recognized as such at the time, was a single fragment of humerus (arm bone) found in Pliocene strata in the Kanapoi region of West Lake Turkana by a Harvard University research team in 1965. Bryan Patterson and William W. Howells's initial paper on the bone was published in Science in 1967; their initial analysis suggested an Australopithecus specimen and an age of 2.5 million years. Patterson and colleagues subsequently revised their estimation of the specimen's age to 4.0–4.5 mya based on faunal correlation data.
In 1994, the London-born Kenyan paleoanthropologist Meave Leakey and archaeologist Alan Walker excavated the Allia Bay site and uncovered several additional fragments of the hominid, including one complete lower jaw bone which closely resembles that of a common chimpanzee (Pan troglodytes) but whose teeth bear a greater resemblance to those of a human. Based on the limited postcranial evidence available, A. anamensis appears to have been habitually bipedal, although it retained some primitive features of its upper limbs.
In 1995, Meave Leakey and her associates, taking note of differences between Australopithecus afarensis and the new finds, assigned them to a new species, A. anamensis, deriving its name from the Turkana word anam, meaning "lake".
Although the excavation team did not find hips, feet or legs, Meave Leakey believes that Australopithecus anamensis often climbed trees. Tree climbing was one behavior retained by early hominins until the appearance of the first Homo species about 2.5 million years ago. A. anamensis shares many traits with Australopithecus afarensis and may well be its direct predecessor. Fossil records for A. anamensis have been dated to between 4.2 and 3.9 million years ago, with findings in the 2000s from stratigraphic sequences dating to about 4.1–4.2 million years ago. Specimens have been found between two layers of volcanic ash, dated to 4.17 and 4.12 million years, coincidentally when A. afarensis appears in the fossil record.
The fossils (twenty-one in total) include upper and lower jaws, cranial fragments, and the upper and lower parts of a leg bone (tibia). In addition, the aforementioned fragment of humerus found in 1965 at the same site at Kanapoi has now been assigned to this species.
In 2006, a new A. anamensis find was officially announced, extending the range of A. anamensis into northeast Ethiopia. Specifically, one site known as Asa Issie provided 30 A. anamensis fossils. These new fossils, sampled from a woodland context, include the largest hominid canine tooth yet recovered and the earliest Australopithecus femur. The find was in an area known as Middle Awash, home to several other more modern Australopithecus finds and only six miles (9.7 kilometers) away from the discovery site of Ardipithecus ramidus, the most modern species of Ardipithecus yet discovered. Ardipithecus was a more primitive hominid, considered the next known step below Australopithecus on the evolutionary tree. The A. anamensis find is dated to about 4.2 million years ago, the Ar. ramidus find to 4.4 million years ago, placing only 200,000 years between the two species and filling in yet another blank in the pre-Australopithecus hominid evolutionary timeline.
In 2010, journal articles by Yohannes Haile-Selassie and others described the discovery of around 90 fossil specimens, dated to 3.6–3.8 million years ago (mya), in the Afar area of Ethiopia, filling in the time gap between A. anamensis and Australopithecus afarensis and showing a number of features of both. This supported the idea (proposed for instance by Kimbel et al. in 2006) that A. anamensis and A. afarensis were in fact one evolving species (i.e. a chronospecies resulting from anagenesis). In August 2019, however, scientists from the same Haile-Selassie team announced the first discovery of a nearly intact A. anamensis skull in Ethiopia, dated to 3.8 mya. This discovery also indicated that an earlier forehead bone fossil from 3.9 mya was A. afarensis, and therefore that the two species overlapped and could not be a chronospecies (noting that this does not prevent A. afarensis being descended from A. anamensis, but only from part of the A. anamensis population). The skull itself was found by Afar herder Ali Bereino in 2016. Other scientists (e.g. Alemseged, Kimbel, Ward, White) cautioned that one forehead bone fossil, which they viewed as not conclusively A. afarensis, should not yet be taken as disproving the possibility of anagenesis.
The nearly intact A. anamensis skull announced in August 2019, designated MRD-VP-1/1 and dated to 3.8 million years ago, is important in supplementing the evolutionary lineage of hominins. The skull has a unique combination of derived and ancestral characteristics. The cranium was determined to be older than A. afarensis because its cranial capacity is much smaller and its face is very prognathic, both of which indicate that it is earlier than A. afarensis. Known as the MRD cranium, it is that of a male at an "advanced developmental age", as determined from the worn-down post-canine teeth. The teeth show mesiodistal elongation, which differs from A. afarensis. Similar to other australopiths, however, it has a narrow upper face with no forehead and a large mid-face with broad zygomatic bones. Before this discovery, it was widely believed that Australopithecus anamensis and Australopithecus afarensis evolved one right after the other in a single lineage. The discovery of MRD, however, suggests that A. afarensis did not result from anagenesis, and that the two hominin species lived side by side for at least 100,000 years.
Environment
Australopithecus anamensis was found in Kenya, specifically at Allia Bay, East Turkana. Analysis of stable isotope data suggests that its environment had more closed woodland canopies around Lake Turkana than are present today. The greatest density of woodland at Allia Bay was along the ancestral Omo River, with more open savanna believed to lie in the basin margins or uplands. The environment at Allia Bay is also thought to have been much wetter than today. It is possible, though not definitive, that nut- or seed-bearing trees were present at Allia Bay; more research is needed.
Diet
Studies of the microwear on Australopithecus anamensis molar fossils show a pattern of long striations. This pattern is similar to the microwear on the molars of gorillas, suggesting that Australopithecus anamensis had a diet similar to that of the modern gorilla.
The microwear patterns are consistent on all Australopithecus anamensis molar fossils regardless of location or time. This shows that their diet largely remained the same no matter what their environment.
The earliest dietary isotope evidence among Turkana Basin hominin species comes from Australopithecus anamensis. This evidence suggests that their diet consisted primarily of C3 resources, possibly with a small amount of C4-derived resources. Within the following 1.99- to 1.67-Ma time period, at least two distinctive hominin taxa shifted to a higher level of C4 resource consumption. At this point, there is no known cause for this shift in diet. This research does not by itself indicate a plant-based diet, because the isotopes can also be ingested by eating animals and insects that fed on C3 and C4 resources.
A. anamensis had thick, long, and narrow jaws with the side teeth arranged in parallel rows. The palate, tooth rows, and other characteristics of A. anamensis dentition suggest that they were omnivores whose diets relied heavily on fruit, similar to chimpanzees. These characteristics are thought to have been inherited from Ar. ramidus, which preceded A. anamensis. Evidence of a dietary shift toward harder foods was also found, indicated by thicker tooth enamel and more pronounced molar crowns.
Relation to other hominin species
Australopithecus anamensis is the intermediate species between Ardipithecus ramidus and Australopithecus afarensis and has multiple traits shared with humans and other apes. Fossil studies of the wrist morphology of A. anamensis have suggested knuckle-walking, a derived trait shared with other African apes. The A. anamensis hand exhibits robust phalanges and metacarpals, and long middle phalanges. These characteristics show that A. anamensis likely engaged in arboreal living but was largely bipedal, although not in a way identical to Homo.
All Australopithecus were bipedal, small-brained, and had large teeth. A. anamensis is often confused with Australopithecus afarensis due to their similar bone structure and their habitation of woodland areas. These similarities include thick tooth enamel, which is a shared derived trait of all Australopithecus and shared with most Miocene hominoids. Tooth size variability in A. anamensis suggests that there was significant body size variation. In relation to their diet, A. anamensis has similarities with their predecessor Ardipithecus ramidus. A. anamensis sometimes had much larger canines than later Australopithecus species. A. anamensis and A. afarensis have similarities in the humerus and the tibia. They both have human-like features and matching sizes. It has also been found that the bodies of A. anamensis are somewhat larger than those of A. afarensis. Based on additional afarensis collections from the Hadar, Ethiopia site, the A. anamensis radius is similar to that of afarensis in the lunate and scaphoid surfaces. Additional findings suggest that A. anamensis have long arms compared to modern humans.
Physical characteristics
Based on fossil evidence, A. anamensis expresses high degrees of sexual dimorphism. Although considered to be the more primitive of the australopiths, A. anamensis had parts of the knee, tibia, and elbow that were different from apes, which indicates bipedalism as the species' form of locomotion. Specifically, the tibia bone of A. anamensis has a more expansive upper end with bone.
In addition to the modified body parts that indicate bipedalism, A. anamensis fossils show evidence of tree climbing. Archeology finds indicate that A. anamensis had long forearms, as well as modified features of the wrist bone. Both the forearms and finger bones of A. anamensis indicate a potential of utilizing the upper limbs as support when operating in trees or on the ground. Forearm bones belonging to A. anamensis have been found to be 265 millimeters to 277 millimeters in length. The curved proximal hand phalanx of A. anamensis in the fossil record that contains strong ridges is indicative of its potential ability to climb.
Fossil evidence reveals that A. anamensis had a somewhat wide jaw joint that was flat from front to back, which resembles a curvature similar to those seen in great apes. Furthermore, the ear canal of A. anamensis fossils are narrow in diameter. The ear canal most resembles that of chimpanzees and is contrasting to the wide ear canals of both later Australopithecus and Homo.
The first lower premolar of A. anamensis is characterized by a singular large cusp. Additionally, A. anamensis has a narrow first milk molar that contains a large dominant cusp with minimum surface area, which may have been used for crushing.
| Biology and health sciences | Australopithecines | Biology |
1664060 | https://en.wikipedia.org/wiki/Adaptive%20immune%20system | Adaptive immune system | The adaptive immune system, AIS, also known as the acquired immune system, or specific immune system is a subsystem of the immune system that is composed of specialized cells, organs, and processes that eliminate pathogens specifically. The acquired immune system is one of the two main immunity strategies found in vertebrates (the other being the innate immune system).
Like the innate system, the adaptive immune system includes both humoral immunity components and cell-mediated immunity components and destroys invading pathogens. Unlike the innate immune system, which is pre-programmed to react to common broad categories of pathogen, the adaptive immune system is highly specific to each particular pathogen the body has encountered.
Adaptive immunity creates immunological memory after an initial response to a specific pathogen, and leads to an enhanced response to future encounters with that pathogen. Antibodies are a critical part of the adaptive immune system. Adaptive immunity can provide long-lasting protection, sometimes for the person's entire lifetime. For example, someone who recovers from measles is now protected against measles for their lifetime; in other cases it does not provide lifetime protection, as with chickenpox. This process of adaptive immunity is the basis of vaccination.
The cells that carry out the adaptive immune response are white blood cells known as lymphocytes. B cells and T cells, two different types of lymphocytes, carry out the main activities: antibody responses, and cell-mediated immune response. In antibody responses, B cells are activated to secrete antibodies, which are proteins also known as immunoglobulins. Antibodies travel through the bloodstream and bind to the foreign antigen causing it to inactivate, which does not allow the antigen to bind to the host. Antigens are any substances that elicit the adaptive immune response. Sometimes the adaptive system is unable to distinguish harmful from harmless foreign molecules; the effects of this may be hayfever, asthma, or any other allergy.
In adaptive immunity, pathogen-specific receptors are "acquired" during the lifetime of the organism (whereas in innate immunity pathogen-specific receptors are already encoded in the genome). This acquired response is called "adaptive" because it prepares the body's immune system for future challenges (though it can actually also be maladaptive when it results in allergies or autoimmunity).
The system is highly adaptable because of two factors. First, somatic hypermutation is a process of accelerated random genetic mutations in the antibody-coding genes, which allows antibodies with novel specificity to be created. Second, V(D)J recombination randomly selects one variable (V), one diversity (D),
and one joining (J) region for genetic recombination and discards the rest, which produces a highly unique combination of antigen-receptor gene segments in each lymphocyte. This mechanism allows a small number of genetic segments to generate a vast number of different antigen receptors, which are then uniquely expressed on each individual lymphocyte. Since the gene rearrangement leads to an irreversible change in the DNA of each cell, all progeny (offspring) of that cell inherit genes that encode the same receptor specificity, including the memory B cells and memory T cells that are the keys to long-lived specific immunity.
Naming
The term "adaptive" was first used by Robert Good in reference to antibody responses in frogs as a synonym for "acquired immune response" in 1964. Good acknowledged he used the terms as synonyms but explained only that he preferred to use the term "adaptive". He might have been thinking of the then not implausible theory of antibody formation in which antibodies were plastic and could adapt themselves to the molecular shape of antigens, and/or to the concept of "adaptive enzymes" as described by Monod in bacteria, that is, enzymes whose expression could be induced by their substrates. The phrase was used almost exclusively by Good and his students and a few other immunologists working with marginal organisms until the 1990s when it became widely used in tandem with the term "innate immunity" which became a popular subject after the discovery of the Toll receptor system in Drosophila, a previously marginal organism for the study of immunology. The term "adaptive" as used in immunology is problematic as acquired immune responses can be both adaptive and maladaptive in the physiological sense. Indeed, both acquired and innate immune responses can be both adaptive and maladaptive in the evolutionary sense. Most textbooks today, following the early use by Janeway, use "adaptive" almost exclusively and noting in glossaries that the term is synonymous with "acquired".
The classic sense of "acquired immunity" came to mean, since Tonegawa's discovery, "antigen-specific immunity mediated by somatic gene rearrangements that create clone-defining antigen receptors". In the last decade, the term "adaptive" has been increasingly applied to another class of immune response not so-far associated with somatic gene rearrangements. These include expansion of natural killer (NK) cells with so-far unexplained specificity for antigens, expansion of NK cells expressing germ-line encoded receptors, and activation of other innate immune cells to an activated state that confers a short-term "immune memory". In this sense, "adaptive immunity" more closely resembles the concept of "activated state" or "heterostasis", thus returning in sense to the physiological sense of "adaptation" to environmental changes.
Functions
Acquired immunity is triggered in vertebrates when a pathogen evades the innate immune system and (1) generates a threshold level of antigen and (2) generates "stranger" or "danger" signals activating dendritic cells.
The major functions of the acquired immune system include:
Recognition of specific "non-self" antigens in the presence of "self", during the process of antigen presentation.
Generation of responses that are tailored to maximally eliminate specific pathogens or pathogen-infected cells.
Development of immunological memory, in which pathogens are "remembered" through memory B cells and memory T cells.
In humans, it takes 4–7 days for the adaptive immune system to mount a significant response.
Lymphocytes
T and B lymphocytes are the cells of the adaptive immune system. The human body has about 2 trillion lymphocytes, which are 20–40% of white blood cells; their total mass is about the same as the brain or liver. The peripheral bloodstream contains only 2% of all circulating lymphocytes; the other 98% move within tissues and the lymphatic system, which includes the lymph nodes and spleen. In humans, approximately 1–2% of the lymphocyte pool recirculates each hour to increase the opportunity for the cells to encounter the specific pathogen and antigen that they react to.
B cells and T cells are derived from the same multipotent hematopoietic stem cells, and look identical to one another until after they are activated. B cells play a large role in the humoral immune response, whereas T cells are intimately involved in cell-mediated immune responses. In all vertebrates except Agnatha, B cells and T cells are produced by stem cells in the bone marrow. T cell progenitors then migrate from the bone marrow to the thymus, where they develop further.
In an adult animal, the peripheral lymphoid organs contain a mixture of B and T cells in at least three stages of differentiation:
Naive B and naive T cells, which have left the bone marrow or thymus and entered the lymphatic system, but have yet to encounter their matching antigen
Effector cells that have been activated by their matching antigen, and are actively involved in eliminating a pathogen
Memory cells, the survivors of past infections
Antigen presentation
Acquired immunity relies on the capacity of immune cells to distinguish between the body's own cells and unwanted invaders.
The host's cells express "self" antigens. These antigens are different from those on the surface of bacteria or on the surface of virus-infected host cells ("non-self" or "foreign" antigens). The acquired immune response is triggered by recognizing foreign antigen in the cellular context of an activated dendritic cell.
With the exception of non-nucleated cells (including erythrocytes), all cells are capable of presenting antigen through the function of major histocompatibility complex (MHC) molecules. Some cells are specially equipped to present antigen, and to prime naive T cells. Dendritic cells, B-cells, and macrophages are equipped with special "co-stimulatory" ligands recognized by co-stimulatory receptors on T cells, and are termed professional antigen-presenting cells (APCs).
Several T cells subgroups can be activated by professional APCs, and each type of T cell is specially equipped to deal with each unique toxin or microbial pathogen. The type of T cell activated, and the type of response generated, depends, in part, on the context in which the APC first encountered the antigen.
Exogenous antigens
Dendritic cells engulf exogenous pathogens, such as bacteria, parasites or toxins in the tissues and then migrate, via chemotactic signals, to the T cell-enriched lymph nodes. During migration, dendritic cells undergo a process of maturation in which they lose most of their ability to engulf other pathogens, and develop an ability to communicate with T-cells. The dendritic cell uses enzymes to chop the pathogen into smaller pieces, called antigens. In the lymph node, the dendritic cell displays these non-self antigens on its surface by coupling them to a receptor called the major histocompatibility complex, or MHC (also known in humans as human leukocyte antigen (HLA)). This MHC-antigen complex is recognized by T-cells passing through the lymph node. Exogenous antigens are usually displayed on MHC class II molecules, which activate CD4+T helper cells.
Endogenous antigens
Endogenous antigens are produced by intracellular bacteria and viruses replicating within a host cell. The host cell uses enzymes to digest virally associated proteins and displays these pieces on its surface to T-cells by coupling them to MHC. Endogenous antigens are typically displayed on MHC class I molecules, and activate CD8+ cytotoxic T-cells. With the exception of non-nucleated cells (including erythrocytes), MHC class I is expressed by all host cells.
T lymphocytes
CD8+ T lymphocytes and cytotoxicity
Cytotoxic T cells (also known as TC, killer T cell, or cytotoxic T-lymphocyte (CTL)) are a sub-group of T cells that induce the death of cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional.
Naive cytotoxic T cells are activated when their T-cell receptor (TCR) strongly interacts with a peptide-bound MHC class I molecule. This affinity depends on the type and orientation of the antigen/MHC complex, and is what keeps the CTL and infected cell bound together. Once activated, the CTL undergoes a process called clonal selection, in which it gains functions and divides rapidly to produce an army of "armed" effector cells. Activated CTL then travels throughout the body searching for cells that bear that unique MHC Class I + peptide.
When exposed to these infected or dysfunctional somatic cells, effector CTL release perforin and granulysin: cytotoxins that form pores in the target cell's plasma membrane, allowing ions and water to flow into the infected cell, and causing it to burst or lyse. CTL release granzyme, a serine protease encapsulated in a granule that enters cells via pores to induce apoptosis (cell death). To limit extensive tissue damage during an infection, CTL activation is tightly controlled and in general requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T-cells (see below).
On resolution of the infection, most effector cells die and phagocytes clear them away—but a few of these cells remain as memory cells. On a later encounter with the same antigen, these memory cells quickly differentiate into effector cells, dramatically shortening the time required to mount an effective response.
Helper T-cells
CD4+ lymphocytes, also called "helper" T cells, are immune response mediators, and play an important role in establishing and maximizing the capabilities of the acquired immune response. These cells have no cytotoxic or phagocytic activity; and cannot kill infected cells or clear pathogens, but, in essence "manage" the immune response, by directing other cells to perform these tasks.
Helper T cells express T cell receptors (TCR) that recognize antigen bound to Class II MHC molecules. The activation of a naive helper T-cell causes it to release cytokines, which influences the activity of many cell types, including the APC (Antigen-Presenting Cell) that activated it. Helper T-cells require a much milder activation stimulus than cytotoxic T cells. Helper T cells can provide extra signals that "help" activate cytotoxic cells.
Th1 and Th2: helper T cell responses
Classically, two types of effector CD4+ T helper cell responses can be induced by a professional APC, designated Th1 and Th2, each designed to eliminate different types of pathogens. The factors that dictate whether an infection triggers a Th1 or Th2 type response are not fully understood, but the response generated does play an important role in the clearance of different pathogens.
The Th1 response is characterized by the production of Interferon-gamma, which activates the bactericidal activities of macrophages, and induces B cells to make opsonizing (marking for phagocytosis) and complement-fixing antibodies, and leads to cell-mediated immunity. In general, Th1 responses are more effective against intracellular pathogens (viruses and bacteria that are inside host cells).
The Th2 response is characterized by the release of Interleukin 5, which induces eosinophils in the clearance of parasites. Th2 also produce Interleukin 4, which facilitates B cell isotype switching. In general, Th2 responses are more effective against extracellular bacteria, parasites including helminths and toxins. Like cytotoxic T cells, most of the CD4+ helper cells die on resolution of infection, with a few remaining as CD4+ memory cells.
Increasingly, there is strong evidence from mouse and human-based scientific studies of a broader diversity in CD4+ effector T helper cell subsets. Regulatory T (Treg) cells, have been identified as important negative regulators of adaptive immunity as they limit and suppress the immune system to control aberrant immune responses to self-antigens; an important mechanism in controlling the development of autoimmune diseases. Follicular helper T (Tfh) cells are another distinct population of effector CD4+ T cells that develop from naive T cells post-antigen activation. Tfh cells are specialized in helping B cell humoral immunity as they are uniquely capable of migrating to follicular B cells in secondary lymphoid organs and provide them positive paracrine signals to enable the generation and recall production of high-quality affinity-matured antibodies. Similar to Tregs, Tfh cells also play a role in immunological tolerance as an abnormal expansion of Tfh cell numbers can lead to unrestricted autoreactive antibody production causing severe systemic autoimmune disorders.
The relevance of CD4+ T helper cells is highlighted during an HIV infection. HIV is able to subvert the immune system by specifically attacking the CD4+ T cells, precisely the cells that could drive the clearance of the virus, but also the cells that drive immunity against all other pathogens encountered during an organism's lifetime.
Gamma delta T cells
Gamma delta T cells (γδ T cells) possess an alternative T cell receptor (TCR) as opposed to CD4+ and CD8+ αβ T cells and share characteristics of helper T cells, cytotoxic T cells and natural killer cells. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells exhibit characteristics that place them at the border between innate and acquired immunity. On one hand, γδ T cells may be considered a component of adaptive immunity in that they rearrange TCR genes via V(D)J recombination, which also produces junctional diversity, and develop a memory phenotype. On the other hand, however, the various subsets may also be considered part of the innate immune system where a restricted TCR or NK receptors may be used as a pattern recognition receptor. For example, according to this paradigm, large numbers of Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted intraepithelial Vδ1 T cells respond to stressed epithelial cells.
B lymphocytes and antibody production
B Cells are the major cells involved in the creation of antibodies that circulate in blood plasma and lymph, known as humoral immunity. Antibodies (also known as immunoglobulin, Ig), are large Y-shaped proteins used by the immune system to identify and neutralize foreign objects. In mammals, there are five types of antibody: IgA, IgD, IgE, IgG, and IgM, differing in biological properties; each has evolved to handle different kinds of antigens. Upon activation, B cells produce antibodies, each of which recognize a unique antigen, and neutralizing specific pathogens.
Antigen and antibody binding would cause five different protective mechanisms:
Agglutination: Reduces number of infectious units to be dealt with
Activation of complement: Cause inflammation and cell lysis
Opsonization: Coating antigen with antibody enhances phagocytosis
Antibody-dependent cell-mediated cytotoxicity: Antibodies attached to target cell cause destruction by macrophages, eosinophils, and NK cells
Neutralization: Blocks adhesion of bacteria and viruses to mucosa
Like the T cell, B cells express a unique B cell receptor (BCR), in this case, a membrane-bound antibody molecule. All the BCR of any one clone of B cells recognizes and binds to only one particular antigen. A critical difference between B cells and T cells is how each cell "sees" an antigen. T cells recognize their cognate antigen in a processed form – as a peptide in the context of an MHC molecule, whereas B cells recognize antigens in their native form. Once a B cell encounters its cognate (or specific) antigen (and receives additional signals from a helper T cell (predominately Th2 type)), it further differentiates into an effector cell, known as a plasma cell.
Plasma cells are short-lived cells (2–3 days) that secrete antibodies. These antibodies bind to antigens, making them easier targets for phagocytes, and trigger the complement cascade. About 10% of plasma cells survive to become long-lived antigen-specific memory B cells. Already primed to produce specific antibodies, these cells can be called upon to respond quickly if the same pathogen re-infects the host, while the host experiences few, if any, symptoms.
Alternative systems
In jawless vertebrates
Primitive jawless vertebrates, such as the lamprey and hagfish, have an adaptive immune system that shows 3 different cell lineages, each sharing a common origin with B cells, αβ T cells, and innate-like γΔ T cells. Instead of the classical antibodies and T cell receptors, these animals possess a large array of molecules called variable lymphocyte receptors (VLRs for short) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity.
In insects
For a long time it was thought that insects and other invertebrates possess only innate immune system. However, in recent years some of the basic hallmarks of adaptive immunity have been discovered in insects. Those traits are immune memory and specificity. Although the hallmarks are present the mechanisms are different from those in vertebrates.
Immune memory in insects was discovered through the phenomenon of priming. When insects are exposed to non-lethal dose or heat killed bacteria they are able to develop a memory of that infection that allows them to withstand otherwise lethal dose of the same bacteria they were exposed to before. Unlike in vertebrates, insects do not possess cells specific for adaptive immunity. Instead those mechanisms are mediated by hemocytes. Hemocytes function similarly to phagocytes and after priming they are able to more effectively recognize and engulf the pathogen. It was also shown that it is possible to transfer the memory into offspring. For example, in honeybees if the queen is infected with bacteria then the newly born workers have enhanced abilities in fighting with the same bacteria. Other experimental model based on red flour beetle also showed pathogen specific primed memory transfer into offspring from both mothers and fathers.
Most commonly accepted theory of the specificity is based on Dscam gene. Dscam gene also known as Down syndrome cell adhesive molecule is a gene that contains 3 variable Ig domains. Those domains can be alternatively spliced reaching high numbers of variations. It was shown that after exposure to different pathogens there are different splice forms of dscam produced. After the animals with different splice forms are exposed to the same pathogen only the individuals with the splice form specific for that pathogen survive.
Other mechanisms supporting the specificity of insect immunity is RNA interference (RNAi). RNAi is a form of antiviral immunity with high specificity. It has several different pathways that all end with the virus being unable to replicate. One of the pathways is siRNA in which long double stranded RNA is cut into pieces that serve as templates for protein complex Ago2-RISC that finds and degrades complementary RNA of the virus. MiRNA pathway in cytoplasm binds to Ago1-RISC complex and functions as a template for viral RNA degradation. Last one is piRNA where small RNA binds to the Piwi protein family and controls transposones and other mobile elements. Despite the research the exact mechanisms responsible for immune priming and specificity in insects are not well described.
In bacteria
CRISPR is a term in DNA research. It stands for clustered regularly-interspaced short palindromic repeats. These are part of the genetic code in prokaryotes: most bacteria and archaea have it. It is their defence against attack by viruses. Its structure and function was discovered in the 21st century.
CRISPR has a lot of short repeated sequences. These sequences are part of an adaptive immune system for prokaryotes. It allows them to remember and counter the bacteriophages which prey on them. They work as a kind of acquired immune system for bacteria.
Immunological memory
When B cells and T cells are activated some become memory B cells and some memory T cells. Throughout the lifetime of an animal these memory cells form a database of effective B and T lymphocytes. Upon interaction with a previously encountered antigen, the appropriate memory cells are selected and activated. In this manner, the second and subsequent exposures to an antigen produce a stronger and faster immune response. This is "adaptive" in the sense that the body's immune system prepares itself for future challenges, but is "maladaptive" of course if the receptors are autoimmune. Immunological memory can be in the form of either passive short-term memory or active long-term memory.
Passive memory
Passive memory is usually short-term, lasting between a few days and several months. Newborn infants have had no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. In utero, maternal IgG is transported directly across the placenta, so that, at birth, human babies have high levels of antibodies, with the same range of antigen specificities as their mother. Breast milk contains antibodies (mainly IgA) that are transferred to the gut of the infant, protecting against bacterial infections, until the newborn can synthesize its own antibodies.
This is passive immunity because the fetus does not actually make any memory cells or antibodies: It only borrows them. Short-term passive immunity can also be transferred artificially from one individual to another via antibody-rich serum.
Active memory
In general, active immunity is long-term and can be acquired by infection followed by B cell and T cell activation, or artificially acquired by vaccines, in a process called immunization.
Immunization
Historically, infectious disease has been the leading cause of death in the human population. Over the last century, two important factors have been developed to combat their spread: sanitation and immunization. Immunization (commonly referred to as vaccination) is the deliberate induction of an immune response, and represents the single most effective manipulation of the immune system that scientists have developed. Immunizations are successful because they utilize the immune system's natural specificity as well as its inducibility.
The principle behind immunization is to introduce an antigen, derived from a disease-causing organism, that stimulates the immune system to develop protective immunity against that organism, but that does not itself cause the pathogenic effects of that organism. An antigen (short for antibody generator), is defined as any substance that binds to a specific antibody and elicits an adaptive immune response.
Most viral vaccines are based on live attenuated viruses, whereas many bacterial vaccines are based on acellular components of microorganisms, including harmless toxin components. Many antigens derived from acellular vaccines do not strongly induce an adaptive response, and most bacterial vaccines require the addition of adjuvants that activate the antigen-presenting cells of the innate immune system to enhance immunogenicity.
Immunological diversity
Most large molecules, including virtually all proteins and many polysaccharides, can serve as antigens. The parts of an antigen that interact with an antibody molecule or a lymphocyte receptor, are called epitopes, or antigenic determinants. Most antigens contain a variety of epitopes and can stimulate the production of antibodies, specific T cell responses, or both. A very small proportion (less than 0.01%) of the total lymphocytes are able to bind to a particular antigen, which suggests that only a few cells respond to each antigen.
For the acquired response to "remember" and eliminate a large number of pathogens the immune system must be able to distinguish between many different antigens, and the receptors that recognize antigens must be produced in a huge variety of configurations, in essence one receptor (at least) for each different pathogen that might ever be encountered. Even in the absence of antigen stimulation, a human can produce more than 1 trillion different antibody molecules. Millions of genes would be required to store the genetic information that produces these receptors, but, the entire human genome contains fewer than 25,000 genes.
Myriad receptors are produced through a process known as clonal selection. According to the clonal selection theory, at birth, an animal randomly generates a vast diversity of lymphocytes (each bearing a unique antigen receptor) from information encoded in a small family of genes. To generate each unique antigen receptor, these genes have undergone a process called V(D)J recombination, or combinatorial diversification, in which one gene segment recombines with other gene segments to form a single unique gene. This assembly process generates the enormous diversity of receptors and antibodies, before the body ever encounters antigens, and enables the immune system to respond to an almost unlimited diversity of antigens. Throughout an animal's lifetime, lymphocytes that can react against the antigens an animal actually encounters are selected for action—directed against anything that expresses that antigen.
The innate and acquired portions of the immune system work together, not in spite of each other. The acquired arm, B, and T cells could not function without the innate system input. T cells are useless without antigen-presenting cells to activate them, and B cells are disabled without T cell help. On the other hand, the innate system would likely be overrun with pathogens without the specialized action of the adaptive immune response.
Acquired immunity during pregnancy
The cornerstone of the immune system is the recognition of "self" versus "non-self". Therefore, the mechanisms that protect the human fetus (which is considered "non-self") from attack by the immune system, are particularly interesting. Although no comprehensive explanation has emerged to explain this mysterious, and often repeated, lack of rejection, two classical reasons may explain how the fetus is tolerated. The first is that the fetus occupies a portion of the body protected by a non-immunological barrier, the uterus, which the immune system does not routinely patrol. The second is that the fetus itself may promote local immunosuppression in the mother, perhaps by a process of active nutrient depletion. A more modern explanation for this induction of tolerance is that specific glycoproteins expressed in the uterus during pregnancy suppress the uterine immune response (see eu-FEDS).
During pregnancy in viviparous mammals (all mammals except Monotremes), endogenous retroviruses (ERVs) are activated and produced in high quantities during the implantation of the embryo. They are currently known to possess immunosuppressive properties, suggesting a role in protecting the embryo from its mother's immune system. Also, viral fusion proteins cause the formation of the placental syncytium to limit exchange of migratory cells between the developing embryo and the body of the mother (something an epithelium cannot do sufficiently, as certain blood cells specialize to insert themselves between adjacent epithelial cells). The immunodepressive action was the initial normal behavior of the virus, similar to HIV. The fusion proteins were a way to spread the infection to other cells by simply merging them with the infected one (HIV does this too). It is believed that the ancestors of modern viviparous mammals evolved after an infection by this virus, enabling the fetus to survive the immune system of the mother.
The Human Genome Project found several thousand ERVs, classified into 24 families.
Immune network theory
A theoretical framework explaining the workings of the acquired immune system is provided by immune network theory, based on interactions between idiotypes (unique molecular features of one clonotype, i.e. the unique set of antigenic determinants of the variable portion of an antibody) and 'anti-idiotypes' (antigen receptors that react with the idiotype as if it were a foreign antigen). This theory, which builds on the existing clonal selection hypothesis and since 1974 has been developed mainly by Niels Jerne and Geoffrey W. Hoffmann, is seen as being relevant to the understanding of the HIV pathogenesis and the search for an HIV vaccine.
Stimulation of adaptive immunity
One of the most interesting developments in biomedical science during the past few decades has been the elucidation of mechanisms mediating innate immunity. One set of innate immune mechanisms is humoral, such as complement activation. Another set comprises pattern recognition receptors such as toll-like receptors, which induce the production of interferons and other cytokines that increase the resistance of cells such as monocytes to infection. Cytokines produced during innate immune responses are among the activators of adaptive immune responses, and antibodies exert additive or synergistic effects with mechanisms of innate immunity. In the sickle-cell trait, for example, unstable hemoglobin S (HbS) clusters Band 3, a major integral red cell protein; antibodies recognize these clusters and accelerate their removal by phagocytic cells. Clustered Band 3 proteins with attached antibodies activate complement, and complement C3 fragments are opsonins recognized by the CR1 complement receptor on phagocytic cells.
A population study has shown that the protective effect of the sickle-cell trait against falciparum malaria involves the augmentation of acquired as well as innate immune responses to the malaria parasite, illustrating the expected transition from innate to acquired immunity.
Repeated malaria infections strengthen acquired immunity and broaden its effects against parasites expressing different surface antigens. By school age most children have developed efficacious adaptive immunity against malaria. These observations raise questions about mechanisms that favor the survival of most children in Africa while allowing some to develop potentially lethal infections.
In malaria, as in other infections, innate immune responses lead into, and stimulate, adaptive immune responses. The genetic control of innate and acquired immunity is now a large and flourishing discipline.
Humoral and cell-mediated immune responses limit malaria parasite multiplication, and many cytokines contribute to the pathogenesis of malaria as well as to the resolution of infections.
Evolution
The acquired immune system, which has been best studied in mammals, originated in jawed fish approximately 500 million years ago. Most of the molecules, cells, tissues, and associated mechanisms of this system of defense are found in cartilaginous fishes. Lymphocyte receptors, Ig and TCR, are found in all jawed vertebrates. The most ancient Ig class, IgM, is membrane-bound and then secreted upon stimulation of cartilaginous fish B cells. Another isotype, shark IgW, is related to mammalian IgD. TCRs, both α/β and γ/δ, are found in all gnathostomes, from cartilaginous fishes to mammals. The organization of the gene segments that undergo rearrangement differs in cartilaginous fishes, which have a cluster form, as compared with the translocon form found from bony fish to mammals. Like TCR and Ig, the MHC is found only in jawed vertebrates. Genes involved in antigen processing and presentation, as well as the class I and class II genes, are closely linked within the MHC of almost all studied species.
Lymphoid cells can be identified in some pre-vertebrate deuterostomes (i.e., sea urchins). These bind antigen with pattern recognition receptors (PRRs) of the innate immune system. In jawless fishes, two subsets of lymphocytes use variable lymphocyte receptors (VLRs) for antigen binding. Diversity is generated by a cytosine deaminase-mediated rearrangement of LRR-based DNA segments. There is no evidence for the recombination-activating genes (RAGs) that rearrange Ig and TCR gene segments in jawed vertebrates.
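The combinatorial logic behind this diversity generation, shared in outline by V(D)J recombination of Ig/TCR segments in jawed vertebrates and LRR-cassette assembly of VLRs in jawless fishes, can be sketched numerically. The segment-pool and junction counts below are made-up placeholders, not measured values for any species:

```python
from math import prod

# Sketch of combinatorial receptor diversity: a receptor is built by
# picking one module from each of several gene-segment pools, so the
# number of distinct receptors is the product of the pool sizes.
segment_pools = {"V": 40, "D": 25, "J": 6}  # hypothetical pool sizes

# Base diversity from independent segment choices.
combinatorial = prod(segment_pools.values())  # 40 * 25 * 6 = 6000

# Imprecise joining at the segment junctions multiplies diversity further;
# assume 100 variants at each of 2 junctions as a stand-in figure.
junctional_variants = 100 ** 2

total = combinatorial * junctional_variants
print(f"distinct receptors ~ {total:,}")  # 60,000,000
```

Even with these modest placeholder numbers, a handful of segment pools yields tens of millions of distinct receptors, which is why both rearrangement systems can cover a vast antigen space with a small amount of germline DNA.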
The evolution of the AIS, based on Ig, TCR, and MHC molecules, is thought to have arisen from two major evolutionary events: the transfer of the RAG transposon (possibly of viral origin) and two whole genome duplications. Though the molecules of the AIS are well-conserved, they are also rapidly evolving. Yet, a comparative approach finds that many features are quite uniform across taxa. All the major features of the AIS arose early and quickly. Jawless fishes have a different AIS that relies on gene rearrangement to generate diverse immune receptors with a functional dichotomy that parallels Ig and TCR molecules. The innate immune system, which has an important role in AIS activation, is the most important defense system of invertebrates and plants.
Types of acquired immunity
Immunity can be acquired either actively or passively. Immunity is acquired actively when a person is exposed to foreign substances and the immune system responds; it is acquired passively when antibodies are transferred from one host to another. Both actively acquired and passively acquired immunity can be obtained by natural or artificial means.
Naturally Acquired Active Immunity – when a person is naturally exposed to antigens, becomes ill, then recovers.
Naturally Acquired Passive Immunity – involves a natural transfer of antibodies from a mother to her infant. The antibodies cross the woman's placenta to the fetus. Antibodies can also be transferred through breast milk with the secretions of colostrum.
Artificially Acquired Active Immunity – induced by vaccination (introducing dead or weakened antigens into the host).
Artificially Acquired Passive Immunity – This involves the introduction of antibodies rather than antigens to the human body. These antibodies are from an animal or person who is already immune to the disease.
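The four categories above form a simple 2×2 grid (active/passive crossed with natural/artificial). As an illustration only, with example scenarios drawn from the descriptions above, the grid can be written as a small lookup table:

```python
# The four types of acquired immunity arranged as a 2x2 lookup:
# (how the response arises, how exposure happens) -> category name.
IMMUNITY_TYPES = {
    ("active", "natural"): "naturally acquired active immunity",
    ("passive", "natural"): "naturally acquired passive immunity",
    ("active", "artificial"): "artificially acquired active immunity",
    ("passive", "artificial"): "artificially acquired passive immunity",
}

# Illustrative scenarios mapped onto the grid.
SCENARIOS = {
    "infection followed by recovery": ("active", "natural"),
    "maternal antibodies via placenta or colostrum": ("passive", "natural"),
    "vaccination": ("active", "artificial"),
    "transfer of antibody serum": ("passive", "artificial"),
}

for scenario, key in SCENARIOS.items():
    print(f"{scenario} -> {IMMUNITY_TYPES[key]}")
```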
Burbot
The burbot (Lota lota), also known as bubbot, mariah, loche, cusk, freshwater cod, freshwater ling, freshwater cusk, the lawyer, coney-fish, lingcod, or eelpout, is a species of coldwater ray-finned fish native to the subarctic regions of the Northern hemisphere. It is the only member of the genus Lota, and is the only freshwater species of the order Gadiformes. The species is closely related to marine fish such as the common ling and cusk, all of which belong to the family Lotidae (rocklings).
Etymology
The name burbot comes from the Latin word barba, meaning beard, referring to its single chin whisker, or barbel. Its generic and specific names, Lota lota, come from the Old French lotte, a fish also named "barbot" in Old French.
Description
With an appearance like a cross between a catfish and an eel, the burbot has a serpent-like body, but is easily distinguished by a single barbel on the chin. The body is elongated and laterally compressed, with a flattened head and single, tube-like projection for each nostril. The mouth is wide, with both upper and lower jaws having many small teeth. Burbot have two soft dorsal fins, with the first being low and short, and the second being much longer. The anal fin is low and almost as long as the dorsal fin. The caudal fin is rounded, the pectoral fins are fan-shaped, and pelvic fins are narrow with an elongated second fin ray. Having such small fins relative to body size indicates a benthic lifestyle with low swimming endurance, unable to withstand strong currents.
Geographic distribution
Burbot have a circumpolar distribution above 40° N. Populations are continuous from France across Europe and chiefly Russian Asia to the Bering Strait. In North America, burbot range from the Seward Peninsula in Alaska to New Brunswick along the Atlantic Coast. Burbot are most common in streams and lakes of North America and Europe. They are fairly common in Lake Erie, but are also found in the other Great Lakes. An anadromous population also lives in the brackish waters of the Baltic Sea. Recent genetic analysis suggests the geographic pattern of burbot may reflect multiple species or subspecies, making treatment as a single taxon somewhat misleading.
United Kingdom
In the United Kingdom, the burbot is possibly extinct. The last recorded capture was a specimen weighing , in July 1970, by Stephen Mackinder, from the Cut-off Channel or the Great Ouse Relief Channel, at Denver, Norfolk. In October 1970, it was described in the Guinness Book of Records as the "rarest British fish" which was "almost extinct", so it had been "agreed that no record for this species should be published, at least until 1974, in the interests of conservation". The burbot may still survive in the UK. The counties of Cambridgeshire, Norfolk and Yorkshire (particularly the River Derwent or Yorkshire Ouse) seem to be the strongest candidates for areas in which the species might yet survive. Plans to reintroduce this freshwater member of the cod family back into British waters are under investigation.
Ecology
Habitat
Burbot live in large, cold rivers, lakes, and reservoirs, primarily preferring freshwater habitats, but able to thrive in brackish environments for spawning. For part of the year, the burbot lives under ice, and it requires frigid temperatures to breed. During the summer, they are typically found in the colder water below the thermocline. In Lake Superior, burbot can live at depths below . As benthic fish, they tolerate an array of substrate types, including mud, sand, rubble, boulder, silt, and gravel, for feeding. Adults construct extensive burrows in the substrate for shelter during the day. Burbot are active crepuscular hunters. Burbot populations are adfluvial during the winter, and they migrate to near-shore reefs and shoals to spawn, preferring spawning grounds of sand or gravel.
Life history
Burbot reach sexual maturity at between four and seven years of age. Spawning season typically occurs between December and March, often under ice at extremely low temperatures ranging between 1 and 4 °C. During a relatively short season lasting from two to three weeks, burbot spawn multiple times, but not every year.
As broadcast spawners, burbot do not have an explicit nesting site, but rather release eggs and sperm into the water column to drift and settle. When spawning, many male burbot gather around one or two females, forming a spawning ball. Writhing in the open water, males and females simultaneously release sperm and eggs. Depending on water temperatures, the incubation period of the eggs lasts from 30 to 128 days. Fertilized eggs then drift until they settle into cracks and voids in the substrate.
Depending on body size, female burbot fecundity ranges from 63,000 to 3,478,000 eggs for each clutch. Rate of growth, longevity, and age of sexual maturity of burbot are strongly correlated with water temperature; large, older individuals produce more eggs than small, younger individuals. Eggs are round with a large oil globule, about in diameter and have an optimal incubation range between .
Newly hatched burbot larvae are pelagic, passively drifting in the open water. Habitats near are optimal for burbot and they prefer water temperatures of and lower. By night, juveniles are active, taking shelter during the day under rocks and other debris. Growing rapidly in their first year, burbot reach between in total length by late fall. During their second year of life, burbot on average grow another .
Burbot transition from pelagic habitats to benthic environments as they reach adulthood, around five years old. Average length of burbot by maturity is about , with slight sexual dimorphism. Maximum lengths range between , and weights range from .
Diet and predators
At the larval stage, month-old burbot begin exogenous feeding, consuming food through the mouth and digesting in the intestines. Burbot at the larval stage and into the juvenile stage feed on invertebrates, with diet shifting as they grow. Under , burbot eat copepods and cladocerans, and above , zooplankton and amphipods. As adults, they are primarily piscivores, preying on lamprey, whitefish, grayling, young northern pike, suckers, stickleback, trout, and perch. At times, burbot also eat insects and other macroinvertebrates, and have been known to eat frogs, snakes, and birds. Having such a wide diet is also correlated with their tendency to bite lures, making them very easy to catch. Burbot are preyed upon by northern pike, muskellunge, and some lamprey species.
Commercial significance
A book written in 1590 in England notes that burbot were so common that they were used to feed hogs.
The burbot is edible. In Finland, its roe and liver are highly regarded as delicacies, as is the fish itself. An annual spearfishing tournament is held near Roblin, Manitoba. One of the highlights of the tournament is the fish fry, where the day's catch is served deep-fried. When cooked, burbot meat tastes very similar to American lobster, leading to the burbot's nickname of "poor man's lobster".
In the 1920s, Minnesota druggist Theodore "Ted" H. Rowell and his father, Joseph Rowell, a commercial fisherman on Lake of the Woods, were using the burbot as feed for the foxes on Joe's blue fox farm. They discovered the burbot contained something that improved the quality of the foxes' furs; this was confirmed by the fur buyers, who commented that these furs were superior to other blue fox furs they were seeing. Ted Rowell felt it was something in the burbot, so he extracted some oil and sent it away to be assayed. The result of the assay was that the liver of the burbot has three to four times the potency in vitamin D, and four to 10 times in vitamin A, than "good grades" of cod-liver oil. Their vitamin content varies from lake to lake, where their diets may have some variation. Additionally, liver makes up about 10% of the fish's total body weight, and its liver is six times the size of those of freshwater fish of comparable size. The oil is lower in viscosity, and more rapidly digested and assimilated than most other fish-liver oils. Rowell went on to found the Burbot Liver Products Company, which later became Rowell Laboratories, Inc.
Angling
The IGFA recognizes the world-record burbot as caught on Lake Diefenbaker, Saskatchewan, Canada, by Sean Konrad on 27 March 2010. The fish weighed .
The burbot is a tenacious predator, which sometimes attacks other fish of almost the same size, and as such, can be a nuisance fish in waters where it is not native. Recent discoveries of burbot in the Green River at Flaming Gorge Reservoir in Utah have concerned wildlife biologists, who fear the burbot could decimate the sport-fish population in what is recognized as one of the world's top brown trout fisheries, because it often feeds on the eggs of other fish in the lake, such as sockeye salmon. The Utah Division of Fish and Game has instituted a "no release" "catch and kill" regulation for the burbot in Utah waterways. However, the regulations have been found to be largely unenforceable.
The town of Walker, Minnesota, holds an International Eelpout Festival every winter on Leech Lake. The festival received national attention on 4 March 2011, when a correspondent from The Tonight Show with Jay Leno did a segment on the event.
Conservation status
Burbot populations are difficult to study, due to their deep habitats and reproduction under ice. Although the burbot's global distribution is widespread and abundant, many populations have been threatened or extirpated. Ichthyologists and taxonomists strongly advise revisiting the old taxonomy: new genetic insights suggest there are two species of burbot, the European burbot (Lota lota) and the North American burbot (Lota maculosa).
As the burbot lacks popularity in commercial fishing, many regions do not even consider management plans. Pollution and habitat change, such as river damming, appear to be the primary causes for riverine burbot population declines, while pollution and the adverse effects of invasive species have the greatest influence on lacustrine populations. Management of burbot is on low priority, being nonexistent in some regions.
The Kootenai tribe of Idaho and their partners have engaged in conservation efforts for burbot populations.
Appropriate technology
Appropriate technology is a movement (and its manifestations) encompassing technological choice and application that is small-scale, affordable by its users, labor-intensive, energy-efficient, environmentally sustainable, and locally autonomous. It was originally articulated as intermediate technology by the economist Ernst Friedrich "Fritz" Schumacher in his work Small Is Beautiful. Both Schumacher and many modern-day proponents of appropriate technology also emphasize the technology as people-centered.
Appropriate technology has been used to address issues in a wide range of fields. Well-known examples of appropriate technology applications include: bike- and hand-powered water pumps (and other self-powered equipment), the bicycle, the universal nut sheller, self-contained solar lamps and streetlights, and passive solar building designs. Today appropriate technology is often developed using open source principles, which have led to open-source appropriate technology (OSAT) and thus many of the plans of the technology can be freely found on the Internet. OSAT has been proposed as a new model of enabling innovation for sustainable development.
Appropriate technology is most commonly discussed in its relationship to economic development and as an alternative to technology transfer of more capital-intensive technology from industrialized nations to developing countries. However, appropriate technology movements can be found in both developing and developed countries. In developed countries, the appropriate technology movement grew out of the energy crisis of the 1970s and focuses mainly on environmental and sustainability issues. Today the idea is multifaceted; in some contexts, appropriate technology can be described as the simplest level of technology that can achieve the intended purpose, whereas in others, it can refer to engineering that takes adequate consideration of social and environmental ramifications. The facets are connected through robustness and sustainable living.
History
Predecessors
Indian ideological leader Mahatma Gandhi is often cited as the "father" of the appropriate technology movement. Though the concept had not been given a name, Gandhi advocated for small, local and predominantly village-based technology to help India's villages become self-reliant. He disagreed with the idea of technology that benefited a minority of people at the expense of the majority or that put people out of work to increase profit. In 1925 Gandhi founded the All-India Spinners Association and in 1935 he retired from politics to form the All-India Village Industries Association. Both organizations focused on village-based technology similar to the future appropriate technology movement.
China also implemented policies similar to appropriate technology during the reign of Mao Zedong and the following Cultural Revolution. During the Cultural Revolution, development policies based on the idea of "walking on two legs" advocated the development of both large-scale factories and small-scale village industries.
E. F. Schumacher
Despite these early examples, Dr. Ernst Friedrich "Fritz" Schumacher is credited as the founder of the appropriate technology movement. A well-known economist, Schumacher worked for the British National Coal Board for more than 20 years, where he blamed the size of the industry's operations for its uncaring response to the harm black-lung disease inflicted on the miners. However it was his work with developing countries, such as India and Burma, which helped Schumacher form the underlying principles of appropriate technology.
Schumacher first articulated the idea of "intermediate technology," now known as appropriate technology, in a 1962 report to the Indian Planning Commission in which he described India as long in labor and short in capital, calling for an "intermediate industrial technology" that harnessed India's labor surplus. Schumacher had been developing the idea of intermediate technology for several years prior to the Planning Commission report. In 1955, following a stint as an economic advisor to the government of Burma, he published the short paper "Economics in a Buddhist Country," his first known critique of the effects of Western economics on developing countries. In addition to Buddhism, Schumacher also credited his ideas to Gandhi.
Initially, Schumacher's ideas were rejected by both the Indian government and leading development economists. Spurred to action over concern the idea of intermediate technology would languish, Schumacher, George McRobie, Mansur Hoda and Julia Porter brought together a group of approximately 20 people to form the Intermediate Technology Development Group (ITDG) in May 1965. Later that year, a Schumacher article published in The Observer garnered significant attention and support for the group. In 1967, the group published Tools for Progress: A Guide to Small-scale Equipment for Rural Development, which sold 7,000 copies. ITDG also formed panels of experts and practitioners around specific technological needs (such as building construction, energy and water) to develop intermediate technologies to address those needs. At a conference hosted by the ITDG in 1968 the term "intermediate technology" was discarded in favor of the term "appropriate technology" used today. Intermediate technology had been criticized as suggesting the technology was inferior to advanced (or high) technology and as omitting the social and political factors included in the concept put forth by its proponents. In 1973, Schumacher described the concept of appropriate technology to a mass audience in his influential work Small Is Beautiful: A Study of Economics As If People Mattered.
Growing trend
Between 1966 and 1975 the number of new appropriate technology organizations founded each year was three times greater than the previous nine years. There was also an increase in organizations focusing on applying appropriate technology to the problems of industrialized nations, particularly issues related to energy and the environment. In 1977, the OECD identified in its Appropriate Technology Directory 680 organizations involved in the development and promotion of appropriate technology. By 1980, this number had grown to more than 1,000. International agencies and government departments were also emerging as major innovators in appropriate technology, indicating its progression from a small movement fighting against the established norms to a legitimate technological choice supported by the establishment. For example, the Inter-American Development Bank created a Committee for the Application of Intermediate Technology in 1976 and the World Health Organization established the Appropriate Technology for Health Program in 1977.
Appropriate technology was also increasingly applied in developed countries. For example, the energy crisis of the mid-1970s led to the creation of the National Center for Appropriate Technology (NCAT) in 1977 with an initial appropriation of 3 million dollars from the U.S. Congress. The Center sponsored appropriate technology demonstrations to "help low-income communities find better ways to do things that will improve the quality of life, and that will be doable with the skills and resources at hand." However, by 1981 the NCAT's funding agency, the Community Services Administration, had been abolished. For several decades NCAT worked with the US departments of Energy and Agriculture on contract to develop appropriate technology programs. Since 2005, NCAT's informational web site has no longer been funded by the US government.
Decline
In more recent years, the appropriate technology movement has continued to decline in prominence. The German Appropriate Technology Exchange (GATE) and Holland's Technology Transfer for Development (TOOL) are examples of organizations no longer in operation. Recently, a study looked at the continued barriers to AT deployment despite the relatively low cost of transferring information in the internet age. The barriers have been identified as: AT seen as inferior or "poor person's" technology, technical transferability and robustness of AT, insufficient funding, weak institutional support, and the challenges of distance and time in tackling rural poverty.
A more free market-centric view has also begun to dominate the field. For example, Paul Polak, founder of International Development Enterprises (an organization that designs and manufactures products that follow the ideals of appropriate technology), declared appropriate technology dead in a 2010 blog post.
Polak argues the "design for the other 90 percent" movement has replaced appropriate technology. Growing out of the appropriate technology movement, designing for the other 90 percent advocates the creation of low-cost solutions for the 5.8 billion of the world's 6.8 billion population "who have little or no access to most of the products and services many of us take for granted."
Many of the ideas integral to appropriate technology can now be found in the increasingly popular "sustainable development" movement, which among many tenets advocates technological choice that meets human needs while preserving the environment for future generations. In 1983, the OECD published the results of an extensive survey of appropriate technology organizations titled, The World of Appropriate Technology, in which it defined appropriate technology as characterized by "low investment cost per work-place, low capital investment per unit of output, organizational simplicity, high adaptability to a particular social or cultural environment, sparing use of natural resources, low cost of final product or high potential for employment." Today, the OECD web site redirects from the "Glossary of Statistical Terms" entry on "appropriate technology" to "environmentally sound technologies." The United Nations' "Index to Economic and Social Development" also redirects from the "appropriate technology" entry to "sustainable development."
Potential resurgence
Despite the decline, several appropriate technology organizations are still in existence, including the ITDG which became Practical Action after a name change in 2005. Skat (Schweizerische Kontaktstelle für Angepasste Technology) adapted by becoming a private consultancy in 1998, though some Intermediate Technology activities are continued by Skat Foundation through the Rural Water Supply Network (RWSN). Another actor still very active is the charity CEAS (Centre Ecologique Albert Schweitzer). A pioneer in food transformation and solar heaters, it offers vocational training in West Africa and Madagascar. There is also currently a notable resurgence as viewed by the number of groups adopting open source appropriate technology (OSAT) because of the enabling technology of the Internet. These OSAT groups include: Akvo Foundation, Appropedia, The Appropriate Technology Collaborative, Catalytic Communities, Centre for Alternative Technology, Center For Development Alternatives, Engineers Without Borders, Open Source Ecology, Practical Action, and Village Earth. Most recently ASME, Engineers Without Borders (USA) and the IEEE have joined together to produce Engineering for Change, which facilitates the development of affordable, locally appropriate and sustainable solutions to the most pressing humanitarian challenges.
Terminology
Appropriate technology frequently serves as an umbrella term for a variety of names for this type of technology. These terms are often used interchangeably; however, the use of one term over another can indicate the specific focus, bias or agenda of the technological choice in question. Though "intermediate technology" was the original name for the concept now known as appropriate technology, it is now often considered a subset of appropriate technology that focuses on technology that is more productive than "inefficient" traditional technologies, but less costly than the technology of industrialized societies. Other types of technology under the appropriate technology umbrella include:
Capital-saving technology
Mid-tech
Labor-intensive technology
Alternate technology
Self-help technology
Village-level technology
Community technology
Progressive technology
Indigenous technology
People's technology
Light-engineering technology
Adaptive technology
Light-capital technology
Soft technology
A variety of competing definitions exist in academic literature and organization and government policy papers for each of these terms. However, the general consensus is appropriate technology encompasses the ideas represented by the above list. Furthermore, the use of one term over another in referring to an appropriate technology can indicate ideological bias or emphasis on particular economic or social variables. Some terms inherently emphasize the importance of increased employment and labor utilization (such as labor-intensive or capital-saving technology), while others may emphasize the importance of human development (such as self-help and people's technology).
It is also possible to distinguish between hard and soft technologies. According to Dr. Maurice Albertson and Audrey Faulkner, appropriate hard technology comprises "engineering techniques, physical structures, and machinery that meet a need defined by a community, and utilize the material at hand or readily available. It can be built, operated and maintained by the local people with very limited outside assistance (e.g., technical, material, or financial). It is usually related to an economic goal."
Albertson and Faulkner consider appropriate soft technology as technology that deals with "the social structures, human interactive processes, and motivation techniques. It is the structure and process for social participation and action by individuals and groups in analyzing situations, making choices and engaging in choice-implementing behaviors that bring about change."
A closely related concept is social technology, defined as "products, techniques and/or re-applicable methodologies developed in the interaction with the community and that must represent effective solution in terms of social transformation". Further, Kostakis et al. propose a mid-tech approach to distinguish between low-tech and hi-tech polarities. Inspired by E.F. Schumacher, they argue that mid-tech could be understood as an inclusive middle that may go beyond the two polarities, combining the efficiency and versatility of digital/automated technology with low-tech's potential for autonomy and resilience.
Practitioners
Some of the well known practitioners of the appropriate technology sector include:
B.V. Doshi, Buckminster Fuller, William Moyer (1933–2002), Amory Lovins, Sanoussi Diakité, Albert Bates, Victor Papanek, Giorgio Ceragioli (1930–2008), Frithjof Bergmann, Arne Næss (1912–2009), Mansur Hoda, and Laurie Baker.
Development
Schumacher's initial concept of intermediate technology was created as a critique of the currently prevailing development strategies which focused on maximizing aggregate economic growth through increases to overall measurements of a country's economy, such as gross domestic product (GDP). Developed countries became aware of the situation of developing countries during and in the years following World War II. Based on the continuing rise in income levels in Western countries since the Industrial Revolution, developed countries embarked on a campaign of massive transfers of capital and technology to developing countries in order to force a rapid industrialization intended to result in an economic "take-off" in the developing countries.
However, by the late 1960s it was becoming clear this development method had not worked as expected and a growing number of development experts and national policy makers were recognizing it as a potential cause of increasing poverty and income inequality in developing countries. In many countries, this influx of technology had increased the overall economic capacity of the country. However, it had created a dual or two-tiered economy with pronounced division between the classes. The foreign technology imports were only benefiting a small minority of urban elites. This was also increasing urbanization with the rural poor moving to urban cities in hope of more financial opportunities. The increased strain on urban infrastructures and public services led to "increasing squalor, severe impacts on public health and distortions in the social structure."
Appropriate technology was meant to address four problems: extreme poverty, starvation, unemployment and urban migration. Schumacher saw the main purpose for economic development programs was the eradication of extreme poverty and he saw a clear connection between mass unemployment and extreme poverty. Schumacher sought to shift development efforts from a bias towards urban areas and on increasing the output per laborer to focusing on rural areas (where a majority of the population still lived) and on increasing employment.
In developed countries
The term appropriate technology is also used in developed nations to describe the use of technology and engineering that result in less negative impacts on the environment and society, i.e., technology should be both environmentally sustainable and socially appropriate. E. F. Schumacher asserts that such technology, described in the book Small Is Beautiful, tends to promote values such as health, beauty and permanence, in that order.
Often the type of appropriate technology that is used in developed countries is "appropriate and sustainable technology" (AST), appropriate technology that, besides being functional and relatively cheap (though often more expensive than true AT), is durable and employs renewable resources. AT does not include this (see Sustainable design).
Applications
Determining a sustainable approach
Features such as low cost, low usage of fossil fuels and use of locally available resources can give some advantages in terms of sustainability. For that reason, these technologies are sometimes used and promoted by advocates of sustainability and alternative technology.
Besides using natural, locally available resources (e.g., wood or adobe), waste materials imported from cities using conventional (and inefficient) waste management may be gathered and re-used to build a sustainable living environment. Use of these cities' waste material allows the gathering of a huge amount of building material at a low cost. When obtained, the materials may be recycled over and over in the own city/community, using the cradle to cradle design method. Locations where waste can be found include landfills, junkyards, on water surfaces and anywhere around towns or near highways. Organic waste that can be reused to fertilise plants can be found in sewages. Also, town districts and other places (e.g., cemeteries) that are subject of undergoing renovation or removal can be used for gathering materials as stone, concrete, or potassium.
Related social movements
Community-based economics
Cosmopolitan localism
Campus Center for Appropriate Technology (CCAT)
National Center for Appropriate Technology
Alternative propulsion
Alternative technology
DIY culture
Eco-village
Frugal innovation
Jugaad
Maker Movement
Myth of Progress
Open Source Appropriate Technology
Permaculture
Practical Action (charity formerly known as Intermediate Technology)
Principles of Intelligent Urbanism
Social entrepreneurship
Sustainable development
Tools for Conviviality
Green syndicalism
Lifehacking
Small Is Beautiful
The Appropriate Technology Collaborative
| Technology | General | null |
190835 | https://en.wikipedia.org/wiki/Coevolution | Coevolution | In biology, coevolution occurs when two or more species reciprocally affect each other's evolution through the process of natural selection. The term sometimes is used for two traits in the same species affecting each other's evolution, as well as gene-culture coevolution.
Charles Darwin mentioned evolutionary interactions between flowering plants and insects in On the Origin of Species (1859). Although he did not use the word coevolution, he suggested how plants and insects could evolve through reciprocal evolutionary changes. Naturalists in the late 1800s studied other examples of how interactions among species could result in reciprocal evolutionary change. Beginning in the 1940s, plant pathologists developed breeding programs that were examples of human-induced coevolution. Development of new crop plant varieties that were resistant to some diseases favored rapid evolution in pathogen populations to overcome those plant defenses. That, in turn, required the development of yet new resistant crop plant varieties, producing an ongoing cycle of reciprocal evolution in crop plants and diseases that continues to this day.
Coevolution as a major topic for study in nature expanded rapidly from the 1960s, when Daniel H. Janzen showed coevolution between acacias and ants (see below) and Paul R. Ehrlich and Peter H. Raven suggested how coevolution between plants and butterflies may have contributed to the diversification of species in both groups. The theoretical underpinnings of coevolution are now well-developed (e.g., the geographic mosaic theory of coevolution), and demonstrate that coevolution can play an important role in driving major evolutionary transitions such as the evolution of sexual reproduction or shifts in ploidy. More recently, it has also been demonstrated that coevolution can influence the structure and function of ecological communities, the evolution of groups of mutualists such as plants and their pollinators, and the dynamics of infectious disease.
Each party in a coevolutionary relationship exerts selective pressures on the other, thereby affecting each other's evolution. Coevolution includes many forms of mutualism, host-parasite, and predator-prey relationships between species, as well as competition within or between species. In many cases, the selective pressures drive an evolutionary arms race between the species involved. Pairwise or specific coevolution, between exactly two species, is not the only possibility; in multi-species coevolution, which is sometimes called guild or diffuse coevolution, several to many species may evolve a trait or a group of traits in reciprocity with a set of traits in another species, as has happened between the flowering plants and pollinating insects such as bees, flies, and beetles. There are a suite of specific hypotheses on the mechanisms by which groups of species coevolve with each other.
Coevolution is primarily a biological concept, but researchers have applied it by analogy to fields such as computer science, sociology, and astronomy.
Mutualism
Coevolution is the evolution of two or more species which reciprocally affect each other, sometimes creating a mutualistic relationship between the species. Such relationships can be of many different types.
Flowering plants
Flowers appeared and diversified relatively suddenly in the fossil record, creating what Charles Darwin described as the "abominable mystery" of how they had evolved so quickly; he considered whether coevolution could be the explanation. He first mentioned coevolution as a possibility in On the Origin of Species, and developed the concept further in Fertilisation of Orchids (1862).
Insects and insect-pollinated flowers
Modern insect-pollinated (entomophilous) flowers are conspicuously coadapted with insects to ensure pollination and in return to reward the pollinators with nectar and pollen. The two groups have coevolved for over 100 million years, creating a complex network of interactions. Either they evolved together, or at some later stages they came together, likely with pre-adaptations, and became mutually adapted.
Several highly successful insect groups—especially the Hymenoptera (wasps, bees and ants) and Lepidoptera (butterflies and moths) as well as many types of Diptera (flies) and Coleoptera (beetles)—evolved in conjunction with flowering plants during the Cretaceous (145 to 66 million years ago). The earliest bees, important pollinators today, appeared in the early Cretaceous. A group of wasps sister to the bees evolved at the same time as flowering plants, as did the Lepidoptera. Further, all the major clades of bees first appeared between the middle and late Cretaceous, simultaneously with the adaptive radiation of the eudicots (three quarters of all angiosperms), and at the time when the angiosperms became the world's dominant plants on land.
At least three aspects of flowers appear to have coevolved between flowering plants and insects, because they involve communication between these organisms. Firstly, flowers communicate with their pollinators by scent; insects use this scent to determine how far away a flower is, to approach it, and to identify where to land and finally to feed. Secondly, flowers attract insects with patterns of stripes leading to the rewards of nectar and pollen, and colours such as blue and ultraviolet, to which their eyes are sensitive; in contrast, bird-pollinated flowers tend to be red or orange. Thirdly, flowers such as some orchids mimic females of particular insects, deceiving males into pseudocopulation.
The yucca, Yucca whipplei, is pollinated exclusively by Tegeticula maculata, a yucca moth that depends on the yucca for survival. The moth eats the seeds of the plant, while gathering pollen. The pollen has evolved to become very sticky, and remains on the mouth parts when the moth moves to the next flower. The yucca provides a place for the moth to lay its eggs, deep within the flower away from potential predators.
Birds and bird-pollinated flowers
Hummingbirds and ornithophilous (bird-pollinated) flowers have evolved a mutualistic relationship. The flowers have nectar suited to the birds' diet, their color suits the birds' vision and their shape fits that of the birds' bills. The blooming times of the flowers have also been found to coincide with hummingbirds' breeding seasons. The floral characteristics of ornithophilous plants vary greatly among each other compared to closely related insect-pollinated species. These flowers also tend to be more ornate, complex, and showy than their insect pollinated counterparts. It is generally agreed that plants formed coevolutionary relationships with insects first, and ornithophilous species diverged at a later time. There is not much scientific support for instances of the reverse of this divergence: from ornithophily to insect pollination. The diversity in floral phenotype in ornithophilous species, and the relative consistency observed in bee-pollinated species can be attributed to the direction of the shift in pollinator preference.
Flowers have converged to take advantage of similar birds. Flowers compete for pollinators, and adaptations reduce unfavourable effects of this competition. The fact that birds can fly during inclement weather makes them more efficient pollinators where bees and other insects would be inactive. Ornithophily may have arisen for this reason in isolated environments with poor insect colonization or areas with plants which flower in the winter. Bird-pollinated flowers usually have higher volumes of nectar and higher sugar production than those pollinated by insects. This meets the birds' high energy requirements, the most important determinants of flower choice. In Mimulus, an increase in red pigment in petals and flower nectar volume noticeably reduces the proportion of pollination by bees as opposed to hummingbirds; while greater flower surface area increases bee pollination. Therefore, red pigments in the flowers of Mimulus cardinalis may function primarily to discourage bee visitation. In Penstemon, flower traits that discourage bee pollination may be more influential on the flowers' evolutionary change than 'pro-bird' adaptations, but adaptation 'towards' birds and 'away' from bees can happen simultaneously. However, some flowers such as Heliconia angusta appear not to be as specifically ornithophilous as had been supposed: the species is occasionally (151 visits in 120 hours of observation) visited by Trigona stingless bees. These bees are largely pollen robbers in this case, but may also serve as pollinators.
Following their respective breeding seasons, several species of hummingbirds occur at the same locations in North America, and several hummingbird flowers bloom simultaneously in these habitats. These flowers have converged to a common morphology and color because these are effective at attracting the birds. Different lengths and curvatures of the corolla tubes can affect the efficiency of extraction in hummingbird species in relation to differences in bill morphology. Tubular flowers force a bird to orient its bill in a particular way when probing the flower, especially when the bill and corolla are both curved. This allows the plant to place pollen on a certain part of the bird's body, permitting a variety of morphological co-adaptations.
Ornithophilous flowers need to be conspicuous to birds. Birds have their greatest spectral sensitivity and finest hue discrimination at the red end of the visual spectrum, so red is particularly conspicuous to them. Hummingbirds may also be able to see ultraviolet "colors". The prevalence of ultraviolet patterns and nectar guides in nectar-poor entomophilous (insect-pollinated) flowers warns the bird to avoid these flowers. Each of the two subfamilies of hummingbirds, the Phaethornithinae (hermits) and the Trochilinae, has evolved in conjunction with a particular set of flowers. Most Phaethornithinae species are associated with large monocotyledonous herbs, while the Trochilinae prefer dicotyledonous plant species.
Fig reproduction and fig wasps
The genus Ficus is composed of 800 species of vines, shrubs, and trees, including the cultivated fig, defined by their syconia, the fruit-like vessels that either hold female flowers or pollen on the inside. Each fig species has its own fig wasp which (in most cases) pollinates the fig, so a tight mutual dependence has evolved and persisted throughout the genus.
Acacia ants and acacias
The acacia ant (Pseudomyrmex ferruginea) is an obligate plant ant that protects at least five species of "Acacia" (Vachellia) from preying insects and from other plants competing for sunlight, and the tree provides nourishment and shelter for the ant and its larvae. Such mutualism is not automatic: other ant species exploit trees without reciprocating, following different evolutionary strategies. These cheater ants impose important host costs via damage to tree reproductive organs, though their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast.
Hosts and parasites
Parasites and sexually reproducing hosts
Host–parasite coevolution is the coevolution of a host and a parasite. A general characteristic of many viruses, as obligate parasites, is that they coevolved alongside their respective hosts. Correlated mutations between the two species enter them into an evolution arms race. Whichever organism, host or parasite, that cannot keep up with the other will be eliminated from their habitat, as the species with the higher average population fitness survives. This race is known as the Red Queen hypothesis. The Red Queen hypothesis predicts that sexual reproduction allows a host to stay just ahead of its parasite, similar to the Red Queen's race in Through the Looking-Glass: "it takes all the running you can do, to keep in the same place". The host reproduces sexually, producing some offspring with immunity over its parasite, which then evolves in response.
The parasite–host relationship probably drove the prevalence of sexual reproduction over the more efficient asexual reproduction. It seems that when a parasite infects a host, sexual reproduction affords a better chance of developing resistance (through variation in the next generation), giving sexual reproduction variability for fitness not seen in the asexual reproduction, which produces another generation of the organism susceptible to infection by the same parasite. Coevolution between host and parasite may accordingly be responsible for much of the genetic diversity seen in normal populations, including blood-plasma polymorphism, protein polymorphism, and histocompatibility systems.
Brood parasites
Brood parasitism demonstrates close coevolution of host and parasite, for example in some cuckoos. These birds do not make their own nests, but lay their eggs in nests of other species, ejecting or killing the eggs and young of the host and thus having a strong negative impact on the host's reproductive fitness. Their eggs are camouflaged as eggs of their hosts, implying that hosts can distinguish their own eggs from those of intruders and are in an evolutionary arms race with the cuckoo between camouflage and recognition. Cuckoos are counter-adapted to host defences with features such as thickened eggshells, shorter incubation (so their young hatch first), and flat backs adapted to lift eggs out of the nest.
Antagonistic coevolution
Antagonistic coevolution is seen in the harvester ant species Pogonomyrmex barbatus and Pogonomyrmex rugosus, in a relationship both parasitic and mutualistic. The queens are unable to produce worker ants by mating with their own species. Only by crossbreeding can they produce workers. The winged females act as parasites for the males of the other species as their sperm will only produce sterile hybrids. But because the colonies are fully dependent on these hybrids to survive, it is also mutualistic. While there is no genetic exchange between the species, they are unable to evolve in a direction where they become too genetically different as this would make crossbreeding impossible.
Predators and prey
Predators and prey interact and coevolve: the predator to catch the prey more effectively, the prey to escape. The coevolution of the two mutually imposes selective pressures. These often lead to an evolutionary arms race between prey and predator, resulting in anti-predator adaptations.
The same applies to herbivores, animals that eat plants, and the plants that they eat. Paul R. Ehrlich and Peter H. Raven in 1964 proposed the theory of escape and radiate coevolution to describe the evolutionary diversification of plants and butterflies. In the Rocky Mountains, red squirrels and crossbills (seed-eating birds) compete for seeds of the lodgepole pine. The squirrels get at pine seeds by gnawing through the cone scales, whereas the crossbills get at the seeds by extracting them with their unusual crossed mandibles. In areas where there are squirrels, the lodgepole's cones are heavier, and have fewer seeds and thinner scales, making it more difficult for squirrels to get at the seeds. Conversely, where there are crossbills but no squirrels, the cones are lighter in construction, but have thicker scales, making it more difficult for crossbills to get at the seeds. The lodgepole's cones are in an evolutionary arms race with the two kinds of herbivore.
Competition
Both intraspecific competition, with features such as sexual conflict and sexual selection, and interspecific competition, such as between predators, may be able to drive coevolution.
Intraspecific competition can result in sexual antagonistic coevolution, an evolutionary relationship analogous to an arms race, where the evolutionary fitness of the sexes is counteracted to achieve maximum reproductive success. For example, some insects reproduce using traumatic insemination, which is disadvantageous to the female's health. During mating, males try to maximise their fitness by inseminating as many females as possible, but the more times a female's abdomen is punctured, the less likely she is to survive, reducing her fitness.
Multispecies
The types of coevolution listed so far have been described as if they operated pairwise (also called specific coevolution), in which traits of one species have evolved in direct response to traits of a second species, and vice versa. This is not always the case. Another evolutionary mode arises where evolution is reciprocal, but is among a group of species rather than exactly two. This is variously called guild or diffuse coevolution. For instance, a trait in several species of flowering plant, such as offering its nectar at the end of a long tube, can coevolve with a trait in one or several species of pollinating insects, such as a long proboscis. More generally, flowering plants are pollinated by insects from different families including bees, flies, and beetles, all of which form a broad guild of pollinators which respond to the nectar or pollen produced by flowers.
Geographic mosaic theory
Mosaic coevolution is a theory in which geographic location and community ecology shape differing coevolution between strongly interacting species in multiple populations. These populations may be separated by space and/or time. Depending on the ecological conditions, the interspecific interactions may be mutualistic or antagonistic. In mutualisms, both partners benefit from the interaction, whereas one partner generally experiences decreased fitness in antagonistic interactions. Arms races consist of two species adapting ways to "one up" the other. Several factors affect these relationships, including hot spots, cold spots, and trait mixing. Reciprocal selection occurs when a change in one partner puts pressure on the other partner to change in response. Hot spots are areas of strong reciprocal selection, while cold spots are areas with no reciprocal selection or where only one partner is present. The three constituents of geographic structure that contribute to this particular type of coevolution are: natural selection in the form of a geographic mosaic, hot spots often surrounded by cold spots, and trait remixing by means of genetic drift and gene flow. Mosaic, along with general coevolution, most commonly occurs at the population level and is driven by both the biotic and the abiotic environment. These environmental factors can constrain coevolution and affect how far it can escalate.
Outside biology
Coevolution is primarily a biological concept, but has been applied to other fields by analogy.
In algorithms
Coevolutionary algorithms are used for generating artificial life as well as for optimization, game learning and machine learning. Daniel Hillis added "co-evolving parasites" to prevent an optimization procedure from becoming stuck at local maxima. Karl Sims coevolved virtual creatures.
In architecture
The concept of coevolution was introduced in architecture by the Danish architect-urbanist Henrik Valeur as an antithesis to "star-architecture". As the curator of the Danish Pavilion at the 2006 Venice Biennale of Architecture, he created an exhibition-project on coevolution in urban development in China; it won the Golden Lion for Best National Pavilion.
At the School of Architecture, Planning and Landscape, Newcastle University, a coevolutionary approach to architecture has been defined as a design practice that engages students, volunteers and members of the local community in practical, experimental work aimed at "establishing dynamic processes of learning between users and designers."
In cosmology and astronomy
In his book The Self-organizing Universe, Erich Jantsch attributed the entire evolution of the cosmos to coevolution.
In astronomy, an emerging theory proposes that black holes and galaxies develop in an interdependent way analogous to biological coevolution.
In management and organization studies
Since year 2000, a growing number of management and organization studies discuss coevolution and coevolutionary processes. Even so, Abatecola el al. (2020) reveals a prevailing scarcity in explaining what processes substantially characterize coevolution in these fields, meaning that specific analyses about where this perspective on socio-economic change is, and where it could move toward in the future, are still missing.
In sociology
In Development Betrayed: The End of Progress and A Coevolutionary Revisioning of the Future (1994) Richard Norgaard proposes a coevolutionary cosmology to explain how social and environmental systems influence and reshape each other. In Coevolutionary Economics: The Economy, Society and the Environment (1994) John Gowdy suggests that: "The economy, society, and the environment are linked together in a coevolutionary relationship".
In technology
Computer software and hardware can be considered as two separate components but tied intrinsically by coevolution. Similarly, operating systems and computer applications, web browsers, and web applications. All these systems depend upon each other and advance through a kind of evolutionary process. Changes in hardware, an operating system or web browser may introduce new features that are then incorporated into the corresponding applications running alongside. The idea is closely related to the concept of "joint optimization" in sociotechnical systems analysis and design, where a system is understood to consist of both a "technical system" encompassing the tools and hardware used for production and maintenance, and a "social system" of relationships and procedures through which the technology is tied into the goals of the system and all the other human and organizational relationships within and outside the system. Such systems work best when the technical and social systems are deliberately developed together.
| Biology and health sciences | Basics_4 | Biology |
190919 | https://en.wikipedia.org/wiki/Sunrise | Sunrise | Sunrise (or sunup) is the moment when the upper rim of the Sun appears on the horizon in the morning, at the start of the Sun path. The term can also refer to the entire process of the solar disk crossing the horizon.
Terminology
Although the Sun appears to "rise" from the horizon, it is actually the Earth's motion that causes the Sun to appear. The illusion of a moving Sun results from Earth observers being in a rotating reference frame; this apparent motion caused many cultures to have mythologies and religions built around the geocentric model, which prevailed until astronomer Nicolaus Copernicus formulated his heliocentric model in the 16th century.
Architect Buckminster Fuller proposed the terms "sunsight" and "sunclipse" to better represent the heliocentric model, though the terms have not entered into common language.
Astronomically, sunrise occurs for only an instant, namely the moment at which the upper limb of the Sun appears tangent to the horizon. However, the term sunrise commonly refers to periods of time both before and after this point:
Twilight, the period in the morning during which the sky is brightening, but the Sun is not yet visible. The beginning of morning twilight is called astronomical dawn.
The period after the Sun rises during which striking colors and atmospheric effects are still seen. Civil twilight being the brightest, while astronomical twilight being the darkest.
Measurement
Angle with respect to horizon
The stage of sunrise known as false sunrise actually occurs before the Sun truly reaches the horizon because Earth's atmosphere refracts the Sun's image. At the horizon, the average amount of refraction is 34 arcminutes, though this amount varies based on atmospheric conditions.
Also, unlike most other solar measurements, sunrise occurs when the Sun's upper limb, rather than its center, appears to cross the horizon. The apparent radius of the Sun at the horizon is 16 arcminutes.
These two angles combine to define sunrise to occur when the Sun's center is 50 arcminutes below the horizon, or 90.83° from the zenith.
Time of day
The timing of sunrise varies throughout the year and is also affected by the viewer's latitude and longitude, altitude, and time zone. These changes are driven by the axial tilt of Earth, daily rotation of the Earth, the planet's movement in its annual elliptical orbit around the Sun, and the Earth and Moon's paired revolutions around each other. The analemma can be used to make approximate predictions of the time of sunrise.
In late winter and spring, sunrise as seen from temperate latitudes occurs earlier each day, reaching its earliest time shortly before the summer solstice; although the exact date varies by latitude. After this point, the time of sunrise gets later each day, reaching its latest shortly after the winter solstice, also varying by latitude. The offset between the dates of the solstice and the earliest or latest sunrise time is caused by the eccentricity of Earth's orbit and the tilt of its axis, and is described by the analemma, which can be used to predict the dates.
Variations in atmospheric refraction can alter the time of sunrise by changing its apparent position. Near the poles, the time-of-day variation is extreme, since the Sun crosses the horizon at a very shallow angle and thus rises more slowly.
Accounting for atmospheric refraction and measuring from the leading edge slightly increases the average duration of day relative to night. The sunrise equation, however, which is used to derive the time of sunrise and sunset, uses the Sun's physical center for calculation, neglecting atmospheric refraction and the non-zero angle subtended by the solar disc.
Location on the horizon
Neglecting the effects of refraction and the Sun's non-zero size, whenever sunrise occurs, in temperate regions it is always in the northeast quadrant from the March equinox to the September equinox and in the southeast quadrant from the September equinox to the March equinox. Sunrises occur approximately due east on the March and September equinoxes for all viewers on Earth. Exact calculations of the azimuths of sunrise on other dates are complex, but they can be estimated with reasonable accuracy by using the analemma.
The figure on the right is calculated using the solar geometry routine in Ref. as follows:
For a given latitude and a given date, calculate the declination of the Sun using longitude and solar noon time as inputs to the routine;
Calculate the sunrise hour angle using the sunrise equation;
Calculate the sunrise time, which is the solar noon time minus the sunrise hour angle in degree divided by 15;
Use the sunrise time as input to the solar geometry routine to get the solar azimuth angle at sunrise.
Hemispheric symmetry
An interesting feature in the figure on the right is apparent hemispheric symmetry in regions where daily sunrise and sunset actually occur.
This symmetry becomes clear if the hemispheric relation in to the sunrise equation is applied to the x- and y-components of the solar vector presented in Ref.
Appearance
Colors
Air molecules and airborne particles scatter white sunlight as it passes through the Earth's atmosphere. This is done by a combination of Rayleigh scattering and Mie scattering.
As a ray of white sunlight travels through the atmosphere to an observer, some of the colors are scattered out of the beam by air molecules and airborne particles, changing the final color of the beam the viewer sees. Because the shorter wavelength components, such as blue and green, scatter more strongly, these colors are preferentially removed from the beam.
At sunrise and sunset, when the path through the atmosphere is longer, the blue and green components are removed almost completely, leaving the longer-wavelength orange and red hues seen at those times. The remaining reddened sunlight can then be scattered by cloud droplets and other relatively large particles to light up the horizon red and orange. The removal of the shorter wavelengths of light is due to Rayleigh scattering by air molecules and particles much smaller than the wavelength of visible light (less than 50 nm in diameter). The scattering by cloud droplets and other particles with diameters comparable to or larger than the sunlight's wavelengths (more than 600 nm) is due to Mie scattering and is not strongly wavelength-dependent. Mie scattering is responsible for the light scattered by clouds, and also for the daytime halo of white light around the Sun (forward scattering of white light).
Sunset colors are typically more brilliant than sunrise colors, because the evening air contains more particles than morning air. Ash from volcanic eruptions, trapped within the troposphere, tends to mute sunset and sunrise colors, while volcanic ejecta that is instead lofted into the stratosphere (as thin clouds of tiny sulfuric acid droplets), can yield beautiful post-sunset colors called afterglows and pre-sunrise glows. A number of eruptions, including those of Mount Pinatubo in 1991 and Krakatoa in 1883, have produced sufficiently high stratospheric sulfuric acid clouds to yield remarkable sunset afterglows (and pre-sunrise glows) around the world. The high altitude clouds serve to reflect strongly reddened sunlight still striking the stratosphere after sunset, down to the surface.
Optical illusions and other phenomena
Atmospheric refraction causes the Sun to be seen while it is still below the horizon.
Light from the lower edge of the Sun's disk is refracted more than light from the upper edge. This reduces the apparent height of the Sun when it appears just above the horizon. The width is not affected, so the Sun appears wider than it is high.
The Sun appears larger at sunrise than it does while higher in the sky, in a manner similar to the Moon illusion.
The Sun appears to rise above the horizon and circle the Earth, but it is actually the Earth that is rotating, with the Sun remaining fixed. This effect results from the fact that an observer on Earth is in a rotating reference frame.
Occasionally a false sunrise occurs, demonstrating a very particular kind of parhelion belonging to the optical phenomenon family of halos.
Sometimes just before sunrise or after sunset, a green flash can be seen. This is an optical phenomenon in which a green spot is visible above the Sun, usually for no more than a second or two.
Sunset

Sunset (or sundown) is the disappearance of the Sun at the end of the Sun path, below the horizon of the Earth (or any other astronomical object in the Solar System) due to its rotation. As viewed from everywhere on Earth, it is a phenomenon that happens approximately once every 24 hours, except in areas close to the poles. The equinox Sun sets due west at the moment of both the spring and autumn equinoxes. As viewed from the Northern Hemisphere, the Sun sets to the northwest (or not at all) in the spring and summer, and to the southwest in the autumn and winter; these seasons are reversed for the Southern Hemisphere.
The time of actual sunset is defined in astronomy as the moment when the upper limb of the Sun disappears below the horizon. Near the horizon, atmospheric refraction causes sunlight rays to be distorted to such an extent that geometrically the solar disk is already about one diameter below the horizon when a sunset is observed.
Sunset is distinct from twilight, which is divided into three stages. The first one is civil twilight, which begins once the Sun has disappeared below the horizon, and continues until it descends to 6 degrees below the horizon. The early to intermediate stages of twilight coincide with predusk. The second phase is nautical twilight, between 6 and 12 degrees below the horizon. The third phase is astronomical twilight, which is the period when the Sun is between 12 and 18 degrees below the horizon. Dusk is at the very end of astronomical twilight, and is the darkest moment of twilight just before night. Finally, night occurs when the Sun reaches 18 degrees below the horizon and no longer illuminates the sky.
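Because the three twilight stages are defined purely by the depth of the Sun's centre below the horizon, the phase of the sky can be classified mechanically from the solar elevation. A minimal sketch (function name is illustrative; boundary values are assigned to the darker stage):

```python
def twilight_phase(solar_elevation_deg: float) -> str:
    """Map the Sun's elevation (degrees; negative = below horizon) to a sky phase."""
    if solar_elevation_deg >= 0:
        return "day"
    if solar_elevation_deg > -6:
        return "civil twilight"
    if solar_elevation_deg > -12:
        return "nautical twilight"
    if solar_elevation_deg > -18:
        return "astronomical twilight"
    return "night"
```
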
Locations further north than the Arctic Circle and further south than the Antarctic Circle experience no full sunset or sunrise on at least one day of the year, when the polar day or the polar night persists continuously for 24 hours. At latitudes within half a degree of either pole, the Sun cannot rise and set on the same date on any day of the year, since its angular elevation changes by less than one degree between solar noon and midnight.
Occurrence
The time of sunset varies throughout the year and is determined by the viewer's position on Earth, specified by latitude and longitude, altitude, and time zone. Small daily changes and noticeable semi-annual changes in the timing of sunsets are driven by the axial tilt of the Earth, daily rotation of the Earth, the planet's movement in its annual elliptical orbit around the Sun, and the Earth and Moon's paired revolutions around each other. During winter and spring, the days get longer and sunsets occur later every day until the day of the latest sunset, which occurs after the summer solstice. In the Northern Hemisphere, the latest sunset occurs late in June or in early July, but not on the summer solstice of June 21. This date depends on the viewer's latitude (connected with the Earth's slower movement around the aphelion around July 4). Likewise, the earliest sunset does not occur on the winter solstice, but rather about two weeks earlier, again depending on the viewer's latitude. In the Northern Hemisphere, it occurs in early December or late November (influenced by the Earth's faster movement near its perihelion, which occurs around January 3).
Likewise, the same phenomenon exists in the Southern Hemisphere, but with the respective dates reversed, with the earliest sunsets occurring some time before June 21 in winter, and the latest sunsets occurring some time after December 21 in summer, again depending on one's southern latitude. For a few weeks surrounding both solstices, both sunrise and sunset get slightly later each day. Even on the equator, sunrise and sunset shift several minutes back and forth through the year, along with solar noon. These effects are plotted by an analemma.
Neglecting atmospheric refraction and the Sun's non-zero size, whenever and wherever sunset occurs, it is always in the northwest quadrant from the March equinox to the September equinox, and in the southwest quadrant from the September equinox to the March equinox. Sunsets occur almost exactly due west on the equinoxes for all viewers on Earth. Exact calculations of the azimuths of sunset on other dates are complex, but they can be estimated with reasonable accuracy by using the analemma.
As sunrise and sunset are calculated from the leading and trailing edges of the Sun, respectively, and not the center, the duration of a daytime is slightly longer than nighttime (by about 10 minutes, as seen from temperate latitudes). Further, because the light from the Sun is refracted as it passes through the Earth's atmosphere, the Sun is still visible after it is geometrically below the horizon. Refraction also affects the apparent shape of the Sun when it is very close to the horizon. It makes things appear higher in the sky than they really are. Light from the bottom edge of the Sun's disk is refracted more than light from the top, since refraction increases as the angle of elevation decreases. This raises the apparent position of the bottom edge more than the top, reducing the apparent height of the solar disk. Its width is unaltered, so the disk appears wider than it is high. (In reality, the Sun is almost exactly spherical.) The Sun also appears larger on the horizon, an optical illusion, similar to the moon illusion.
Locations within the Arctic and Antarctic Circles experience periods where the Sun does not rise or set for 24 hours or more, known as polar day and polar night. These phenomena occur due to Earth’s axial tilt, causing continuous sunlight or darkness at certain times of the year.
Location on the horizon
Approximate locations of sunset on the horizon (azimuth) as described above can be found in Refs.
The figure on the right is calculated using the solar geometry routine as follows:
For a given latitude and a given date, calculate the declination of the Sun using longitude and solar noon time as inputs to the routine;
Calculate the sunset hour angle using the sunset equation;
Calculate the sunset time, which is the solar noon time plus the sunset hour angle in degrees divided by 15;
Use the sunset time as input to the solar geometry routine to get the solar azimuth angle at sunset.
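The steps above can be sketched in code. This is a simplified model, not the routine cited in the text: it takes the declination as given, ignores refraction, the solar disc's finite size, and the equation of time, and the function names are illustrative.

```python
import math

def solar_position(lat_deg, decl_deg, hour_angle_deg):
    """Solar elevation and azimuth (degrees) for a latitude, declination, hour angle."""
    lat, decl, ha = (math.radians(x) for x in (lat_deg, decl_deg, hour_angle_deg))
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(ha))
    elev = math.asin(sin_elev)
    cos_az = (math.sin(decl) - math.sin(lat) * sin_elev) / (math.cos(lat) * math.cos(elev))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))  # azimuth from north
    if hour_angle_deg > 0:   # afternoon hour angles lie west of the meridian
        az = 360.0 - az
    return math.degrees(elev), az

def sunset(lat_deg, decl_deg, solar_noon=12.0):
    """Sunset hour angle, local solar time of sunset, and azimuth at sunset."""
    cos_ha = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl_deg))
    if not -1.0 <= cos_ha <= 1.0:
        return None                       # polar day or polar night: no sunset
    ha = math.degrees(math.acos(cos_ha))  # sunset hour angle (sunset equation)
    t_set = solar_noon + ha / 15.0        # hour angle in degrees divided by 15
    _, az = solar_position(lat_deg, decl_deg, ha)
    return ha, t_set, az
```

At latitude 40° N on an equinox (declination 0°) this yields a sunset hour angle of 90°, sunset at 18:00 local solar time, and an azimuth of 270° (due west); at the June solstice (declination about 23.44°) the azimuth moves into the northwest quadrant, consistent with the seasonal behaviour described above.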
An interesting feature in the figure on the right is the apparent hemispheric symmetry in regions where daily sunrise and sunset actually occur. This symmetry becomes clear if the hemispheric relation in the sunrise equation is applied to the x- and y-components of the solar vector presented in Ref. Solar geometry routines allow the solar azimuth angle at sunset to be calculated precisely from latitude, date, and time.
Colors
As a ray of white sunlight travels through the atmosphere to an observer, some of the colors are scattered out of the beam by air molecules and airborne particles, changing the final color of the beam the viewer sees.
Because the shorter wavelength components, such as blue and green, scatter more strongly, these colors are preferentially removed from the beam. At sunrise and sunset, when the path through the atmosphere is longer, the blue and green components are removed almost completely, leaving the longer wavelength orange and red hues we see at those times. The remaining reddened sunlight can then be scattered by cloud droplets and other relatively large particles to light up the horizon red and orange. The removal of the shorter wavelengths of light is due to Rayleigh scattering by air molecules and particles much smaller than the wavelength of visible light (less than 50 nm in diameter). The scattering by cloud droplets and other particles with diameters comparable to or larger than the sunlight's wavelengths (> 600 nm) is due to Mie scattering and is not strongly wavelength-dependent. Mie scattering is responsible for the light scattered by clouds, and also for the daytime halo of white light around the Sun (forward scattering of white light).
Sunset colors are typically more brilliant than sunrise colors, because the evening air contains more particles than morning air. Sometimes just before sunrise or after sunset a green flash can be seen.
Ash from volcanic eruptions, trapped within the troposphere, tends to mute sunset and sunrise colors, while volcanic ejecta that is instead lofted into the stratosphere (as thin clouds of tiny sulfuric acid droplets) can yield beautiful post-sunset colors called afterglows and pre-sunrise glows. A number of eruptions, including those of Mount Pinatubo in 1991 and Krakatoa in 1883, have produced sufficiently high stratospheric sulfuric acid clouds to yield remarkable sunset afterglows (and pre-sunrise glows) around the world. The high-altitude clouds serve to reflect strongly reddened sunlight still striking the stratosphere after sunset, down to the surface.
Some of the most varied colors at sunset can be found in the opposite or eastern sky after the Sun has set during twilight. Depending on weather conditions and the types of clouds present, these colors have a wide spectrum, and can produce unusual results.
Names of compass points
In some languages, points of the compass bear names etymologically derived from words for sunrise and sunset. The English words "orient" and "occident", meaning "east" and "west", respectively, are descended from Latin words meaning "sunrise" and "sunset". The word "levant", related e.g. to French "(se) lever" meaning "lift" or "rise" (and also to English "elevate"), is also used to describe the east. In Polish, the word for east wschód (vskhud), is derived from the morpheme "ws" – meaning "up", and "chód" – signifying "move" (from the verb chodzić – meaning "walk, move"), due to the act of the Sun coming up from behind the horizon. The Polish word for west, zachód (zakhud), is similar but with the word "za" at the start, meaning "behind", from the act of the Sun going behind the horizon. In Russian, the word for west, запад (zapad), is derived from the words за – meaning "behind", and пад – signifying "fall" (from the verb падать – padat'), due to the act of the Sun falling behind the horizon. In Hebrew, the word for east is 'מזרח', which derives from the word for rising, and the word for west is 'מערב', which derives from the word for setting.
Historical view
The 16th-century astronomer Nicolaus Copernicus was the first to present to the world a detailed and eventually widely accepted mathematical model supporting the premise that the Earth is moving and the Sun actually stays still, despite the impression from our point of view of a moving Sun.
Planets
Sunsets on other planets appear different because of differences in the distance of the planet from the Sun and non-existent or differing atmospheric compositions.
Mars
On Mars, the setting Sun appears about two-thirds the size it does from Earth, due to the greater distance between Mars and the Sun. The colors are typically hues of blue, but some Martian sunsets last significantly longer and appear far redder than is typical on Earth.
The colors of the Martian sunset differ from those on Earth. Mars has a thin atmosphere, lacking oxygen and nitrogen, so the light scattering is not dominated by a Rayleigh scattering process. Instead, the air is full of red dust, blown into the atmosphere by high winds, so its sky color is mainly determined by a Mie scattering process, resulting in more blue hues than an Earth sunset. One study also reported that Martian dust high in the atmosphere can reflect sunlight up to two hours after the Sun has set, casting a diffuse glow across the surface of Mars.
Lignin

Lignin is a class of complex organic polymers that form key structural materials in the support tissues of most plants. Lignins are particularly important in the formation of cell walls, especially in wood and bark, because they lend rigidity and do not rot easily. Chemically, lignins are polymers made by cross-linking phenolic precursors.
History
Lignin was first mentioned in 1813 by the Swiss botanist A. P. de Candolle, who described it as a fibrous, tasteless material, insoluble in water and alcohol but soluble in weak alkaline solutions, and which can be precipitated from solution using acid. He named the substance "lignine", which is derived from the Latin word lignum, meaning wood. It is one of the most abundant organic polymers on Earth, exceeded only by cellulose and chitin. Lignin constitutes 30% of terrestrial non-fossil organic carbon on Earth, and 20 to 35% of the dry mass of wood.
Lignin is present in red algae, which suggests that the common ancestor of plants and red algae may have been pre-adapted to synthesize lignin. This finding also suggests that the original function of lignin may have been structural, as it plays this role in the red alga Calliarthron, where it supports joints between calcified segments.
Composition and structure
The composition of lignin varies from species to species. An example of composition from an aspen sample is 63.4% carbon, 5.9% hydrogen, 0.7% ash (mineral components), and 30% oxygen (by difference), corresponding approximately to the formula (C31H34O11)n.
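The quoted percentages can be checked against the empirical formula with standard atomic masses; the small excess in calculated carbon reflects the 0.7% ash and rounding in the measured values. A quick sketch (helper name is illustrative):

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # standard atomic masses (u)

def mass_percent(formula_counts: dict) -> dict:
    """Mass percentage of each element in a repeat unit such as C31H34O11."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / total for el, n in formula_counts.items()}

pct = mass_percent({"C": 31, "H": 34, "O": 11})
# pct["C"] ~ 63.9, pct["H"] ~ 5.9, pct["O"] ~ 30.2, close to the measured 63.4/5.9/30
```
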
Lignin is a collection of highly heterogeneous polymers derived from a handful of precursor lignols. Heterogeneity arises from the diversity and degree of crosslinking between these lignols. The lignols that crosslink are of three main types, all derived from phenylpropane: coniferyl alcohol (3-methoxy-4-hydroxyphenylpropane; its radical, G, is sometimes called guaiacyl), sinapyl alcohol (3,5-dimethoxy-4-hydroxyphenylpropane; its radical, S, is sometimes called syringyl), and paracoumaryl alcohol (4-hydroxyphenylpropane; its radical, H, is sometimes called 4-hydroxyphenyl).
The relative amounts of the precursor "monomers" (lignols or monolignols) vary according to the plant source. Lignins are typically classified according to their syringyl/guaiacyl (S/G) ratio. Lignin from gymnosperms is derived from the coniferyl alcohol, which gives rise to G upon pyrolysis. In angiosperms some of the coniferyl alcohol is converted to S. Thus, lignin in angiosperms has both G and S components.
Lignin's molecular masses exceed 10,000 u. It is hydrophobic as it is rich in aromatic subunits. The degree of polymerisation is difficult to measure, since the material is heterogeneous. Different types of lignin have been described depending on the means of isolation.
Many grasses have mostly G, while some palms have mainly S. All lignins contain small amounts of incomplete or modified monolignols, and other monomers are prominent in non-woody plants.
Biological function
Lignin fills the spaces in the cell wall between cellulose, hemicellulose, and pectin components, especially in vascular and support tissues: xylem tracheids, vessel elements and sclereid cells.
Lignin plays a crucial part in conducting water and aqueous nutrients in plant stems. The polysaccharide components of plant cell walls are highly hydrophilic and thus permeable to water, whereas lignin is more hydrophobic. The crosslinking of polysaccharides by lignin is an obstacle for water absorption to the cell wall. Thus, lignin makes it possible for the plant's vascular tissue to conduct water efficiently. Lignin is present in all vascular plants, but not in bryophytes, supporting the idea that the original function of lignin was restricted to water transport.
It is covalently linked to hemicellulose and therefore cross-links different plant polysaccharides, conferring mechanical strength to the cell wall and by extension the plant as a whole. Its most commonly noted function is the support through strengthening of wood (mainly composed of xylem cells and lignified sclerenchyma fibres) in vascular plants.
Finally, lignin also confers disease resistance by accumulating at the site of pathogen infiltration, making the plant cell less accessible to cell wall degradation.
Economic significance
Global commercial production of lignin is a consequence of papermaking. In 1988, more than 220 million tons of paper were produced worldwide. Much of this paper was delignified; lignin comprises about 1/3 of the mass of lignocellulose, the precursor to paper. Lignin is an impediment to papermaking as it is colored, it yellows in air, and its presence weakens the paper. Once separated from the cellulose, it is burned as fuel. Only a fraction is used in a wide range of low volume applications where the form but not the quality is important.
Mechanical, or high-yield pulp, which is used to make newsprint, still contains most of the lignin originally present in the wood. This lignin is responsible for newsprint's yellowing with age. High quality paper requires the removal of lignin from the pulp. These delignification processes are core technologies of the papermaking industry as well as the source of significant environmental concerns.
In sulfite pulping, lignin is removed from wood pulp as lignosulfonates, for which many applications have been proposed. They are used as dispersants, humectants, emulsion stabilizers, and sequestrants (water treatment). Lignosulfonate was also the first family of water reducers or superplasticizers, added from the 1930s as an admixture to fresh concrete in order to decrease the water-to-cement (w/c) ratio, the main parameter controlling concrete porosity, and thus its mechanical strength, its diffusivity and its hydraulic conductivity, all parameters essential for its durability. It is also used as an environmentally sustainable dust suppression agent for roads. In addition, lignin can be used with cellulose to make biodegradable plastic as an alternative to hydrocarbon-derived plastics, if lignin extraction is achieved through a more environmentally viable process than generic plastic manufacturing.
Lignin removed by the kraft process is usually burned for its fuel value, providing energy to power the paper mill. Two commercial processes exist to remove lignin from black liquor for higher value uses: LignoBoost (Sweden) and LignoForce (Canada). Higher quality lignin presents the potential to become a renewable source of aromatic compounds for the chemical industry, with an addressable market of more than $130bn.
Given that it is the most prevalent biopolymer after cellulose, lignin has been investigated as a feedstock for biofuel production and can become a crucial plant extract in the development of a new class of biofuels.
Biosynthesis
Lignin biosynthesis begins in the cytosol with the synthesis of glycosylated monolignols from the amino acid phenylalanine. These first reactions are shared with the phenylpropanoid pathway. The attached glucose renders them water-soluble and less toxic. Once transported through the cell membrane to the apoplast, the glucose is removed, and the polymerisation commences. Much about its anabolism is not understood even after more than a century of study.
The polymerisation step, that is a radical-radical coupling, is catalysed by oxidative enzymes. Both peroxidase and laccase enzymes are present in the plant cell walls, and it is not known whether one or both of these groups participates in the polymerisation. Low molecular weight oxidants might also be involved. The oxidative enzyme catalyses the formation of monolignol radicals. These radicals are often said to undergo uncatalyzed coupling to form the lignin polymer. An alternative theory invokes an unspecified biological control.
Biodegradation
In contrast to other bio-polymers (e.g. proteins, DNA, and even cellulose), lignin resists degradation. It is immune to both acid- and base-catalyzed hydrolysis. The degradability varies with species and plant tissue type. For example, syringyl (S) lignin is more susceptible to degradation by fungal decay as it has fewer aryl-aryl bonds and a lower redox potential than guaiacyl units. Because it is cross-linked with the other cell wall components, lignin minimizes the accessibility of cellulose and hemicellulose to microbial enzymes, leading to a reduced digestibility of biomass.
Some ligninolytic enzymes include heme peroxidases such as lignin peroxidases, manganese peroxidases, versatile peroxidases, and dye-decolourizing peroxidases as well as copper-based laccases. Lignin peroxidases oxidize non-phenolic lignin, whereas manganese peroxidases only oxidize the phenolic structures. Dye-decolorizing peroxidases, or DyPs, exhibit catalytic activity on a wide range of lignin model compounds, but their in vivo substrate is unknown. In general, laccases oxidize phenolic substrates but some fungal laccases have been shown to oxidize non-phenolic substrates in the presence of synthetic redox mediators.
Lignin degradation by fungi
Well-studied ligninolytic enzymes are found in Phanerochaete chrysosporium and other white rot fungi. Some white rot fungi, such as Ceriporiopsis subvermispora, can degrade the lignin in lignocellulose, but others lack this ability. Most fungal lignin degradation involves secreted peroxidases. Many fungal laccases are also secreted, which facilitate degradation of phenolic lignin-derived compounds, although several intracellular fungal laccases have also been described. An important aspect of fungal lignin degradation is the activity of accessory enzymes to produce the H2O2 required for the function of lignin peroxidase and other heme peroxidases.
Lignin degradation by bacteria
Bacteria lack most of the enzymes employed by fungi to degrade lignin, and lignin derivatives (aliphatic acids, furans, and solubilized phenolics) inhibit the growth of bacteria. Yet, bacterial degradation can be quite extensive, especially in aquatic systems such as lakes, rivers, and streams, where inputs of terrestrial material (e.g. leaf litter) can enter waterways. The ligninolytic activity of bacteria has not been studied extensively even though it was first described in 1930. Many bacterial DyPs have been characterized. Bacteria do not express any of the plant-type peroxidases (lignin peroxidase, Mn peroxidase, or versatile peroxidases), but three of the four classes of DyP are only found in bacteria. In contrast to fungi, most bacterial enzymes involved in lignin degradation are intracellular, including two classes of DyP and most bacterial laccases.
In the environment, lignin can be degraded either biotically via bacteria or abiotically via photochemical alteration, and oftentimes the latter assists the former. In addition to the presence or absence of light, several environmental factors affect the biodegradability of lignin, including bacterial community composition, mineral associations, and redox state.
In shipworms, the lignin they ingest is digested by "Alteromonas-like sub-group" bacterial symbionts in the typhlosole sub-organ of the cecum.
Pyrolysis
Pyrolysis of lignin during the combustion of wood or charcoal production yields a range of products, of which the most characteristic ones are methoxy-substituted phenols. Of those, the most important are guaiacol and syringol and their derivatives. Their presence can be used to trace a smoke source to a wood fire. In cooking, lignin in the form of hardwood is an important source of these two compounds, which impart the characteristic aroma and taste to smoked foods such as barbecue. The main flavor compounds of smoked ham are guaiacol, and its 4-, 5-, and 6-methyl derivatives as well as 2,6-dimethylphenol. These compounds are produced by thermal breakdown of lignin in the wood used in the smokehouse.
Chemical analysis
The conventional method for lignin quantitation in the pulp industry is the Klason lignin and acid-soluble lignin test, a standardized procedure. The cellulose is digested thermally in the presence of acid, and the residue is termed Klason lignin. Acid-soluble lignin (ASL) is quantified by the intensity of its ultraviolet absorption. The carbohydrate composition may also be analyzed from the Klason liquors, although there may be sugar breakdown products (furfural and 5-hydroxymethylfurfural).
A solution of hydrochloric acid and phloroglucinol is used for the detection of lignin (Wiesner test). A brilliant red color develops, owing to the presence of coniferaldehyde groups in the lignin.
Thioglycolysis is an analytical technique for lignin quantitation. Lignin structure can also be studied by computational simulation.
Thermochemolysis (chemical breakdown of a substance under vacuum and at high temperature) with tetramethylammonium hydroxide (TMAH) or cupric oxide has also been used to characterize lignins. The ratio of syringyl lignol (S) to vanillyl lignol (V) and cinnamyl lignol (C) to vanillyl lignol (V) is variable based on plant type and can therefore be used to trace plant sources in aquatic systems (woody vs. non-woody and angiosperm vs. gymnosperm). Ratios of carboxylic acid (Ad) to aldehyde (Al) forms of the lignols (Ad/Al) reveal diagenetic information, with higher ratios indicating a more highly degraded material. Increases in the (Ad/Al) value indicate that an oxidative cleavage reaction has occurred on the alkyl lignin side chain, which has been shown to be a step in the decay of wood by many white-rot and some soft-rot fungi.
Lignin and its models have been well examined by 1H and 13C NMR spectroscopy. Owing to the structural complexity of lignins, the spectra are poorly resolved and quantitation is challenging.
Dawn

Dawn is the time that marks the beginning of twilight before sunrise. It is recognized by the appearance of indirect sunlight being scattered in Earth's atmosphere, when the centre of the Sun's disc has reached 18° below the observer's horizon. This morning twilight period will last until sunrise (when the Sun's upper limb breaks the horizon), when direct sunlight outshines the diffused light.
Etymology
"Dawn" derives from the Old English verb , "to become day".
Types of dawn
Dawn begins with the first sight of lightness in the morning, and continues until the Sun breaks the horizon. The morning twilight is divided into three phases, which are determined by the angular distance of the centre of the Sun (degrees below the horizon) in the morning. These are astronomical, nautical and civil twilight.
Astronomical dawn
Astronomical dawn begins when the center of the Sun is 18 degrees below the horizon in the morning, and marks the start of astronomical twilight, which lasts until the center of the Sun reaches 12 degrees below the horizon (nautical dawn). At this point, a very small portion of the Sun's rays illuminates the sky and the fainter stars begin to disappear. Astronomical dawn is often indistinguishable from night, especially in areas with light pollution.
Nautical dawn
Nautical twilight begins when there is enough light for sailors to distinguish the horizon at sea, but the sky is still too dark to perform outdoor activities. It begins when the center of the Sun is 12 degrees below the horizon in the morning. Nautical dawn marks the start of nautical twilight, which lasts until civil dawn.
Civil dawn
Civil dawn begins when there is enough light for most objects to be distinguishable, so that some outdoor activities can commence. It occurs when the center of the Sun is 6 degrees below the horizon in the morning.
When the sky is clear it appears blue, and if there are clouds or haze, bronze, orange and yellow colors are seen. Some bright stars and planets such as Venus and Jupiter are still visible to the naked eye at civil dawn. This moment marks the start of civil twilight, which lasts until sunrise.
Effects of latitude
The duration of the morning twilight (i.e. between astronomical dawn and sunrise) varies greatly depending on the observer's latitude: from a little over 70 minutes at the Equator, to many hours in the polar regions.
The Equator
The period of twilight is shortest at the Equator, where the equinox Sun rises due east and sets due west, at a right angle to the horizon. Each stage of twilight (civil, nautical, and astronomical) lasts only 24 minutes. From anywhere on Earth, the twilight period is shortest around the equinoxes and longest on the solstices.
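The 24-minute figure follows directly from the geometry: at the equator on an equinox the Sun crosses the horizon vertically, descending at the full 15° per hour of apparent solar motion, so each 6° twilight band takes 6/15 of an hour. As a quick check:

```python
degrees_per_hour = 360 / 24        # apparent solar motion: 15 degrees per hour
stage_width_deg = 6                # each twilight stage spans 6 degrees of depression
stage_minutes = stage_width_deg / degrees_per_hour * 60
# 6 / 15 of an hour = 24 minutes per stage when the Sun descends vertically
```
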
Polar regions
Daytime becomes longer as the summer solstice approaches, while nighttime gets longer as the winter solstice approaches, which affects the times and durations of dawn and dusk. This effect is more pronounced closer to the poles, where the Sun rises at the vernal equinox and sets at the autumn equinox, with a long period of twilight lasting for a few weeks.
The polar circle (at north or south) is defined as the lowest latitude at which the Sun does not set at the summer solstice. Therefore, the angular radius of the polar circle is equal to the angle between Earth's equatorial plane and the ecliptic plane. This period of time with no sunset lengthens closer to the pole.
Near the summer solstice, latitudes higher than about 54°34′ get no darker than nautical twilight; the "darkness of the night" varies greatly at these latitudes.
At latitudes higher than about 60°34′, summer nights get no darker than civil twilight. This period of "bright nights" is longer at higher latitudes.
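These latitude thresholds follow from the Sun's depression at local solar midnight: since sin(elevation) at midnight equals −cos(latitude + declination), the midnight depression is 90° − latitude − declination. Taking the solstice declination as about 23.44° and solving for depressions of 12° and 6° reproduces the figures above. A sketch of that calculation (function names are illustrative):

```python
SOLSTICE_DECLINATION = 23.44  # degrees; approximate obliquity of the ecliptic

def midnight_depression(latitude_deg: float,
                        decl_deg: float = SOLSTICE_DECLINATION) -> float:
    """Sun's angle below the horizon at local solar midnight, in degrees."""
    # From sin(h) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(180 deg) = -cos(lat + decl)
    return 90.0 - latitude_deg - decl_deg

def bright_night_threshold(max_depression_deg: float) -> float:
    """Lowest latitude at which the solstice Sun stays within the given depression."""
    return 90.0 - SOLSTICE_DECLINATION - max_depression_deg

# bright_night_threshold(12) ~ 54.56 deg (54 deg 34'): no darker than nautical twilight
# bright_night_threshold(6)  ~ 60.56 deg (60 deg 34'): no darker than civil twilight
```
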
Example
Around the summer solstice, Glasgow, Scotland at 55°51′ N, and Copenhagen, Denmark at 55°40′ N, get a few hours of "night feeling". Oslo, Norway at 59°56′ N, and Stockholm, Sweden at 59°19′ N, seem very bright when the Sun is below the horizon. When the Sun gets 9.0 to 9.5 degrees below the horizon (at summer solstice this is at latitudes 57°30′–57°00′), the zenith gets dark even on cloud-free nights (if there is no full moon), and the brightest stars are clearly visible in a large majority of the sky.
Mythology and religion
In Islam, Zodiacal Light (or "false dawn") is referred to as False Morning (, Persian ) and Astronomical dawn is called () or True Morning (, Persian ), and it is the time of first prayer of the day, and the beginning of the daily fast during Ramadan.
Many Indo-European mythologies have a dawn goddess, separate from the male Solar deity, her name deriving from PIE *h2ausos-, derivations of which include Greek Eos, Roman Aurora and Indian Ushas. Also related is Lithuanian Aušrinė, and possibly a Germanic *Austrōn- (whence the term Easter).
In Sioux mythology, Anpao is an entity with two faces.
The Hindu dawn deity Ushas is female, whereas Surya, the Sun, and Aruṇa, the Sun's charioteer, are male. Ushas is one of the most prominent Rigvedic deities. The time of dawn is also referred to as the brahmamuhurta (Brahma is the god of creation and muhurta is a Hindu time of the day), and is considered an ideal time to perform spiritual activities, including meditation and yoga. In some parts of India, both Usha and Pratyusha (dusk) are worshipped along with the Sun during the festival of Chhath.
Jesus in the Bible is often symbolized by dawn in the morning, also when Jesus rose on the third day it happened during the morning. Prime is the fixed time of prayer of the traditional Divine Office (Canonical Hours) in Christian liturgy, said at the first hour of daylight. Associated with Jesus, in Christianity, Christian burials take place in the direction of dawn.
In Judaism, the question of how to calculate dawn (Hebrew Alos/ HaShachar, or Alos/) is posed by the Talmud, as it has many ramifications for Jewish law (such as the possible start time for certain daytime commandments, like prayer). The simple reading of the Talmud is that dawn takes place 72 minutes before sunrise. Others, including the Vilna Gaon, have the understanding that the Talmud's timeframe for dawn was referring specifically to an equinox day in Mesopotamia, and is therefore teaching that dawn should be calculated daily as commencing when the Sun is 16.1 degrees below the horizon. The longstanding practice among most Sephardic Jews is to follow the first opinion, while many Ashkenazi Jews follow the latter view.
In art
In literature
Homer uses the stock epithet "rosy-fingered Dawn" frequently in The Iliad and The Odyssey
An aubade (Occitan Alba, German Tagelied) is a song about lovers having to separate at daybreak
(Dawn is a friend to the Muse), in by Barthold Nihus
The Dawn, volume 1 on Jean-Christophe written by Romain Rolland
Dawn, a novel written by Henry Rider Haggard, published in 1884
"Dawn", a poem written by Rupert Brooke published in The Collected Poems of Rupert Brooke
"Dawn", a poem written by Richard Aldington
"Dawn", a poem written by Emily Dickinson
"Dawn", a poem written by Francis Ledwidge
"Dawn", a poem written by John Masefield
"Dawn", a poem written by William Carlos Williams
I Greet the Dawn: Poems, a book of poetry written by Paul Laurence Dunbar, published January 1, 1978, by Atheneum Books
"Dawn", a four-line poem from Lyrics of Lowly Life, a book of poetry written by Paul Laurence Dunbar, originally published in 1896. This poem was published again in The Complete Poems of Paul Laurence Dunbar, the 1913 collection of his work--
An angel, robed in spotless white,
Bent down and kissed the sleeping Night.
Night woke to blush; the sprite was gone.
Men saw the blush and called it Dawn.
-Dawn by Paul Laurence Dunbar
| Physical sciences | Celestial mechanics | Astronomy |
191064 | https://en.wikipedia.org/wiki/Dusk | Dusk | Dusk occurs at the darkest stage of twilight, or at the very end of astronomical twilight after sunset and just before nightfall. At predusk, during early to intermediate stages of twilight, enough light in the sky under clear conditions may occur to read outdoors without artificial illumination; however, at the end of civil twilight (when Earth rotates to a point at which the center of the Sun's disk is 6° below the local horizon), such lighting is required to read outside. The term dusk usually refers to astronomical dusk, or the darkest part of twilight before night begins.
Technical definitions
The time of dusk is the moment at the very end of astronomical twilight, just before the minimum brightness of the night sky sets in, or may be thought of as the darkest part of evening twilight. However, technically, the three stages of dusk are as follows:
At civil dusk, the center of the Sun's disc goes 6° below the horizon in the evening. It marks the end of civil twilight, which begins at sunset. At this time objects are still distinguishable and depending on weather conditions some stars and planets may start to become visible to the naked eye. The sky has many colors at this time, such as orange and red. Beyond this point artificial light may be needed to carry out outdoor activities, depending on atmospheric conditions and location.
At nautical dusk, the Sun moves to 12° below the horizon in the evening. It marks the end of nautical twilight, which begins at civil dusk. At this time, objects are less distinguishable, and stars and planets appear to brighten.
At astronomical dusk, the Sun's position is 18° below the horizon in the evening. It marks the end of astronomical twilight, which begins at nautical dusk. After this time the Sun no longer illuminates the sky, and thus no longer interferes with astronomical observations.
Gallery
| Physical sciences | Celestial mechanics | Astronomy |
191094 | https://en.wikipedia.org/wiki/Attractor | Attractor | In the mathematical field of dynamical systems, an attractor is a set of states toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed.
In finite-dimensional systems, the evolving variable may be represented algebraically as an n-dimensional vector. The attractor is a region in n-dimensional space. In physical systems, the n dimensions may be, for example, two or three positional coordinates for each of one or more physical entities; in economic systems, they may be separate variables such as the inflation rate and the unemployment rate.
If the evolving variable is two- or three-dimensional, the attractor of the dynamic process can be represented geometrically in two or three dimensions, (as for example in the three-dimensional case depicted to the right). An attractor can be a point, a finite set of points, a curve, a manifold, or even a complicated set with a fractal structure known as a strange attractor (see strange attractor below). If the variable is a scalar, the attractor is a subset of the real number line. Describing the attractors of chaotic dynamical systems has been one of the achievements of chaos theory.
A trajectory of the dynamical system in the attractor does not have to satisfy any special constraints except for remaining on the attractor, forward in time. The trajectory may be periodic or chaotic. If a set of points is periodic or chaotic, but the flow in the neighborhood is away from the set, the set is not an attractor, but instead is called a repeller (or repellor).
Motivation of attractors
A dynamical system is generally described by one or more differential or difference equations. The equations of a given dynamical system specify its behavior over any given short period of time. To determine the system's behavior for a longer period, it is often necessary to integrate the equations, either through analytical means or through iteration, often with the aid of computers.
Dynamical systems in the physical world tend to arise from dissipative systems: if it were not for some driving force, the motion would cease. (Dissipation may come from internal friction, thermodynamic losses, or loss of material, among many causes.) The dissipation and the driving force tend to balance, killing off initial transients and settle the system into its typical behavior. The subset of the phase space of the dynamical system corresponding to the typical behavior is the attractor, also known as the attracting section or attractee.
Invariant sets and limit sets are similar to the attractor concept. An invariant set is a set that evolves to itself under the dynamics. Attractors may contain invariant sets. A limit set is a set of points such that there exists some initial state that ends up arbitrarily close to the limit set (i.e. to each point of the set) as time goes to infinity. Attractors are limit sets, but not all limit sets are attractors: It is possible to have some points of a system converge to a limit set, but different points when perturbed slightly off the limit set may get knocked off and never return to the vicinity of the limit set.
For example, the damped pendulum has two invariant points: the point of minimum height and the point of maximum height. The point is also a limit set, as trajectories converge to it; the point is not a limit set. Because of the dissipation due to air resistance, the point is also an attractor. If there was no dissipation, would not be an attractor. Aristotle believed that objects moved only as long as they were pushed, which is an early formulation of a dissipative attractor.
Some attractors are known to be chaotic (see strange attractor), in which case the evolution of any two distinct points of the attractor result in exponentially diverging trajectories, which complicates prediction when even the smallest noise is present in the system.
Mathematical definition
Let represent time and let be a function which specifies the dynamics of the system. That is, if is a point in an -dimensional phase space, representing the initial state of the system, then and, for a positive value of , is the result of the evolution of this state after units of time. For example, if the system describes the evolution of a free particle in one dimension then the phase space is the plane with coordinates , where is the position of the particle, is its velocity, , and the evolution is given by
An attractor is a subset of the phase space characterized by the following three conditions:
is forward invariant under : if is an element of then so is , for all .
There exists a neighborhood of , called the basin of attraction for and denoted , which consists of all points that "enter" in the limit . More formally, is the set of all points in the phase space with the following property:
For any open neighborhood of , there is a positive constant such that for all real .
There is no proper (non-empty) subset of having the first two properties.
Since the basin of attraction contains an open set containing , every point that is sufficiently close to is attracted to . The definition of an attractor uses a metric on the phase space, but the resulting notion usually depends only on the topology of the phase space. In the case of , the Euclidean norm is typically used.
Many other definitions of attractor occur in the literature. For example, some authors require that an attractor have positive measure (preventing a point from being an attractor), others relax the requirement that be a neighborhood.
Types of attractors
Attractors are portions or subsets of the phase space of a dynamical system. Until the 1960s, attractors were thought of as being simple geometric subsets of the phase space, like points, lines, surfaces, and simple regions of three-dimensional space. More complex attractors that cannot be categorized as simple geometric subsets, such as topologically wild sets, were known of at the time but were thought to be fragile anomalies. Stephen Smale was able to show that his horseshoe map was robust and that its attractor had the structure of a Cantor set.
Two simple attractors are a fixed point and the limit cycle. Attractors can take on many other geometric shapes (phase space subsets). But when these sets (or the motions within them) cannot be easily described as simple combinations (e.g. intersection and union) of fundamental geometric objects (e.g. lines, surfaces, spheres, toroids, manifolds), then the attractor is called a strange attractor.
Fixed point
A fixed point of a function or transformation is a point that is mapped to itself by the function or transformation. If we regard the evolution of a dynamical system as a series of transformations, then there may or may not be a point which remains fixed under each transformation. The final state that a dynamical system evolves towards corresponds to an attracting fixed point of the evolution function for that system, such as the center bottom position of a damped pendulum, the level and flat water line of sloshing water in a glass, or the bottom center of a bowl containing a rolling marble. But the fixed point(s) of a dynamic system is not necessarily an attractor of the system. For example, if the bowl containing a rolling marble was inverted and the marble was balanced on top of the bowl, the center bottom (now top) of the bowl is a fixed state, but not an attractor. This is equivalent to the difference between stable and unstable equilibria. In the case of a marble on top of an inverted bowl (a hill), that point at the top of the bowl (hill) is a fixed point (equilibrium), but not an attractor (unstable equilibrium).
In addition, physical dynamic systems with at least one fixed point invariably have multiple fixed points and attractors due to the reality of dynamics in the physical world, including the nonlinear dynamics of stiction, friction, surface roughness, deformation (both elastic and plasticity), and even quantum mechanics. In the case of a marble on top of an inverted bowl, even if the bowl seems perfectly hemispherical, and the marble's spherical shape, are both much more complex surfaces when examined under a microscope, and their shapes change or deform during contact. Any physical surface can be seen to have a rough terrain of multiple peaks, valleys, saddle points, ridges, ravines, and plains. There are many points in this surface terrain (and the dynamic system of a similarly rough marble rolling around on this microscopic terrain) that are considered stationary or fixed points, some of which are categorized as attractors.
Finite number of points
In a discrete-time system, an attractor can take the form of a finite number of points that are visited in sequence. Each of these points is called a periodic point. This is illustrated by the logistic map, which depending on its specific parameter value can have an attractor consisting of 1 point, 2 points, 2n points, 3 points, 3×2n points, 4 points, 5 points, or any given positive integer number of points.
Limit cycle
A limit cycle is a periodic orbit of a continuous dynamical system that is isolated. It concerns a cyclic attractor. Examples include the swings of a pendulum clock, and the heartbeat while resting. The limit cycle of an ideal pendulum is not an example of a limit cycle attractor because its orbits are not isolated: in the phase space of the ideal pendulum, near any point of a periodic orbit there is another point that belongs to a different periodic orbit, so the former orbit is not attracting. For a physical pendulum under friction, the resting state will be a fixed-point attractor. The difference with the clock pendulum is that there, energy is injected by the escapement mechanism to maintain the cycle.
Limit torus
There may be more than one frequency in the periodic trajectory of the system through the state of a limit cycle. For example, in physics, one frequency may dictate the rate at which a planet orbits a star while a second frequency describes the oscillations in the distance between the two bodies. If two of these frequencies form an irrational fraction (i.e. they are incommensurate), the trajectory is no longer closed, and the limit cycle becomes a limit torus. This kind of attractor is called an -torus if there are incommensurate frequencies. For example, here is a 2-torus:
A time series corresponding to this attractor is a quasiperiodic series: A discretely sampled sum of periodic functions (not necessarily sine waves) with incommensurate frequencies. Such a time series does not have a strict periodicity, but its power spectrum still consists only of sharp lines.
Strange attractor
An attractor is called strange if it has a fractal structure, that is if it has non-integer Hausdorff dimension. This is often the case when the dynamics on it are chaotic, but strange nonchaotic attractors also exist. If a strange attractor is chaotic, exhibiting sensitive dependence on initial conditions, then any two arbitrarily close alternative initial points on the attractor, after any of various numbers of iterations, will lead to points that are arbitrarily far apart (subject to the confines of the attractor), and after any of various other numbers of iterations will lead to points that are arbitrarily close together. Thus a dynamic system with a chaotic attractor is locally unstable yet globally stable: once some sequences have entered the attractor, nearby points diverge from one another but never depart from the attractor.
The term strange attractor was coined by David Ruelle and Floris Takens to describe the attractor resulting from a series of bifurcations of a system describing fluid flow. Strange attractors are often differentiable in a few directions, but some are like a Cantor dust, and therefore not differentiable. Strange attractors may also be found in the presence of noise, where they may be shown to support invariant random probability measures of Sinai–Ruelle–Bowen type.
Examples of strange attractors include the double-scroll attractor, Hénon attractor, Rössler attractor, and Lorenz attractor.
Attractors characterize the evolution of a system
The parameters of a dynamic equation evolve as the equation is iterated, and the specific values may depend on the starting parameters. An example is the well-studied logistic map, , whose basins of attraction for various values of the parameter are shown in the figure. If , all starting values of will rapidly lead to function values that go to negative infinity; starting values of will also go to negative infinity. But for the values rapidly converge to , i.e. at this value of , a single value of is an attractor for the function's behaviour. For other values of , more than one value of may be visited: if is 3.2, starting values of will lead to function values that alternate between and . At some values of , the attractor is a single point (a "fixed point"), at other values of two values of are visited in turn (a period-doubling bifurcation), or, as a result of further doubling, any number values of ; at yet other values of , any given number of values of are visited in turn; finally, for some values of , an infinitude of points are visited. Thus one and the same dynamic equation can have various types of attractors, depending on its parameters.
Basins of attraction
An attractor's basin of attraction is the region of the phase space, over which iterations are defined, such that any point (any initial condition) in that region will asymptotically be iterated into the attractor. For a stable linear system, every point in the phase space is in the basin of attraction. However, in nonlinear systems, some points may map directly or asymptotically to infinity, while other points may lie in a different basin of attraction and map asymptotically into a different attractor; other initial conditions may be in or map directly into a non-attracting point or cycle.
Linear equation or system
An univariate linear homogeneous difference equation diverges to infinity if from all initial points except 0; there is no attractor and therefore no basin of attraction. But if all points on the number line map asymptotically (or directly in the case of 0) to 0; 0 is the attractor, and the entire number line is the basin of attraction.
Likewise, a linear matrix difference equation in a dynamic vector , of the homogeneous form in terms of square matrix will have all elements of the dynamic vector diverge to infinity if the largest eigenvalues of is greater than 1 in absolute value; there is no attractor and no basin of attraction. But if the largest eigenvalue is less than 1 in magnitude, all initial vectors will asymptotically converge to the zero vector, which is the attractor; the entire -dimensional space of potential initial vectors is the basin of attraction.
Similar features apply to linear differential equations. The scalar equation causes all initial values of except zero to diverge to infinity if but to converge to an attractor at the value 0 if , making the entire number line the basin of attraction for 0. And the matrix system gives divergence from all initial points except the vector of zeroes if any eigenvalue of the matrix is positive; but if all the eigenvalues are negative the vector of zeroes is an attractor whose basin of attraction is the entire phase space.
Nonlinear equation or system
Equations or systems that are nonlinear can give rise to a richer variety of behavior than can linear systems. One example is Newton's method of iterating to a root of a nonlinear expression. If the expression has more than one real root, some starting points for the iterative algorithm will lead to one of the roots asymptotically, and other starting points will lead to another. The basins of attraction for the expression's roots are generally not simple—it is not simply that the points nearest one root all map there, giving a basin of attraction consisting of nearby points. The basins of attraction can be infinite in number and arbitrarily small. For example, for the function , the following initial conditions are in successive basins of attraction:
2.35287527 converges to 4;
2.35284172 converges to −3;
2.35283735 converges to 4;
2.352836327 converges to −3;
2.352836323 converges to 1.
Newton's method can also be applied to complex functions to find their roots. Each root has a basin of attraction in the complex plane; these basins can be mapped as in the image shown. As can be seen, the combined basin of attraction for a particular root can have many disconnected regions. For many complex functions, the boundaries of the basins of attraction are fractals.
Partial differential equations
Parabolic partial differential equations may have finite-dimensional attractors. The diffusive part of the equation damps higher frequencies and in some cases leads to a global attractor. The Ginzburg–Landau, the Kuramoto–Sivashinsky, and the two-dimensional, forced Navier–Stokes equations are all known to have global attractors of finite dimension.
For the three-dimensional, incompressible Navier–Stokes equation with periodic boundary conditions, if it has a global attractor, then this attractor will be of finite dimensions.
| Mathematics | Dynamical systems | null |
191101 | https://en.wikipedia.org/wiki/Phase%20space | Phase space | The phase space of a physical system is the set of all possible physical states of the system when described by a given parameterization. Each possible state corresponds uniquely to a point in the phase space. For mechanical systems, the phase space usually consists of all possible values of the position and momentum parameters. It is the direct product of direct space and reciprocal space. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs.
Principles
In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system or allowed combination of values of the system's parameters, a point is included in the multidimensional space. The system's evolving state over time traces a path (a phase-space trajectory for the system) through the high-dimensional space. The phase-space trajectory represents the set of states compatible with starting from one particular initial condition, located in the full phase space that represents the set of states compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta (6 dimensions for an idealized monatomic gas), and for more complex molecular systems additional dimensions are required to describe vibrational modes of the molecular bonds, as well as spin around 3 axes. Phase spaces are easier to use when analyzing the behavior of mechanical systems restricted to motion around and along various axes of rotation or translation e.g. in robotics, like analyzing the range of motion of a robotic arm or determining the optimal path to achieve a particular position/momentum result.
Conjugate momenta
In classical mechanics, any choice of generalized coordinates qi for the position (i.e. coordinates on configuration space) defines conjugate generalized momenta pi, which together define co-ordinates on phase space. More abstractly, in classical mechanics phase space is the cotangent bundle of configuration space, and in this interpretation the procedure above expresses that a choice of local coordinates on configuration space induces a choice of natural local Darboux coordinates for the standard symplectic structure on a cotangent space.
Statistical ensembles in phase space
The motion of an ensemble of systems in this space is studied by classical statistical mechanics. The local density of points in such systems obeys Liouville's theorem, and so can be taken as constant. Within the context of a model system in classical mechanics, the phase-space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any given time in the future or the past, through integration of Hamilton's or Lagrange's equations of motion.
In low dimensions
For simple systems, there may be as few as one or two degrees of freedom. One degree of freedom occurs when one has an autonomous ordinary differential equation in a single variable, with the resulting one-dimensional system being called a phase line, and the qualitative behaviour of the system being immediately visible from the phase line. The simplest non-trivial examples are the exponential growth model/decay (one unstable/stable equilibrium) and the logistic growth model (two equilibria, one stable, one unstable).
The phase space of a two-dimensional system is called a phase plane, which occurs in classical mechanics for a single particle moving in one dimension, and where the two variables are position and velocity. In this case, a sketch of the phase portrait may give qualitative information about the dynamics of the system, such as the limit cycle of the Van der Pol oscillator shown in the diagram.
Here the horizontal axis gives the position, and vertical axis the velocity. As the system evolves, its state follows one of the lines (trajectories) on the phase diagram.
Related concepts
Phase plot
A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase diagram. However the latter expression, "phase diagram", is more usually reserved in the physical sciences for a diagram showing the various regions of stability of the thermodynamic phases of a chemical system, which consists of pressure, temperature, and composition.
Phase portrait
Phase integral
In classical statistical mechanics (continuous energies) the concept of phase space provides a classical analog to the partition function (sum over states) known as the phase integral. Instead of summing the Boltzmann factor over discretely spaced energy states (defined by appropriate integer quantum numbers for each degree of freedom), one may integrate over continuous phase space. Such integration essentially consists of two parts: integration of the momentum component of all degrees of freedom (momentum space) and integration of the position component of all degrees of freedom (configuration space). Once the phase integral is known, it may be related to the classical partition function by multiplication of a normalization constant representing the number of quantum energy states per unit phase space. This normalization constant is simply the inverse of the Planck constant raised to a power equal to the number of degrees of freedom for the system.
Applications
Chaos theory
Classic examples of phase diagrams from chaos theory are:
the Lorenz attractor
population growth (i.e. logistic map)
parameter plane of complex quadratic polynomials with Mandelbrot set.
Quantum mechanics
In quantum mechanics, the coordinates p and q of phase space normally become Hermitian operators in a Hilbert space.
But they may alternatively retain their classical interpretation, provided functions of them compose in novel algebraic ways (through Groenewold's 1946 star product). This is consistent with the uncertainty principle of quantum mechanics.
Every quantum mechanical observable corresponds to a unique function or distribution on phase space, and conversely, as specified by Hermann Weyl (1927) and supplemented by John von Neumann (1931); Eugene Wigner (1932); and, in a grand synthesis, by H. J. Groenewold (1946).
With J. E. Moyal (1949), these completed the foundations of the phase-space formulation of quantum mechanics, a complete and logically autonomous reformulation of quantum mechanics. (Its modern abstractions include deformation quantization and geometric quantization.)
Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables, with the Wigner quasi-probability distribution effectively serving as a measure.
Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with deformation parameter ħ/S, where S is the action of the relevant process. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild radius/characteristic dimension.)
Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle.
Thermodynamics and statistical mechanics
In thermodynamics and statistical mechanics contexts, the term "phase space" has two meanings: for one, it is used in the same sense as in classical mechanics. If a thermodynamic system consists of N particles, then a point in the 6N-dimensional phase space describes the dynamic state of every particle in that system, as each particle is associated with 3 position variables and 3 momentum variables. In this sense, as long as the particles are distinguishable, a point in phase space is said to be a microstate of the system. (For indistinguishable particles a microstate consists of a set of N! points, corresponding to all possible exchanges of the N particles.) N is typically on the order of the Avogadro number, thus describing the system at a microscopic level is often impractical. This leads to the use of phase space in a different sense.
The phase space can also refer to the space that is parameterized by the macroscopic states of the system, such as pressure, temperature, etc. For instance, one may view the pressure–volume diagram or temperature–entropy diagram as describing part of this phase space. A point in this phase space is correspondingly called a macrostate. There may easily be more than one microstate with the same macrostate. For example, for a fixed temperature, the system could have many dynamic configurations at the microscopic level. When used in this sense, a phase is a region of phase space where the system in question is in, for example, the liquid phase, or solid phase, etc.
Since there are many more microstates than macrostates, the phase space in the first sense is usually a manifold of much larger dimensions than in the second sense. Clearly, many more parameters are required to register every detail of the system down to the molecular or atomic scale than to simply specify, say, the temperature or the pressure of the system.
Optics
Phase space is extensively used in nonimaging optics, the branch of optics devoted to illumination. It is also an important concept in Hamiltonian optics.
Medicine
In medicine and bioengineering, the phase space method is used to visualize multidimensional physiological responses.
| Mathematics | Dynamical systems | null |
191123 | https://en.wikipedia.org/wiki/Planck%27s%20law | Planck's law | In physics, Planck's law (also Planck radiation law) describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature , when there is no net flow of matter or energy between the body and its environment.
At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, German physicist Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, , that was proportional to the frequency of its associated electromagnetic wave. While Planck originally regarded the hypothesis of dividing energy into increments as a mathematical artifice, introduced merely to get the correct answer, other physicists including Albert Einstein built on his work, and Planck's insight is now recognized to be of fundamental importance to quantum theory.
The law
Every physical body spontaneously and continuously emits electromagnetic radiation and the spectral radiance of a body, , describes the spectral emissive power per unit area, per unit solid angle and per unit frequency for particular radiation frequencies. The relationship given by Planck's radiation law, given below, shows that with increasing temperature, the total radiated energy of a body increases and the peak of the emitted spectrum shifts to shorter wavelengths. According to Planck's distribution law, the spectral energy density (energy per unit volume per unit frequency) at given temperature is given by:alternatively, the law can be expressed for the spectral radiance of a body for frequency at absolute temperature
given as:where is the Boltzmann constant, is the Planck constant, and is the speed of light in the medium, whether material or vacuum. The cgs units of spectral radiance are . The terms and are related to each other by a factor of since is independent of direction and radiation travels at speed .
The spectral radiance can also be expressed per unit wavelength instead of per unit frequency. In addition, the law may be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation.
In the limit of low frequencies (i.e. long wavelengths), Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies (i.e. small wavelengths) it tends to the Wien approximation.
Max Planck developed the law in 1900 with only empirically determined constants, and later showed that, expressed as an energy distribution, it is the unique stable distribution for radiation in thermodynamic equilibrium. As an energy distribution, it is one of a family of thermal equilibrium distributions which include the Bose–Einstein distribution, the Fermi–Dirac distribution and the Maxwell–Boltzmann distribution.
Black-body radiation
A black-body is an idealised object which absorbs and emits all radiation frequencies. Near thermodynamic equilibrium, the emitted radiation is closely described by Planck's law and because of its dependence on temperature, Planck radiation is said to be thermal radiation, such that the higher the temperature of a body the more radiation it emits at every wavelength.
Planck radiation has a maximum intensity at a wavelength that depends on the temperature of the body. For example, at room temperature (~), a body emits thermal radiation that is mostly infrared and invisible. At higher temperatures the amount of infrared radiation increases and can be felt as heat, and more visible radiation is emitted so the body glows visibly red. At higher temperatures, the body is bright yellow or blue-white and emits significant amounts of short wavelength radiation, including ultraviolet and even x-rays. The surface of the Sun (~) emits large amounts of both infrared and ultraviolet radiation; its emission is peaked in the visible spectrum. This shift due to temperature is called Wien's displacement law.
Planck radiation is the greatest amount of radiation that any body at thermal equilibrium can emit from its surface, whatever its chemical composition or surface structure. The passage of radiation across an interface between media can be characterized by the emissivity of the interface (the ratio of the actual radiance to the theoretical Planck radiance), usually denoted by the symbol . It is in general dependent on chemical composition and physical structure, on temperature, on the wavelength, on the angle of passage, and on the polarization. The emissivity of a natural interface is always between and 1.
A body that interfaces with another medium which both has and absorbs all the radiation incident upon it is said to be a black body. The surface of a black body can be modelled by a small hole in the wall of a large enclosure which is maintained at a uniform temperature with opaque walls that, at every wavelength, are not perfectly reflective. At equilibrium, the radiation inside this enclosure is described by Planck's law, as is the radiation leaving the small hole.
Just as the Maxwell–Boltzmann distribution is the unique maximum entropy energy distribution for a gas of material particles at thermal equilibrium, so is Planck's distribution for a gas of photons. By contrast to a material gas where the masses and number of particles play a role, the spectral radiance, pressure and energy density of a photon gas at thermal equilibrium are entirely determined by the temperature.
If the photon gas is not Planckian, the second law of thermodynamics guarantees that interactions (between photons and other particles or even, at sufficiently high temperatures, between the photons themselves) will cause the photon energy distribution to change and approach the Planck distribution. In such an approach to thermodynamic equilibrium, photons are created or annihilated in the right numbers and with the right energies to fill the cavity with a Planck distribution until they reach the equilibrium temperature. It is as if the gas is a mixture of sub-gases, one for every band of wavelengths, and each sub-gas eventually attains the common temperature.
The quantity is the spectral radiance as a function of temperature and frequency. It has units of W·m−2·sr−1·Hz−1 in the SI system. An infinitesimal amount of power is radiated in the direction described by the angle from the surface normal from infinitesimal surface area into infinitesimal solid angle in an infinitesimal frequency band of width centered on frequency . The total power radiated into any solid angle is the integral of over those three quantities, and is given by the Stefan–Boltzmann law. The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator.
Different forms
Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The various forms of the law for spectral radiance are summarized in the table below. Forms on the left are most often encountered in experimental fields, while those on the right are most often encountered in theoretical fields.
In the fractional bandwidth formulation, and the integration is with respect to
Planck's law can also be written in terms of the spectral energy density () by multiplying by :
These distributions represent the spectral radiance of blackbodies—the power emitted from the emitting surface, per unit projected area of emitting surface, per unit solid angle, per spectral unit (frequency, wavelength, wavenumber or their angular equivalents, or fractional frequency or wavelength). Since the radiance is isotropic (i.e. independent of direction), the power emitted at an angle to the normal is proportional to the projected area, and therefore to the cosine of that angle as per Lambert's cosine law, and is unpolarized.
Correspondence between spectral variable forms
Different spectral variables require different corresponding forms of expression of the law. In general, one may not convert between the various forms of Planck's law simply by substituting one variable for another, because this would not take into account that the different forms have different units. Wavelength and frequency units are reciprocal.
Corresponding forms of expression are related because they express one and the same physical fact: for a particular physical spectral increment, a corresponding particular physical energy increment is radiated.
This is so whether it is expressed in terms of an increment of frequency, , or, correspondingly, of wavelength, , or of fractional bandwidth, or . Introduction of a minus sign can indicate that an increment of frequency corresponds with decrement of wavelength.
In order to convert the corresponding forms so that they express the same quantity in the same units we multiply by the spectral increment. Then, for a particular spectral increment, the particular physical energy increment may be written
which leads to
Also, , so that . Substitution gives the correspondence between the frequency and wavelength forms, with their different dimensions and units.
Consequently,
Evidently, the location of the peak of the spectral distribution for Planck's law depends on the choice of spectral variable. Nevertheless, in a manner of speaking, this formula means that the shape of the spectral distribution is independent of temperature, according to Wien's displacement law, as detailed below in § Properties §§ Percentiles.
The fractional bandwidth form is related to the other forms by
.
First and second radiation constants
In the above variants of Planck's law, the wavelength and wavenumber variants use the terms and which comprise physical constants only. Consequently, these terms can be considered as physical constants themselves, and are therefore referred to as the first radiation constant and the second radiation constant with
and
Using the radiation constants, the wavelength variant of Planck's law can be simplified to
and the wavenumber variant can be simplified correspondingly.
is used here instead of because it is the SI symbol for spectral radiance. The in refers to that. This reference is necessary because Planck's law can be reformulated to give spectral radiant exitance rather than spectral radiance , in which case replaces , with
so that Planck's law for spectral radiant exitance can be written as
As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of ; see for details.
Physics
Planck's law describes the unique and characteristic spectral distribution for electromagnetic radiation in thermodynamic equilibrium, when there is no net flow of matter or energy. Its physics is most easily understood by considering the radiation in a cavity with rigid opaque walls. Motion of the walls can affect the radiation. If the walls are not opaque, then the thermodynamic equilibrium is not isolated. It is of interest to explain how the thermodynamic equilibrium is attained. There are two main cases: (a) when the approach to thermodynamic equilibrium is in the presence of matter, when the walls of the cavity are imperfectly reflective for every wavelength or when the walls are perfectly reflective while the cavity contains a small black body (this was the main case considered by Planck); or (b) when the approach to equilibrium is in the absence of matter, when the walls are perfectly reflective for all wavelengths and the cavity contains no matter. For matter not enclosed in such a cavity, thermal radiation can be approximately explained by appropriate use of Planck's law.
Classical physics led, via the equipartition theorem, to the ultraviolet catastrophe, a prediction that the total blackbody radiation intensity was infinite. If supplemented by the classically unjustifiable assumption that for some reason the radiation is finite, classical thermodynamics provides an account of some aspects of the Planck distribution, such as the Stefan–Boltzmann law, and the Wien displacement law. For the case of the presence of matter, quantum mechanics provides a good account, as found below in the section headed Einstein coefficients. This was the case considered by Einstein, and is nowadays used for quantum optics. For the case of the absence of matter, quantum field theory is necessary, because non-relativistic quantum mechanics with fixed particle numbers does not provide a sufficient account.
Photons
Quantum theoretical explanation of Planck's law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium. Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution. For a photon gas in thermodynamic equilibrium, the internal energy density is entirely determined by the temperature; moreover, the pressure is entirely determined by the internal energy density. This is unlike the case of thermodynamic equilibrium for material gases, for which the internal energy is determined not only by the temperature, but also, independently, by the respective numbers of the different molecules, and independently again, by the specific characteristics of the different molecules. For different material gases at given temperature, the pressure and internal energy density can vary independently, because different molecules can carry independently different excitation energies.
Planck's law arises as a limit of the Bose–Einstein distribution, the energy distribution describing non-interactive bosons in thermodynamic equilibrium. In the case of massless bosons such as photons and gluons, the chemical potential is zero and the Bose–Einstein distribution reduces to the Planck distribution. There is another fundamental equilibrium energy distribution: the Fermi–Dirac distribution, which describes fermions, such as electrons, in thermal equilibrium. The two distributions differ because multiple bosons can occupy the same quantum state, while multiple fermions cannot. At low densities, the number of available quantum states per particle is large, and this difference becomes irrelevant. In the low density limit, the Bose–Einstein and the Fermi–Dirac distribution each reduce to the Maxwell–Boltzmann distribution.
Kirchhoff's law of thermal radiation
Kirchhoff's law of thermal radiation is a succinct and brief account of a complicated physical situation. The following is an introductory sketch of that situation, and is very far from being a rigorous physical argument. The purpose here is only to summarize the main physical factors in the situation, and the main conclusions.
Spectral dependence of thermal radiation
There is a difference between conductive heat transfer and radiative heat transfer. Radiative heat transfer can be filtered to pass only a definite band of radiative frequencies.
It is generally known that the hotter a body becomes, the more heat it radiates at every frequency.
In a cavity in an opaque body with rigid walls that are not perfectly reflective at any frequency, in thermodynamic equilibrium, there is only one temperature, and it must be shared in common by the radiation of every frequency.
One may imagine two such cavities, each in its own isolated radiative and thermodynamic equilibrium. One may imagine an optical device that allows radiative heat transfer between the two cavities, filtered to pass only a definite band of radiative frequencies. If the values of the spectral radiances of the radiations in the cavities differ in that frequency band, heat may be expected to pass from the hotter to the colder. One might propose to use such a filtered transfer of heat in such a band to drive a heat engine. If the two bodies are at the same temperature, the second law of thermodynamics does not allow the heat engine to work. It may be inferred that for a temperature common to the two bodies, the values of the spectral radiances in the pass-band must also be common. This must hold for every frequency band. This became clear to Balfour Stewart and later to Kirchhoff. Balfour Stewart found experimentally that of all surfaces, one of lamp-black emitted the greatest amount of thermal radiation for every quality of radiation, judged by various filters.
Thinking theoretically, Kirchhoff went a little further and pointed out that this implied that the spectral radiance, as a function of radiative frequency, of any such cavity in thermodynamic equilibrium must be a unique universal function of temperature. He postulated an ideal black body that interfaced with its surrounds in just such a way as to absorb all the radiation that falls on it. By the Helmholtz reciprocity principle, radiation from the interior of such a body would pass unimpeded directly to its surroundings without reflection at the interface. In thermodynamic equilibrium, the thermal radiation emitted from such a body would have that unique universal spectral radiance as a function of temperature. This insight is the root of Kirchhoff's law of thermal radiation.
Relation between absorptivity and emissivity
One may imagine a small homogeneous spherical material body labeled at a temperature , lying in a radiation field within a large cavity with walls of material labeled at a temperature . The body emits its own thermal radiation. At a particular frequency , the radiation emitted from a particular cross-section through the centre of in one sense in a direction normal to that cross-section may be denoted , characteristically for the material of . At that frequency , the radiative power from the walls into that cross-section in the opposite sense in that direction may be denoted , for the wall temperature . For the material of , defining the absorptivity as the fraction of that incident radiation absorbed by , that incident energy is absorbed at a rate .
The rate of accumulation of energy in one sense into the cross-section of the body can then be expressed
Kirchhoff's seminal insight, mentioned just above, was that, at thermodynamic equilibrium at temperature , there exists a unique universal radiative distribution, nowadays denoted , that is independent of the chemical characteristics of the materials and , that leads to a very valuable understanding of the radiative exchange equilibrium of any body at all, as follows.
When there is thermodynamic equilibrium at temperature , the cavity radiation from the walls has that unique universal value, so that . Further, one may define the emissivity of the material of the body just so that at thermodynamic equilibrium at temperature , one has .
When thermal equilibrium prevails at temperature , the rate of accumulation of energy vanishes so that . It follows that in thermodynamic equilibrium, when ,
Kirchhoff pointed out that it follows that in thermodynamic equilibrium, when ,
Introducing the special notation for the absorptivity of material at thermodynamic equilibrium at temperature (justified by a discovery of Einstein, as indicated below), one further has the equality
at thermodynamic equilibrium.
The equality of absorptivity and emissivity here demonstrated is specific for thermodynamic equilibrium at temperature and is in general not to be expected to hold when conditions of thermodynamic equilibrium do not hold. The emissivity and absorptivity are each separately properties of the molecules of the material but they depend differently upon the distributions of states of molecular excitation on the occasion, because of a phenomenon known as "stimulated emission", that was discovered by Einstein. On occasions when the material is in thermodynamic equilibrium or in a state known as local thermodynamic equilibrium, the emissivity and absorptivity become equal. Very strong incident radiation or other factors can disrupt thermodynamic equilibrium or local thermodynamic equilibrium. Local thermodynamic equilibrium in a gas means that molecular collisions far outweigh light emission and absorption in determining the distributions of states of molecular excitation.
Kirchhoff pointed out that he did not know the precise character of , but he thought it important that it should be found out. Four decades after Kirchhoff's insight of the general principles of its existence and character, Planck's contribution was to determine the precise mathematical expression of that equilibrium distribution .
Black body
In physics, one considers an ideal black body, here labeled , defined as one that completely absorbs all of the electromagnetic radiation falling upon it at every frequency (hence the term "black"). According to Kirchhoff's law of thermal radiation, this entails that, for every frequency , at thermodynamic equilibrium at temperature , one has , so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. No physical body can emit thermal radiation that exceeds that of a black body, since if it were in equilibrium with a radiation field, it would be emitting more energy than was incident upon it.
Though perfectly black materials do not exist, in practice a black surface can be accurately approximated. As to its material interior, a body of condensed matter, liquid, solid, or plasma, with a definite interface with its surroundings, is completely black to radiation if it is completely opaque. That means that it absorbs all of the radiation that penetrates the interface of the body with its surroundings, and enters the body. This is not too difficult to achieve in practice. On the other hand, a perfectly black interface is not found in nature. A perfectly black interface reflects no radiation, but transmits all that falls on it, from either side. The best practical way to make an effectively black interface is to simulate an 'interface' by a small hole in the wall of a large cavity in a completely opaque rigid body of material that does not reflect perfectly at any frequency, with its walls at a controlled temperature. Beyond these requirements, the component material of the walls is unrestricted. Radiation entering the hole has almost no possibility of escaping the cavity without being absorbed by multiple impacts with its walls.
Lambert's cosine law
As explained by Planck, a radiating body has an interior consisting of matter, and an interface with its contiguous neighbouring material medium, which is usually the medium from within which the radiation from the surface of the body is observed. The interface is not composed of physical matter but is a theoretical conception, a mathematical two-dimensional surface, a joint property of the two contiguous media, strictly speaking belonging to neither separately. Such an interface can neither absorb nor emit, because it is not composed of physical matter; but it is the site of reflection and transmission of radiation, because it is a surface of discontinuity of optical properties. The reflection and transmission of radiation at the interface obey the Stokes–Helmholtz reciprocity principle.
At any point in the interior of a black body located inside a cavity in thermodynamic equilibrium at temperature the radiation is homogeneous, isotropic and unpolarized. A black body absorbs all and reflects none of the electromagnetic radiation incident upon it. According to the Helmholtz reciprocity principle, radiation from the interior of a black body is not reflected at its surface, but is fully transmitted to its exterior. Because of the isotropy of the radiation in the body's interior, the spectral radiance of radiation transmitted from its interior to its exterior through its surface is independent of direction.
This is expressed by saying that radiation from the surface of a black body in thermodynamic equilibrium obeys Lambert's cosine law. This means that the spectral flux from a given infinitesimal element of area of the actual emitting surface of the black body, detected from a given direction that makes an angle with the normal to the actual emitting surface at , into an element of solid angle of detection centred on the direction indicated by , in an element of frequency bandwidth , can be represented as
where denotes the flux, per unit area per unit frequency per unit solid angle, that area would show if it were measured in its normal direction .
The factor is present because the area to which the spectral radiance refers directly is the projection, of the actual emitting surface area, onto a plane perpendicular to the direction indicated by . This is the reason for the name cosine law.
Taking into account the independence of direction of the spectral radiance of radiation from the surface of a black body in thermodynamic equilibrium, one has and so
Thus Lambert's cosine law expresses the independence of direction of the spectral radiance of the surface of a black body in thermodynamic equilibrium.
Stefan–Boltzmann law
The total power emitted per unit area at the surface of a black body () may be found by integrating the black body spectral flux found from Lambert's law over all frequencies, and over the solid angles corresponding to a hemisphere () above the surface.
The infinitesimal solid angle can be expressed in spherical polar coordinates:
So that:
where is known as the Stefan–Boltzmann constant.
Radiative transfer
The equation of radiative transfer describes the way in which radiation is affected as it travels through a material medium. For the special case in which the material medium is in thermodynamic equilibrium in the neighborhood of a point in the medium, Planck's law is of special importance.
For simplicity, we can consider the linear steady state, without scattering. The equation of radiative transfer states that for a beam of light going through a small distance , energy is conserved: The change in the (spectral) radiance of that beam () is equal to the amount removed by the material medium plus the amount gained from the material medium. If the radiation field is in equilibrium with the material medium, these two contributions will be equal. The material medium will have a certain emission coefficient and absorption coefficient.
The absorption coefficient is the fractional change in the intensity of the light beam as it travels the distance , and has units of length−1. It is composed of two parts, the decrease due to absorption and the increase due to stimulated emission. Stimulated emission is emission by the material body which is caused by and is proportional to the incoming radiation. It is included in the absorption term because, like absorption, it is proportional to the intensity of the incoming radiation. Since the amount of absorption will generally vary linearly as the density of the material, we may define a "mass absorption coefficient" which is a property of the material itself. The change in intensity of a light beam due to absorption as it traverses a small distance will then be
The "mass emission coefficient" is equal to the radiance per unit volume of a small volume element divided by its mass (since, as for the mass absorption coefficient, the emission is proportional to the emitting mass) and has units of power⋅solid angle−1⋅frequency−1⋅density−1. Like the mass absorption coefficient, it too is a property of the material itself. The change in a light beam as it traverses a small distance will then be
The equation of radiative transfer will then be the sum of these two contributions:
If the radiation field is in equilibrium with the material medium, then the radiation will be homogeneous (independent of position) so that and:
which is another statement of Kirchhoff's law, relating two material properties of the medium, and which yields the radiative transfer equation at a point around which the medium is in thermodynamic equilibrium:
Einstein coefficients
The principle of detailed balance states that, at thermodynamic equilibrium, each elementary process is equilibrated by its reverse process.
In 1916, Albert Einstein applied this principle on an atomic level to the case of an atom radiating and absorbing radiation due to transitions between two particular energy levels, giving a deeper insight into the equation of radiative transfer and Kirchhoff's law for this type of radiation. If level 1 is the lower energy level with energy , and level 2 is the upper energy level with energy , then the frequency of the radiation radiated or absorbed will be determined by Bohr's frequency condition:
If and are the number densities of the atom in states 1 and 2 respectively, then the rate of change of these densities in time will be due to three processes:
Spontaneous emission
Stimulated emission
Photo-absorption
where is the spectral energy density of the radiation field. The three parameters , and , known as the Einstein coefficients, are associated with the photon frequency produced by the transition between two energy levels (states). As a result, each line in a spectrum has its own set of associated coefficients. When the atoms and the radiation field are in equilibrium, the radiance will be given by Planck's law and, by the principle of detailed balance, the sum of these rates must be zero:
Since the atoms are also in equilibrium, the populations of the two levels are related by the Boltzmann factor:
where and are the multiplicities of the respective energy levels. Combining the above two equations with the requirement that they be valid at any temperature yields two relationships between the Einstein coefficients:
so that knowledge of one coefficient will yield the other two.
For the case of isotropic absorption and emission, the emission coefficient () and absorption coefficient () defined in the radiative transfer section above, can be expressed in terms of the Einstein coefficients. The relationships between the Einstein coefficients will yield the expression of Kirchhoff's law expressed in the Radiative transfer section above, namely that
These coefficients apply to both atoms and molecules.
Properties
Peaks
The distributions , , and peak at a photon energy ofwhere is the Lambert W function and is Euler's number.
However, the distribution peaks at a different energyThe reason for this is that, as mentioned above, one cannot go from (for example) to simply by substituting by . In addition, one must also multiply by , which shifts the peak of the distribution to higher energies. These peaks are the mode energy of a photon, when binned using equal-size bins of frequency or wavelength, respectively. Dividing () by these energy expression gives the wavelength of the peak.
The spectral radiance at these peaks is given by:
with andwith
Meanwhile, the average energy of a photon from a blackbody iswhere is the Riemann zeta function.
Approximations
In the limit of low frequencies (i.e. long wavelengths), Planck's law becomes the Rayleigh–Jeans law

B_ν(T) ≈ 2ν²kT/c²

or

B_λ(T) ≈ 2ckT/λ⁴
The radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe. In the limit of high frequencies (i.e. small wavelengths), Planck's law tends to the Wien approximation:

B_ν(T) ≈ (2hν³/c²) e^(−hν/(kT))

or

B_λ(T) ≈ (2hc²/λ⁵) e^(−hc/(λkT))
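Both limits are easy to verify numerically. Working with the dimensionless shape x³/(eˣ − 1), where x = hν/(kT), the overall prefactor cancels out of the comparison; the illustrative script below confirms that the Rayleigh–Jeans form x² is accurate to about 0.5% at x = 0.01, while the Wien form x³e⁻ˣ is accurate to about one part in 10⁹ at x = 20:

```python
import math

def planck(x):
    """Dimensionless shape of Planck's law, x^3/(e^x - 1), x = h nu/kT."""
    return x**3 / math.expm1(x)

def rayleigh_jeans(x):   # low-frequency limit: e^x - 1 -> x
    return x**2

def wien(x):             # high-frequency limit: e^x - 1 -> e^x
    return x**3 * math.exp(-x)

x_low, x_high = 0.01, 20.0
rj_err = abs(rayleigh_jeans(x_low) - planck(x_low)) / planck(x_low)
wien_err = abs(wien(x_high) - planck(x_high)) / planck(x_high)
print(rj_err, wien_err)
```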
Percentiles
Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. It is therefore possible to list the percentile points of the total radiation as well as the peaks for wavelength and frequency, in a form which gives the wavelength λ when the tabulated value of λT is divided by the temperature T. The second column of the following table lists the corresponding values of λT, that is, those values of λT for which the wavelength λ is (λT)/T micrometers at the radiance percentile point given by the corresponding entry in the first column.
That is, 0.01% of the radiation is at a wavelength below (910 μm·K)/T, and so on for each percentile entry of the table. The wavelength and frequency peaks are in bold and occur at 25.0% and 64.6% respectively. The 41.8% point is the wavelength-frequency-neutral peak (i.e. the peak in power per unit change in logarithm of wavelength or frequency). These are the points at which the respective Planck-law functions 1/λ⁵, ν³ and ν²/λ², respectively divided by exp(hν/(kT)) − 1, attain their maxima. The much smaller gap in ratio of wavelengths between 0.1% and 0.01% (1110 is 22% more than 910) than between 99.9% and 99.99% (113374 is 120% more than 51613) reflects the exponential decay of energy at short wavelengths (left end) and polynomial decay at long.
Which peak to use depends on the application. The conventional choice is the wavelength peak at 25.0% given by Wien's displacement law in its weak form. For some purposes the median or 50% point dividing the total radiation into two halves may be more suitable. The latter is closer to the frequency peak than to the wavelength peak because the radiance drops exponentially at short wavelengths and only polynomially at long. The neutral peak occurs at a shorter wavelength than the median for the same reason.
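The quoted percentile points can be recomputed from the cumulative Planck distribution. With x = hc/(λkT), the fraction of total power at wavelengths shorter than λ is (15/π⁴) ∫ₓ^∞ t³/(eᵗ − 1) dt, and the integral has an exact series expansion. A sketch (function name ours):

```python
import math

def fraction_shorter(x):
    """Fraction of total black-body power radiated at wavelengths
    shorter than the one with h*c/(lambda*k*T) = x, using the exact
    series Int_x^inf t^3/(e^t-1) dt
           = Sum_n e^{-n x} (x^3/n + 3x^2/n^2 + 6x/n^3 + 6/n^4)."""
    total = 0.0
    for n in range(1, 200):
        total += math.exp(-n * x) * (x**3 / n + 3 * x**2 / n**2
                                     + 6 * x / n**3 + 6 / n**4)
    return 15.0 / math.pi**4 * total

# Wavelength peak (x = 5 + W(-5e^-5) ≈ 4.9651): the 25.0% point.
# Frequency  peak (x = 3 + W(-3e^-3) ≈ 2.8214): the 64.6% point.
print(fraction_shorter(4.9651), fraction_shorter(2.8214))
```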
Comparison to solar spectrum
Solar radiation can be compared to black-body radiation at about 5778 K (but see graph). The table on the right shows how the radiation of a black body at this temperature is partitioned, and also how sunlight is partitioned for comparison. Also for comparison a planet modeled as a black body is shown, radiating at a nominal 288 K (15 °C) as a representative value of the Earth's highly variable temperature. Its wavelengths are more than twenty times those of the Sun, tabulated in the third column in micrometers (thousands of nanometers).
That is, only 1% of the Sun's radiation is at wavelengths shorter than 296 nm, and only 1% at longer than 3728 nm. Expressed in micrometers this puts 98% of the Sun's radiation in the range from 0.296 to 3.728 μm. The corresponding 98% of energy radiated from a 288 K planet is from 5.03 to 79.5 μm, well above the range of solar radiation (or below, if expressed in terms of frequencies instead of wavelengths).
A consequence of this more-than-order-of-magnitude difference in wavelength between solar and planetary radiation is that filters designed to pass one and block the other are easy to construct. For example, windows fabricated of ordinary glass or transparent plastic pass at least 80% of the incoming 5778 K solar radiation, which is below 1.2 μm in wavelength, while blocking over 99% of the outgoing 288 K thermal radiation from 5 μm upwards, wavelengths at which most kinds of glass and plastic of construction-grade thickness are effectively opaque.
The Sun's radiation is that arriving at the top of the atmosphere (TOA). As can be read from the table, radiation below 400 nm, or ultraviolet, is about 8%, while that above 700 nm, or infrared, starts at about the 48% point and so accounts for 52% of the total. Hence only 40% of the TOA insolation is visible to the human eye. The atmosphere shifts these percentages substantially in favor of visible light as it absorbs most of the ultraviolet and significant amounts of infrared.
Derivations
Photon gas
Consider a cube of side L with conducting walls filled with electromagnetic radiation in thermal equilibrium at temperature T. If there is a small hole in one of the walls, the radiation emitted from the hole will be characteristic of a perfect black body. We will first calculate the spectral energy density within the cavity and then determine the spectral radiance of the emitted radiation.

At the walls of the cube, the parallel component of the electric field and the orthogonal component of the magnetic field must vanish. Analogous to the wave function of a particle in a box, one finds that the fields are superpositions of periodic functions. The three wavelengths λ1, λ2, and λ3, in the three directions orthogonal to the walls, can be:

λi = 2L/ni,

where the ni are positive integers. For each set of integers ni there are two linearly independent solutions (known as modes). The two modes for each set of these correspond to the two polarization states of the photon, which has a spin of 1. According to quantum theory, the total energy of a mode is given by:

E_n(ν) = (n + 1/2) hν.
The number n can be interpreted as the number of photons in the mode. For n = 0 the energy of the mode is not zero. This vacuum energy of the electromagnetic field is responsible for the Casimir effect. In the following we will calculate the internal energy of the box at absolute temperature T.
According to statistical mechanics, the equilibrium probability distribution over the energy levels of a particular mode is given by:

P_n = exp(−β E_n)/Z(β),

where we use the reciprocal temperature

β = 1/(kT).

The denominator Z(β) is the partition function of a single mode. It makes P_n properly normalized, and can be evaluated as

Z(β) = Σ_{n=0}^∞ e^{−β E_n} = e^{−βε/2}/(1 − e^{−βε}), with

ε = hν

being the energy of a single photon. The average energy in a mode can be obtained from the partition function:

⟨E⟩ = −d(ln Z)/dβ = ε/2 + ε/(e^{βε} − 1).

This formula, apart from the first vacuum energy term, is a special case of the general formula for particles obeying Bose–Einstein statistics. Since there is no restriction on the total number of photons, the chemical potential is zero.
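The closed-form average can be cross-checked against a direct Boltzmann-weighted sum over photon numbers, here with the energy measured from the ground state so that the vacuum term drops out (illustrative code, names ours; energies in units of kT):

```python
import math

def avg_energy_direct(beta_eps, nmax=2000):
    """Boltzmann average of n*eps over photon numbers n (units of kT)."""
    weights = [math.exp(-n * beta_eps) for n in range(nmax)]
    Z = sum(weights)  # partition function, equals 1/(1 - e^{-beta eps})
    return sum(n * beta_eps * w for n, w in enumerate(weights)) / Z

def avg_energy_bose(beta_eps):
    """Closed-form Bose-Einstein result eps/(e^{beta eps} - 1)."""
    return beta_eps / math.expm1(beta_eps)

for x in (0.1, 1.0, 5.0):
    print(avg_energy_direct(x), avg_energy_bose(x))
```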
If we measure the energy relative to the ground state, the total energy in the box follows by summing ⟨E⟩ − ε/2 over all allowed single photon states. This can be done exactly in the thermodynamic limit as L approaches infinity. In this limit, ε becomes continuous and we can then integrate over this parameter. To calculate the energy in the box in this way, we need to evaluate how many photon states there are in a given energy range. If we write the total number of single photon states with energies between ε and ε + dε as g(ε) dε, where g(ε) is the density of states (which is evaluated below), then the total energy is given by

U = ∫₀^∞ [ε/(e^{βε} − 1)] g(ε) dε.
To calculate the density of states we rewrite the mode-energy relation above as follows:

ε = hν = hc/λ = (hc/2L) √(n1² + n2² + n3²) = hcn/(2L),

where n is the norm of the vector n = (n1, n2, n3).
For every vector n with integer components larger than or equal to zero, there are two photon states. This means that the number of photon states in a certain region of n-space is twice the volume of that region. An energy range of dε corresponds to a shell of thickness dn = (2L/hc) dε in n-space. Because the components of n have to be positive, this shell spans an octant of a sphere. The number of photon states g(ε) dε, in an energy range dε, is thus given by:

g(ε) dε = 2 · (1/8) · 4πn² dn = (8πL³/(h³c³)) ε² dε.

Inserting this into the energy integral above and dividing by the volume L³ gives the total energy density

u = U/L³ = ∫₀^∞ u_ν(T) dν,

where the frequency-dependent spectral energy density u_ν(T) is given by

u_ν(T) = (8πhν³/c³) · 1/(e^{hν/(kT)} − 1).

Since the radiation is the same in all directions, and propagates at the speed of light c, the spectral radiance of radiation exiting the small hole is

B_ν(T) = (c/4π) u_ν(T),

which yields Planck's law

B_ν(T) = (2hν³/c²) · 1/(e^{hν/(kT)} − 1).

Other forms of the law can be obtained by change of variables in the total energy integral. The above derivation is based on .
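Integrating the spectral energy density over all frequencies reproduces the Stefan–Boltzmann T⁴ law, u = (4σ/c)T⁴ with σ = 2π⁵k⁴/(15h³c²). The sketch below checks this by simple trapezoidal quadrature; the temperature value is arbitrary and the function names are ours:

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
k = 1.380649e-23     # Boltzmann constant (J/K)

def u_nu(nu, T):
    """Spectral energy density 8 pi h nu^3 / c^3 / (e^{h nu/kT} - 1)."""
    return 8 * math.pi * h * nu**3 / c**3 / math.expm1(h * nu / (k * T))

def total_energy_density(T, steps=20000, xmax=50.0):
    """Integrate u_nu over frequency via the substitution x = h nu/kT."""
    dx = xmax / steps
    total = 0.0
    for i in range(1, steps):        # integrand vanishes at both ends
        x = i * dx
        total += u_nu(x * k * T / h, T)
    return total * dx * k * T / h    # d nu = (kT/h) dx

T = 5772.0                           # arbitrary choice (≈ solar effective temp.)
u = total_energy_density(T)
sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)  # Stefan-Boltzmann constant
print(u, 4 * sigma / c * T**4)       # the two values agree
```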
Dipole approximation and Einstein coefficients
For the non-degenerate case, the A and B coefficients can be calculated using the dipole approximation in time-dependent perturbation theory in quantum mechanics. The calculation of A also requires second quantization, since semi-classical theory cannot explain spontaneous emission, which does not go to zero as the perturbing field goes to zero. The transition rates hence calculated are (in SI units):

R(stimulated) = (π/(3ε₀ħ²)) |⟨1|d|2⟩|² ρ(ω) for absorption and stimulated emission, and
R(spontaneous) = (ω³/(3πε₀ħc³)) |⟨1|d|2⟩|².

Note that the transition-rate formula depends on the dipole moment operator d. For higher-order approximations, it involves the quadrupole moment and other similar terms. The A and B coefficients (which correspond to the angular-frequency energy distribution ρ(ω)) are hence:

A = (ω³/(3πε₀ħc³)) |⟨1|d|2⟩|², B = (π/(3ε₀ħ²)) |⟨1|d|2⟩|²,

where ħω = E₂ − E₁, and the A and B coefficients satisfy the given ratios for the non-degenerate case:

A/B = ħω³/(π²c³) and B₂₁ = B₁₂.
Another useful ratio follows from the Maxwell–Boltzmann distribution, which says that the number of particles in an energy level is proportional to e^{−E/(kT)}. Mathematically:

n₂/n₁ = e^{−ħω/(kT)},

where n₁ and n₂ are the numbers of particles occupying the energy levels E₁ and E₂ respectively, with E₂ > E₁ and ħω = E₂ − E₁. Then, at equilibrium the rates of upward and downward transitions must balance:

B n₁ ρ(ω) = A n₂ + B n₂ ρ(ω).

Solving for ρ(ω) under this equilibrium condition, and using the derived ratios, we get Planck's law:

ρ(ω) = (ħω³/(π²c³)) · 1/(e^{ħω/(kT)} − 1).
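The balance argument can be verified numerically. The sketch below uses the per-frequency convention from the Einstein-coefficients section above (A/B = 8πhν³/c³; the angular-frequency form ħω³/(π²c³) is equivalent), with arbitrary units for B and an arbitrary transition frequency; it checks that, with Planck's ρ, the upward and downward rates balance exactly:

```python
import math

h = 6.62607015e-34
c = 2.99792458e8
k = 1.380649e-23

def rho(nu, T):
    """Planck spectral energy density per unit frequency."""
    return 8 * math.pi * h * nu**3 / c**3 / math.expm1(h * nu / (k * T))

nu, T = 5.0e14, 6000.0        # arbitrary visible transition and temperature
x = h * nu / (k * T)

# Non-degenerate two-level system (g1 = g2 = 1):
B12 = 1.0                     # arbitrary units; only the ratios matter
B21 = B12                     # g1 B12 = g2 B21
A21 = 8 * math.pi * h * nu**3 / c**3 * B21   # A/B ratio in this convention

n1 = 1.0
n2 = n1 * math.exp(-x)        # Boltzmann population ratio

absorption = B12 * n1 * rho(nu, T)
emission = A21 * n2 + B21 * n2 * rho(nu, T)  # spontaneous + stimulated
print(absorption, emission)   # equal: the rates balance for Planck's rho
```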
History
Balfour Stewart
In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power."
Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium.
Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. According to historian D. M. Siegel: "He was not a practitioner of the more sophisticated techniques of nineteenth-century mathematical physics; he did not even make use of the functional notation in dealing with spectral distributions." He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He concluded that his experiments showed that, in the interior of an enclosure in thermal equilibrium, the radiant heat, reflected and emitted combined, leaving any part of the surface, regardless of its substance, was the same as would have left that same portion of the surface if it had been composed of lamp-black. He did not mention the possibility of ideally perfectly reflective walls; in particular he noted that highly polished real physical metals absorb very slightly.
Gustav Kirchhoff
In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber.
Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature T.
Here is used a notation different from Kirchhoff's. Here, the emitting power E(T, i) denotes a dimensioned quantity, the total radiation emitted by a body labeled by index i at temperature T. The total absorption ratio a(T, i) of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature T. (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio E(T, i)/a(T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because a(T, i) is dimensionless. Also here the wavelength-specific emitting power of the body at temperature T is denoted by E(λ, T, i) and the wavelength-specific absorption ratio by a(λ, T, i). Again, the ratio E(λ, T, i)/a(λ, T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power.
In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio E(λ, T, i)/a(λ, T, i) has one and the same value for all bodies, that is for all values of the index i. In this report there was no mention of black bodies.
In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation, of unselected quality, emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio E(T, i)/a(T, i) has one and the same value common to all bodies, that is, for every value of the material index i. Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio E(λ, T, i)/a(λ, T, i) at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid.
But more importantly, it relied on a new theoretical postulate of "perfectly black bodies", which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction.
Kirchhoff's proof considered an arbitrary non-ideal body labeled i as well as various perfect black bodies labeled BB. It required that the bodies be kept in a cavity in thermal equilibrium at temperature T. His proof intended to show that the ratio E(λ, T, i)/a(λ, T, i) was independent of the nature of the non-ideal body, however partly transparent or partly reflective it was.
His proof first argued that for wavelength λ and at temperature T, at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power E(λ, T, BB), with the dimensions of power. His proof noted that the dimensionless wavelength-specific absorption ratio a(λ, T, BB) of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorption ratio is again just E(λ, T, BB), with the dimensions of power. Kirchhoff considered, successively, thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature T. He argued that the flows of heat radiation must be the same in each case. Thus he argued that at thermal equilibrium the ratio E(λ, T, i)/a(λ, T, i) was equal to E(λ, T, BB), which may now be denoted B_λ(λ, T), a continuous function, dependent only on λ at fixed temperature T, and an increasing function of T at fixed wavelength λ, at low temperatures vanishing for visible but not for longer wavelengths, with positive values for visible wavelengths at higher temperatures, which does not depend on the nature of the arbitrary non-ideal body. (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing.)
Thus Kirchhoff's law of thermal radiation can be stated: For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature T, for every wavelength λ, the ratio of emissive power to absorptive ratio has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by B_λ(λ, T). (For our notation B_λ(λ, T), Kirchhoff's original notation was simply e.)
Kirchhoff announced that the determination of the function B_λ(λ, T) was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. That function has occasionally been called 'Kirchhoff's (emission, universal) function', though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with "Carnot's principle", which is a form of the second law.
According to Helge Kragh, "Quantum theory owes its origin to the study of thermal radiation, in particular to the "blackbody" radiation that Robert Kirchhoff had first defined in 1859–1860."
Empirical and theoretical ingredients for the scientific induction of Planck's law
In 1860, Kirchhoff predicted experimental difficulties for the empirical determination of the function that described the dependence of the black-body spectrum as a function only of temperature and wavelength. And so it turned out. It took some forty years of development of improved methods of measurement of electromagnetic radiation to get a reliable result.
In 1865, John Tyndall described radiation from electrically heated filaments and from carbon arcs as visible and invisible. Tyndall spectrally decomposed the radiation by use of a rock salt prism, which passed heat as well as visible rays, and measured the radiation intensity by means of a thermopile.
In 1880, André-Prosper-Paul Crova published a diagram of the three-dimensional appearance of the graph of the strength of thermal radiation as a function of wavelength and temperature. He determined the spectral variable by use of prisms. He analyzed the surface through what he called "isothermal" curves, sections for a single temperature, with a spectral variable on the abscissa and a power variable on the ordinate. He put smooth curves through his experimental data points. They had one peak at a spectral value characteristic for the temperature, and fell either side of it towards the horizontal axis. Such spectral sections are widely shown even today.
In a series of papers from 1881 to 1886, Langley reported measurements of the spectrum of heat radiation, using diffraction gratings and prisms, and the most sensitive detectors that he could make. He reported that there was a peak intensity that increased with temperature, that the shape of the spectrum was not symmetrical about the peak, that there was a strong fall-off of intensity when the wavelength was shorter than an approximate cut-off value for each temperature, that the approximate cut-off wavelength decreased with increasing temperature, and that the wavelength of the peak intensity decreased with temperature, so that the intensity increased strongly with temperature for short wavelengths that were longer than the approximate cut-off for the temperature.
Having read Langley, in 1888, Russian physicist V.A. Michelson published a consideration of the idea that the unknown Kirchhoff radiation function could be explained physically and stated mathematically in terms of "complete irregularity of the vibrations of ... atoms". At this time, Planck was not studying radiation closely, and believed in neither atoms nor statistical physics. Michelson produced a formula for the spectrum for temperature:
where denotes specific radiative intensity at wavelength and temperature , and where and are empirical constants.
In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. Their design has been used largely unchanged for radiation measurements to the present day. It was a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides.
The importance of the Lummer and Kurlbaum cavity radiation source was that it was an experimentally accessible source of black-body radiation, as distinct from radiation from a simply exposed incandescent solid body, which had been the nearest available experimental approximation to black-body radiation over a suitable range of temperatures. The simply exposed incandescent solid bodies, that had been used before, emitted radiation with departures from the black-body spectrum that made it impossible to find the true black-body spectrum from experiments.
Planck's views before the empirical facts led him to find his eventual law
Planck first turned his attention to the problem of black-body radiation in 1897.
Theoretical and empirical progress enabled Lummer and Pringsheim to write in 1899 that available experimental evidence was approximately consistent with the specific intensity law Cλ⁻⁵e^(−c/(λT)), where C and c denote empirically measurable constants, and where λ and T denote wavelength and temperature respectively. For theoretical reasons, Planck at that time accepted this formulation, which has an effective cut-off of short wavelengths.
Gustav Kirchhoff was Max Planck's teacher and surmised that there was a universal law for blackbody radiation; this became known as "Kirchhoff's challenge". Planck, a theorist, believed that Wilhelm Wien had discovered this law, and Planck expanded on Wien's work, presenting it in 1899 to the meeting of the German Physical Society. Experimentalists Otto Lummer, Ferdinand Kurlbaum, Ernst Pringsheim Sr., and Heinrich Rubens did experiments that appeared to support Wien's law, especially at higher frequencies (short wavelengths), which Planck so wholly endorsed at the German Physical Society that it began to be called the Wien–Planck law. However, by September 1900, the experimentalists had proven beyond a doubt that the Wien–Planck law failed at the longer wavelengths. They would present their data on October 19. Planck was informed by his friend Rubens and quickly created a formula within a few days. In June of that same year, Lord Rayleigh had created a formula that would work for long wavelengths (lower frequencies) based on the widely accepted theory of equipartition. So Planck submitted a formula combining both Rayleigh's law (or a similar equipartition theory) and Wien's law, weighted toward one or the other depending on wavelength, to match the experimental data. However, although this equation worked, Planck himself said that unless he could explain the formula, derived from a "lucky intuition", as one of "true meaning" in physics, it did not have true significance. Planck explained that thereafter followed the hardest work of his life. Planck did not believe in atoms, nor did he think the second law of thermodynamics should be statistical, because probability does not provide an absolute answer and Boltzmann's entropy law rested on the hypothesis of atoms and was statistical. But Planck was unable to find a way to reconcile his blackbody equation with continuous laws such as Maxwell's wave equations.
So in what Planck called "an act of desperation", he turned to Boltzmann's atomic law of entropy as it was the only one that made his equation work. Therefore, he used the Boltzmann constant k and his new constant h to explain the blackbody radiation law which became widely known through his published paper.
Finding the empirical law
Max Planck produced his law on 19 October 1900 as an improvement upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). In June 1900, based on heuristic theoretical considerations, Rayleigh had suggested a formula that he proposed might be checked experimentally. The suggestion was that the Stewart–Kirchhoff universal function might be of the form c₁Tλ⁻⁴exp(−c₂/(λT)). This was not the celebrated Rayleigh–Jeans formula 8πkTλ⁻⁴, which did not emerge until 1905, though it did reduce to the latter for long wavelengths, which are the relevant ones here. According to Klein, one may speculate that it is likely that Planck had seen this suggestion though he did not mention it in his papers of 1900 and 1901. Planck would have been aware of various other proposed formulas which had been offered. On 7 October 1900, Rubens told Planck that in the complementary domain (long wavelength, low frequency), and only there, Rayleigh's 1900 formula fitted the observed data well.
For long wavelengths, Rayleigh's 1900 heuristic formula approximately meant that energy was proportional to temperature, U = const.·T. It is known that dS/dU = 1/T, and this leads to dS/dU = const./U and thence to d²S/dU² = −const./U² for long wavelengths. But for short wavelengths, the Wien formula leads to 1/T = −const.·ln U + const. and thence to d²S/dU² = −const./U for short wavelengths. Planck perhaps patched together these two heuristic formulas, for long and for short wavelengths, to produce a formula

d²S/dU² = −α/(U(β + U)).

This led Planck to the formula

B_λ(T) = Cλ⁻⁵/(e^(c/(λT)) − 1),

where Planck used the symbols C and c to denote empirical fitting constants.
Planck sent this result to Rubens, who compared it with his and Kurlbaum's observational data and found that it fitted for all wavelengths remarkably well. On 19 October 1900, Rubens and Kurlbaum briefly reported the fit to the data, and Planck added a short presentation to give a theoretical sketch to account for his formula. Within a week, Rubens and Kurlbaum gave a fuller report of their measurements confirming Planck's law. Their technique for spectral resolution of the longer wavelength radiation was called the residual ray method. The rays were repeatedly reflected from polished crystal surfaces, and the rays that made it all the way through the process were 'residual', and were of wavelengths preferentially reflected by crystals of suitably specific materials.
Trying to find a physical explanation of the law
Once Planck had discovered the empirically fitting function, he constructed a physical derivation of this law. His thinking revolved around entropy rather than being directly about temperature. Planck considered a cavity with perfectly reflective walls; inside the cavity, there are finitely many distinct but identically constituted resonant oscillatory bodies of definite magnitude, with several such oscillators at each of finitely many characteristic frequencies. These hypothetical oscillators were for Planck purely imaginary theoretical investigative probes, and he said of them that such oscillators do not need to "really exist somewhere in nature, provided their existence and their properties are consistent with the laws of thermodynamics and electrodynamics". Planck did not attribute any definite physical significance to his hypothesis of resonant oscillators but rather proposed it as a mathematical device that enabled him to derive a single expression for the black body spectrum that matched the empirical data at all wavelengths. He tentatively mentioned the possible connection of such oscillators with atoms. In a sense, the oscillators corresponded to Planck's speck of carbon; the size of the speck could be small regardless of the size of the cavity, provided the speck effectively transduced energy between radiative wavelength modes.
Partly following a heuristic method of calculation pioneered by Boltzmann for gas molecules, Planck considered the possible ways of distributing electromagnetic energy over the different modes of his hypothetical charged material oscillators. This acceptance of the probabilistic approach, following Boltzmann, for Planck was a radical change from his former position, which till then had deliberately opposed such thinking proposed by Boltzmann. In Planck's words, "I considered the [quantum hypothesis] a purely formal assumption, and I did not give it much thought except for this: that I had obtained a positive result under any circumstances and at whatever cost." Heuristically, Boltzmann had distributed the energy in arbitrary merely mathematical quanta , which he had proceeded to make tend to zero in magnitude, because the finite magnitude had served only to allow definite counting for the sake of mathematical calculation of probabilities, and had no physical significance. Referring to a new universal constant of nature, , Planck supposed that, in the several oscillators of each of the finitely many characteristic frequencies, the total energy was distributed to each in an integer multiple of a definite physical unit of energy, , characteristic of the respective characteristic frequency. His new universal constant of nature, , is now known as the Planck constant.
Planck explained further that the respective definite unit, ε, of energy should be proportional to the respective characteristic oscillation frequency ν of the hypothetical oscillator, and in 1901 he expressed this with the constant of proportionality h:

ε = hν
Planck did not propose that light propagating in free space is quantized. The idea of quantization of the free electromagnetic field was developed later, and eventually incorporated into what we now know as quantum field theory.
In 1906, Planck acknowledged that his imaginary resonators, having linear dynamics, did not provide a physical explanation for energy transduction between frequencies. Present-day physics explains the transduction between frequencies in the presence of atoms by their quantum excitability, following Einstein. Planck believed that in a cavity with perfectly reflecting walls and with no matter present, the electromagnetic field cannot exchange energy between frequency components. This is because of the linearity of Maxwell's equations. Present-day quantum field theory predicts that, in the absence of matter, the electromagnetic field obeys nonlinear equations and in that sense does self-interact. Such interaction in the absence of matter has not yet been directly measured because it would require very high intensities and very sensitive and low-noise detectors, which are still in the process of being constructed. Planck believed that a field with no interactions neither obeys nor violates the classical principle of equipartition of energy, and instead remains exactly as it was when introduced, rather than evolving into a black body field. Thus, the linearity of his mechanical assumptions precluded Planck from having a mechanical explanation of the maximization of the entropy of the thermodynamic equilibrium thermal radiation field. This is why he had to resort to Boltzmann's probabilistic arguments.
Planck's law may be regarded as fulfilling the prediction of Gustav Kirchhoff that his law of thermal radiation was of the highest importance. In his mature presentation of his own law, Planck offered a thorough and detailed theoretical proof for Kirchhoff's law, theoretical proof of which until then had been sometimes debated, partly because it was said to rely on unphysical theoretical objects, such as Kirchhoff's perfectly absorbing infinitely thin black surface.
Subsequent events
It was not until five years after Planck made his heuristic assumption of abstract elements of energy or of action that Albert Einstein conceived of really existing quanta of light in 1905 as a revolutionary explanation of black-body radiation, of photoluminescence, of the photoelectric effect, and of the ionization of gases by ultraviolet light. In 1905, "Einstein believed that Planck's theory could not be made to agree with the idea of light quanta, a mistake he corrected in 1906." Contrary to Planck's beliefs of the time, Einstein proposed a model and formula whereby light was emitted, absorbed, and propagated in free space in energy quanta localized in points of space. As an introduction to his reasoning, Einstein recapitulated Planck's model of hypothetical resonant material electric oscillators as sources and sinks of radiation, but then he offered a new argument, disconnected from that model, but partly based on a thermodynamic argument of Wien, in which Planck's formula played no role. Einstein gave the energy content of such quanta in the form . Thus Einstein was contradicting the undulatory theory of light held by Planck. In 1910, criticizing a manuscript sent to him by Planck, knowing that Planck was a steady supporter of Einstein's theory of special relativity, Einstein wrote to Planck: "To me it seems absurd to have energy continuously distributed in space without assuming an aether."
According to Thomas Kuhn, it was not till 1908 that Planck more or less accepted part of Einstein's arguments for physical as distinct from abstract mathematical discreteness in thermal radiation physics. Still in 1908, considering Einstein's proposal of quantal propagation, Planck opined that such a revolutionary step was perhaps unnecessary. Until then, Planck had been consistent in thinking that discreteness of action quanta was to be found neither in his resonant oscillators nor in the propagation of thermal radiation. Kuhn wrote that, in Planck's earlier papers and in his 1906 monograph, there is no "mention of discontinuity, [nor] of talk of a restriction on oscillator energy, [nor of] any formula like ." Kuhn pointed out that his study of Planck's papers of 1900 and 1901, and of his monograph of 1906, had led him to "heretical" conclusions, contrary to the widespread assumptions of others who saw Planck's writing only from the perspective of later, anachronistic, viewpoints. Kuhn's conclusions, finding a period till 1908, when Planck consistently held his 'first theory', have been accepted by other historians.
In the second edition of his monograph, in 1912, Planck sustained his dissent from Einstein's proposal of light quanta. He proposed in some detail that absorption of light by his virtual material resonators might be continuous, occurring at a constant rate in equilibrium, as distinct from quantal absorption. Only emission was quantal. This has at times been called Planck's "second theory".
It was not till 1919 that Planck in the third edition of his monograph more or less accepted his 'third theory', that both emission and absorption of light were quantal.
The colourful term "ultraviolet catastrophe" was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of "catastrophe". It was first noted by Lord Rayleigh in 1900, and then in 1901 by Sir James Jeans; and later, in 1905, by Einstein when he wanted to support the idea that light propagates as discrete packets, later called 'photons', and by Rayleigh and by Jeans.
In 1913, Bohr gave another formula with a further different physical meaning to the quantity hν. In contrast to Planck's and Einstein's formulas, Bohr's formula referred explicitly and categorically to energy levels of atoms. Bohr's formula was hν = E2 − E1, where E2 and E1 denote the energy levels of quantum states of an atom, with quantum numbers n2 and n1. The symbol ν denotes the frequency of a quantum of radiation that can be emitted or absorbed as the atom passes between those two quantum states. In contrast to Planck's model, the frequency ν has no immediate relation to frequencies that might describe those quantum states themselves.
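Bohr's relation can be illustrated numerically. The hydrogen energy-level formula E_n = −13.6 eV / n² used below is a standard textbook value, not something stated in the text above; this is a sketch, not a reproduction of Bohr's own calculation.

```python
# Sketch: photon frequency for a hydrogen 2 -> 1 transition via Bohr's
# relation h*nu = E2 - E1. Energy levels E_n = -13.6 eV / n^2 (an assumed
# textbook value, not taken from the text).

H = 6.62607015e-34      # Planck constant, J*s (exact by SI definition)
EV = 1.602176634e-19    # joules per electronvolt

def transition_frequency(n_upper, n_lower, e0_ev=13.6):
    """Frequency of the quantum emitted as the atom drops n_upper -> n_lower."""
    delta_e = e0_ev * (1 / n_lower**2 - 1 / n_upper**2) * EV  # joules
    return delta_e / H  # Bohr: nu = (E2 - E1) / h

nu = transition_frequency(2, 1)  # Lyman-alpha, roughly 2.47e15 Hz
```

Note that the frequency depends only on the difference of the two energy levels, in line with the text's remark that ν has no immediate relation to frequencies describing the states themselves.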
Later, in 1924, Satyendra Nath Bose developed the theory of the statistical mechanics of photons, which allowed a theoretical derivation of Planck's law. The actual word 'photon' was invented still later, by G.N. Lewis in 1926, who mistakenly believed that photons were conserved, contrary to Bose–Einstein statistics; nevertheless the word 'photon' was adopted to express the Einstein postulate of the packet nature of light propagation. In an electromagnetic field isolated in a vacuum in a vessel with perfectly reflective walls, such as was considered by Planck, indeed the photons would be conserved according to Einstein's 1905 model, but Lewis was referring to a field of photons considered as a system closed with respect to ponderable matter but open to exchange of electromagnetic energy with a surrounding system of ponderable matter, and he mistakenly imagined that still the photons were conserved, being stored inside atoms.
Ultimately, Planck's law of black-body radiation contributed to Einstein's concept of quanta of light carrying linear momentum, which became the fundamental basis for the development of quantum mechanics.
The above-mentioned linearity of Planck's mechanical assumptions, not allowing for energetic interactions between frequency components, was superseded in 1925 by Heisenberg's original quantum mechanics. In his paper submitted on 29 July 1925, Heisenberg's theory accounted for Bohr's above-mentioned formula of 1913. It admitted non-linear oscillators as models of atomic quantum states, allowing energetic interaction between their own multiple internal discrete Fourier frequency components, on the occasions of emission or absorption of quanta of radiation. The frequency of a quantum of radiation was that of a definite coupling between internal atomic meta-stable oscillatory quantum states. At that time, Heisenberg knew nothing of matrix algebra, but Max Born read the manuscript of Heisenberg's paper and recognized the matrix character of Heisenberg's theory. Then Born and Jordan published an explicitly matrix theory of quantum mechanics, based on, but in form distinctly different from, Heisenberg's original quantum mechanics; it is the Born and Jordan matrix theory that is today called matrix mechanics. Heisenberg's explanation of the Planck oscillators, as non-linear effects apparent as Fourier modes of transient processes of emission or absorption of radiation, showed why Planck's oscillators, viewed as enduring physical objects such as might be envisaged by classical physics, did not give an adequate explanation of the phenomena.
Nowadays, as a statement of the energy of a light quantum, often one finds the formula E = ħω, where ħ = h/2π and ω denotes angular frequency, and less often the equivalent formula E = hν. This statement about a really existing and propagating light quantum, based on Einstein's, has a physical meaning different from that of Planck's above statement about the abstract energy units to be distributed amongst his hypothetical resonant material oscillators.
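The equivalence of the two statements is a matter of definitions; a quick numeric check (the constants are standard SI/CODATA values, not taken from the text):

```python
import math

H = 6.62607015e-34          # Planck constant h, J*s (exact by SI definition)
HBAR = H / (2 * math.pi)    # reduced Planck constant hbar = h / (2*pi)

nu = 5.0e14                 # an arbitrary optical frequency, Hz
omega = 2 * math.pi * nu    # corresponding angular frequency, rad/s

# E = h*nu and E = hbar*omega are the same quantity:
assert math.isclose(H * nu, HBAR * omega)
```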
An article by Helge Kragh published in Physics World gives an account of this history.
Paramecium

Paramecium (plural "paramecia" only when used as a vernacular name) is a genus of eukaryotic, unicellular ciliates, widespread in freshwater, brackish, and marine environments. Paramecia are often abundant in stagnant basins and ponds. Because some species are readily cultivated and easily induced to conjugate and divide, they have been widely used in classrooms and laboratories to study biological processes. Paramecium species are commonly studied as model organisms of the ciliate group and have been characterized as the "white rats" of the phylum Ciliophora.
Historical background
Paramecium were among the first ciliates to be observed by microscopists, in the late 17th century. They were most likely known to the Dutch pioneer of protozoology, Antonie van Leeuwenhoek, and were clearly described by his contemporary Christiaan Huygens in a letter from 1678. The earliest known illustration of a Paramecium species was published anonymously in Philosophical Transactions of the Royal Society in 1703.
In 1718, the French mathematics teacher and microscopist Louis Joblot published a description and illustration of a microscopic "fish", which he discovered in an infusion of oak bark in water. Joblot gave this creature a name meaning "slipper", and the phrase "slipper animalcule" remained in use as a colloquial epithet for Paramecium throughout the 18th and 19th centuries.
The name "Paramecium" – constructed from the Greek paramēkēs ("oblong") – was coined in 1752 by the English microscopist John Hill, who applied the name generally to "Animalcules which have no visible limbs or tails, and are of an irregularly oblong figure." In 1773, O. F. Müller, the first researcher to place the genus within the Linnaean system of taxonomy, adopted the name Paramecium but changed the spelling to Paramæcium. In 1783, Johann Hermann changed the spelling once more, to Paramœcium. C. G. Ehrenberg, in a major study of the infusoria published in 1838, restored Hill's original spelling for the name, and most researchers have followed his lead.
Description
Species of Paramecium range in size from 0.06 mm to 0.3 mm in length. Cells are typically ovoid, elongate, or foot- or cigar-shaped.
The body of the cell is enclosed by a stiff but elastic structure called the pellicle. The pellicle consists of an outer cell membrane (plasma membrane), a layer of flattened membrane-bound sacs called alveoli, and an inner proteinaceous layer called the epiplasm. The pellicle is not smooth, but textured with hexagonal or rectangular depressions. Each of these polygons is perforated by a central aperture through which a single cilium projects. Between the alveolar sacs of the pellicle, most species of Paramecium have closely spaced spindle-shaped trichocysts, explosive organelles that discharge thin, non-toxic filaments, often used for defensive purposes.
Typically, an anal pore (cytoproct) is located on the ventral surface, in the posterior half of the cell. In all species, there is a deep oral groove running from the anterior of the cell to its midpoint. This is lined with inconspicuous cilia which beat continuously, drawing food into the cell. Paramecium are primarily heterotrophic, feeding on bacteria and other small organisms. A few species are mixotrophs, deriving some nutrients from endosymbiotic algae (chlorella) carried in the cytoplasm of the cell.
Osmoregulation is carried out by contractile vacuoles, which actively expel water from the cell to compensate for fluid absorbed by osmosis from its surroundings. The number of contractile vacuoles varies depending on the species.
Movement
A Paramecium propels itself by whip-like movements of the cilia, which are arranged in tightly spaced rows around the outside of the body. The beat of each cilium has two phases: a fast "effective stroke," during which the cilium is relatively stiff, followed by a slow "recovery stroke," during which the cilium curls loosely to one side and sweeps forward in a counter-clockwise fashion. The densely arrayed cilia move in a coordinated fashion, with waves of activity moving across the "ciliary carpet," creating an effect sometimes likened to that of the wind blowing across a field of grain.
The Paramecium spirals through the water as it progresses. When it happens to encounter an obstacle, the "effective stroke" of its cilia is reversed and the organism swims backward for a brief time, before resuming its forward progress. This is called the avoidance reaction. If it runs into the solid object again, it repeats this process, until it can get past the object.
It has been calculated that a Paramecium expends more than half of its energy in propelling itself through the water. This ciliary method of locomotion has been found to be less than 1% efficient. This low percentage is nevertheless close to the maximum theoretical efficiency that can be achieved by an organism equipped with cilia as short as those of the members of Paramecium.
Gathering food
Paramecium feed on microorganisms such as bacteria, algae, and yeasts. To gather food, the Paramecium makes movements with cilia to sweep prey organisms, along with some water, through the oral groove (vestibulum, or vestibule), and into the cell. The food passes from the cilia-lined oral groove into a narrower structure known as the buccal cavity (gullet). From there, food particles pass through a small opening called the cytostome, or cell mouth, and move into the interior of the cell. As food enters the cell, it is gathered into food vacuoles, which are periodically closed off and released into the cytoplasm, where they begin circulating through the cell body by the streaming movement of the cell contents, a process called cyclosis or cytoplasmic streaming. As a food vacuole moves along, enzymes from the cytoplasm enter it, to digest the contents. As enzymatic digestion proceeds, the vacuole contents become more acidic. Within five minutes of a vacuole's formation, the pH of its contents drops from 7 to 3. As digested nutrients pass into the cytoplasm, the vacuole shrinks. When the fully digested vacuole reaches the anal pore, it ruptures, expelling its waste contents outside the cell.
Symbiosis
Some species of Paramecium form mutualistic relationships with other organisms. Paramecium bursaria and Paramecium chlorelligerum harbour endosymbiotic green algae, from which they derive nutrients and a degree of protection from predators such as Didinium nasutum. Numerous bacterial endosymbionts have been identified in species of Paramecium. Some intracellular bacteria, known as kappa particles, give Paramecium the ability to kill other strains of Paramecium that lack kappa particles.
Genome
The genome of the species Paramecium tetraurelia has been sequenced, providing evidence for three whole-genome duplications.
In some ciliates, like Stylonychia and Paramecium, only UGA is decoded as a stop codon, while UAG and UAA are reassigned as sense codons (that is, codons that code for standard amino acids), coding for the amino acid glutamine.
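The reassignment can be sketched as an override of the standard codon table (a minimal hand-written illustration covering only the three stop codons; this corresponds to NCBI translation table 6, the ciliate nuclear code, where UAA and UAG encode glutamine):

```python
# Standard genetic code: all three of these are stop codons ('*' = stop).
STANDARD_STOPS = {"UAA": "*", "UAG": "*", "UGA": "*"}

def translate_codon_ciliate(codon):
    """Decode one of the three standard stop codons under the ciliate nuclear code."""
    if codon == "UGA":
        return "*"          # UGA remains the only stop codon
    if codon in ("UAA", "UAG"):
        return "Q"          # reassigned as sense codons coding for glutamine
    raise ValueError("only the three standard stop codons are handled in this sketch")

assert translate_codon_ciliate("UGA") == "*"
assert translate_codon_ciliate("UAA") == "Q"
```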
Learning
The question of whether Paramecium exhibit learning has been the object of a great deal of experimentation, yielding equivocal results. However, a study published in 2006 seems to show that Paramecium caudatum may be trained, through the application of a 6.5 volt electric current, to discriminate between brightness levels. This experiment has been cited as a possible instance of cell memory, or epigenetic learning in organisms with no nervous system.
Reproduction and sexual phenomena
Reproduction
Like all ciliates, Paramecium have a dual nuclear apparatus, consisting of a polyploid macronucleus, and one or more diploid micronuclei. The macronucleus controls non-reproductive cell functions, expressing the genes needed for daily functioning. The micronucleus is the generative, or germline nucleus, containing the genetic material that is passed along from one generation to the next.
Paramecium reproduction is asexual, by binary fission, which has been characterized as "the sole mode of reproduction in ciliates" (conjugation being a sexual phenomenon, not directly resulting in increase of numbers). During fission, the macronucleus splits by a type of amitosis, and the micronuclei undergo mitosis. The cell then divides transversally, and each new cell obtains a copy of the micronucleus and the macronucleus.
Fission may occur spontaneously, in the course of the vegetative cell cycle. Under certain conditions, it may be preceded by self-fertilization (autogamy), or it may immediately follow conjugation, in which Paramecium of compatible mating types fuse temporarily and exchange genetic material.
Conjugation
In ciliates such as Paramecium, conjugation is a sexual phenomenon that results in genetic recombination and nuclear reorganization within the cell. During conjugation, two Paramecium of a compatible mating type come together and form a bridge between their cytoplasms. Their respective micronuclei undergo meiosis, and haploid micronuclei are exchanged over the bridge. Following conjugation, the cells separate. The old macronuclei are destroyed, and both post-conjugants form new macronuclei, by amplification of DNA in their micronuclei. Conjugation is followed by one or more "exconjugant divisions."
Stages of conjugation
In Paramecium caudatum, the stages of conjugation are as follows (see diagram at right):
Compatible mating strains meet and partly fuse.
The micronuclei undergo meiosis, producing four haploid micronuclei per cell.
Three of these micronuclei disintegrate. The fourth undergoes mitosis.
The two cells exchange a micronucleus.
The cells then separate.
The micronuclei in each cell fuse, forming a diploid micronucleus.
Mitosis occurs three times, giving rise to eight micronuclei.
Four of the new micronuclei transform into macronuclei, and the old macronucleus disintegrates.
Binary fission occurs twice, yielding four identical daughter cells.
Aging
In the asexual fission phase of growth, during which cell divisions occur by mitosis rather than meiosis, clonal aging occurs, leading to a gradual loss of vitality. In some species, such as the well-studied Paramecium tetraurelia, the asexual line of clonally aging Paramecium loses vitality and expires after about 200 fissions if the cells fail to undergo autogamy or conjugation. The basis for clonal aging was clarified by the transplantation experiments of Aufderheide in 1986. When macronuclei of clonally young Paramecium were injected into Paramecium of standard clonal age, the lifespan (in clonal fissions) of the recipient was prolonged. In contrast, transfer of cytoplasm from clonally young Paramecium did not prolong the lifespan of the recipient. These experiments indicated that the macronucleus, rather than the cytoplasm, is responsible for clonal aging. Other experiments, by Smith-Sonneborn, by Holmes and Holmes, and by Gilley and Blackburn, demonstrated that DNA damage increases dramatically during clonal aging. Thus, DNA damage in the macronucleus appears to be the cause of aging in P. tetraurelia. In this single-celled protist, aging appears to proceed as it does in multicellular eukaryotes, as described in the DNA damage theory of aging.
Meiosis and rejuvenation
When clonally aged P. tetraurelia are stimulated to undergo meiosis in association with either conjugation or automixis, the genetic descendants are rejuvenated and are able to undergo many more mitotic binary-fission divisions. During conjugation or automixis, the micronuclei of the cell(s) undergo meiosis, the old macronucleus disintegrates, and a new macronucleus is formed by replication of the micronuclear DNA that has recently undergone meiosis. There is apparently little, if any, DNA damage in the new macronucleus. These findings further support the idea that clonal aging is due, in large part, to a progressive accumulation of DNA damage, and that rejuvenation is due to the repair of this damage in the micronucleus during meiosis. Meiosis appears to be an adaptation for DNA repair and rejuvenation in P. tetraurelia. In P. tetraurelia, the CtIP protein is a key factor needed for the completion of meiosis during sexual reproduction and for the recovery of viable sexual progeny. CtIP and the Mre11 nuclease complex are essential for accurate processing and repair of double-strand breaks during homologous recombination.
The adaptive benefit of meiosis and self-fertilization in response to starvation appears to be independent of the generation of any new genetic variation in P. tetraurelia. This observation suggests that the underlying molecular mechanism of meiosis provides a fitness advantage regardless of any concomitant effect of sex on genetic diversity.
Video gallery
List of species
Paramecium aurelia species complex:
Paramecium primaurelia
Paramecium biaurelia
Paramecium triaurelia
Paramecium tetraurelia
Paramecium pentaurelia
Paramecium sexaurelia
Paramecium septaurelia
Paramecium octaurelia
Paramecium novaurelia
Paramecium decaurelia
Paramecium undecaurelia
Paramecium dodecaurelia
Paramecium tredecaurelia
Paramecium quadecaurelia
Paramecium sonneborni
Other species:
Paramecium buetschlii
Paramecium bursaria
Paramecium calkinsi
Paramecium caudatum
Paramecium chlorelligerum
Paramecium duboscqui
Paramecium grohmannae
Paramecium jenningsi
Paramecium multimicronucleatum
Paramecium nephridiatum
Paramecium polycaryum
Paramecium putrinum
Paramecium schewiakoffi
Paramecium woodruffi
Monazite

Monazite is a primarily reddish-brown phosphate mineral that contains rare-earth elements. Due to variability in composition, monazite is considered a group of minerals. The most common species of the group is monazite-(Ce), that is, the cerium-dominant member of the group. It usually occurs as small isolated crystals. It has a hardness of 5.0 to 5.5 on the Mohs scale of mineral hardness and is relatively dense, about 4.6 to 5.7 g/cm3. Five species of monazite are most common, depending on the relative amounts of the rare-earth elements in the mineral:
monazite-(Ce), CePO4 (the most common member),
monazite-(La), LaPO4,
monazite-(Nd), NdPO4,
monazite-(Sm), SmPO4,
monazite-(Pr), PrPO4.
The elements in parentheses are listed in the order of their relative proportion within the mineral: lanthanum is the most common rare-earth element in monazite-(La), and so forth. Silica (SiO2) is present in trace amounts, as well as small amounts of uranium and thorium. Due to the alpha decay of thorium and uranium, monazite contains a significant amount of helium, which can be extracted by heating.
The following analyses are of monazite from: (I.) Burke County, North Carolina, US; (II.) Arendal, Norway; (III.) Emmaville, New South Wales, Australia.
Monazite is an important ore for thorium, lanthanum, and cerium. It is often found in placer deposits. India, Madagascar, and South Africa have large deposits of monazite sands. The deposits in India are particularly rich in monazite.
Monazite is radioactive due to the presence of thorium and, less commonly, uranium. The radiogenic decay of uranium and thorium to lead enables monazite to be dated through monazite geochronology. Monazite crystals often have multiple distinct zones that formed through successive geologic events that led to monazite crystallization. These domains can be dated to gain insight into the geologic history of the host rocks.
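Monazite geochronology rests on the standard radioactive-decay age equation t = (1/λ)·ln(1 + D/P), where D/P is the measured daughter-to-parent atomic ratio. A sketch for the 232Th → 208Pb system follows; the decay constant is the standard value for 232Th, while the measured ratio is invented purely for illustration:

```python
import math

# Decay constant of 232Th, per year (half-life ~14.05 billion years).
LAMBDA_TH232 = 4.9475e-11

def th_pb_age(pb208_th232_ratio):
    """Age in years from a measured radiogenic 208Pb/232Th atomic ratio."""
    return math.log(1 + pb208_th232_ratio) / LAMBDA_TH232

# A hypothetical monazite domain with 208Pb*/232Th = 0.05
# dates to roughly a billion years:
age = th_pb_age(0.05)
```

Dating several zones of one crystal with such an equation is what lets successive growth domains record successive geologic events.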
The name monazite comes from the Greek monazein ("to be solitary"), via German Monazit, in allusion to its isolated crystals.
Structure
All monazites adopt the same structure, meaning that the connectivity of the atoms is very similar to that in other compounds of the type MPO4. The M(III) centers have a distorted coordination sphere, being surrounded by eight oxygen atoms with M–O distances around 2.6 Å. The phosphate anion is tetrahedral, as usual. The same structural motif is observed for lead chromate (PbCrO4). Monazite also shares many structural similarities with zircon, xenotime, scheelite, anhydrite, barite, and rhabdophane.
Mining history
Monazite sand from Brazil was first noticed in sand carried in ship's ballast by Carl Auer von Welsbach in the 1880s. Von Welsbach was looking for thorium for his newly invented incandescent mantles. Monazite sand was quickly adopted as the thorium source and became the foundation of the rare-earth industry.
Monazite sand was also briefly mined in North Carolina, but, shortly thereafter, extensive deposits in southern India were found. Brazilian and Indian monazite dominated the industry before World War II, after which major mining activity transferred to South Africa. There are also large monazite deposits in Australia.
Monazite was the only significant source of commercial lanthanides, but because of concern over the disposal of the radioactive daughter products of thorium, bastnäsite came to displace monazite in the production of lanthanides in the 1960s due to its much lower thorium content. Increased interest in thorium for nuclear energy may bring monazite back into commercial use.
Mineralization and extraction
Because of their high density, monazite minerals concentrate in alluvial sands when released by the weathering of pegmatites. These so-called placer deposits are often beach or fossil beach sands and contain other heavy minerals of commercial interest such as zircon and ilmenite. Monazite can be isolated as a nearly pure concentrate by the use of gravity, magnetic, and electrostatic separation.
Monazite sand deposits are prevalently of the monazite-(Ce) composition. Typically, the lanthanides in such monazites contain about 45–48% cerium, about 24% lanthanum, about 17% neodymium, about 5% praseodymium, and minor quantities of samarium, gadolinium, and yttrium. Europium concentrations tend to be low, about 0.05%. South African "rock" monazite, from Steenkampskraal, was processed in the 1950s and early 1960s by the Lindsay Chemical Division of American Potash and Chemical Corporation, at the time the largest producer of lanthanides in the world. Steenkampskraal monazite provided a supply of the complete set of lanthanides. Very low concentrations of the heaviest lanthanides in monazite justified the term "rare" earth for these elements, with prices to match. Thorium content of monazite is variable and sometimes can be up to 20–30%. Monazite from certain carbonatites or from Bolivian tin ore veins is essentially thorium-free. However, commercial monazite sands typically contain between 6 and 12% thorium oxide.
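The typical distribution quoted above translates directly into recoverable masses. The sketch below uses the midpoints of the percentage ranges from the text; minor constituents such as samarium, gadolinium, and yttrium are omitted, so the shares do not sum to 100%:

```python
# Typical lanthanide distribution in monazite-(Ce) sand, midpoints of the
# percentages quoted in the text (illustrative, not an assay of any deposit).
TYPICAL_SHARES = {"Ce": 46.5, "La": 24.0, "Nd": 17.0, "Pr": 5.0, "Eu": 0.05}

def kg_per_tonne_lanthanides(element):
    """kg of an element per metric ton of contained lanthanides."""
    return TYPICAL_SHARES[element] * 1000.0 / 100.0

ce_kg = kg_per_tonne_lanthanides("Ce")   # about 465 kg of cerium per tonne
eu_kg = kg_per_tonne_lanthanides("Eu")   # only about 0.5 kg of europium
```

The three-orders-of-magnitude gap between the cerium and europium figures is the arithmetic behind the text's remark that low concentrations of the heavier lanthanides justified the term "rare" earth.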
Acid cracking
The original process for "cracking" monazite so as to extract the thorium and lanthanide content was to heat it with concentrated sulfuric acid to temperatures between for several hours. Variations in the ratio of acid to ore, the extent of heating, and the extent to which water was added afterwards led to several different processes to separate thorium from the lanthanides. One of the processes caused the thorium to precipitate out as a phosphate or pyrophosphate in crude form, leaving a solution of lanthanide sulfates, from which the lanthanides could be easily precipitated as a double sodium sulfate. The acid methods led to the generation of considerable acid waste, and loss of the phosphate content of the ore.
Alkaline cracking
A more recent process uses hot sodium hydroxide solution (73%) at about . This process allows the valuable phosphate content of the ore to be recovered as crystalline trisodium phosphate. The lanthanide/thorium hydroxide mixture can be treated with hydrochloric acid to provide a solution of lanthanide chlorides, and an insoluble sludge of the less-basic thorium hydroxide.
Extraction of rare-earth metals from monazite ore
The extraction of rare-earth metals from monazite ore begins with digestion with sulfuric acid followed by aqueous extraction. The process requires many neutralizations and filtrations.
The final products yielded by this process are thorium-phosphate concentrate, RE hydroxides, and uranium concentrate. Depending on the relative market prices of uranium, thorium, and rare-earth elements, as well as the availability of customers and the logistics of delivering to them, some or all of those products may be economical to sell or further process into a marketable form, while others constitute tailings for disposal. Products of the uranium and thorium decay series, particularly radium, will be present in trace amounts and form a radiotoxic hazard. While radium-228 (a product of thorium decay) will be present only in extremely minute amounts (less than one milligram per metric ton of thorium) and will decay away with a half-life of roughly 5.75 years, radium-226 will be present at a ratio above 300 milligrams per metric ton of uranium and, due to its long half-life (~1600 years), will essentially remain with the residue. As radium forms the least soluble alkaline-earth-metal sulfate known, radium sulfate will be present among the solid filtration products after sulfuric acid has been added.
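The contrast between the two radium isotopes follows directly from exponential decay, N(t) = N0 · 2^(−t/T½). A quick sketch using the half-lives quoted above (the 60-year storage period is an arbitrary illustration):

```python
def remaining_fraction(years, half_life_years):
    """Fraction of a radionuclide remaining after `years` of decay."""
    return 0.5 ** (years / half_life_years)

# After a hypothetical 60 years of storage:
ra228 = remaining_fraction(60, 5.75)    # well under 0.1% of the Ra-228 is left
ra226 = remaining_fraction(60, 1600)    # over 95% of the Ra-226 remains
```

This is why the Ra-228 burden decays away within decades while the Ra-226 essentially stays with the residue.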
Containing nuclear waste
Two studies have investigated the ability of monazite to act as a host for nuclear byproducts from high-grade plutonium in decommissioned nuclear weapons and from spent fuel from nuclear reactors: one tested the radioactive-waste storage capability of synthetic monazite by submerging it in a contaminated wastewater system for an extended period of time, and the other compared the durability of the crystalline structures of multiple minerals. Results from both investigations show that monazite is one of the better options for storage in comparison to the previously used borosilicate glass.
One study, done at Oak Ridge National Laboratory in Tennessee, compared the performance of synthetic monazite with that of borosilicate glass in radioactive waste management. Samples of synthetic monazite and borosilicate glass were soaked in contaminated simulated Savannah River defense wastes for 28 days, during which the leaching rates from both materials were measured. The results show that synthetic monazite is a far more effective material for containing radioactive waste, due to its low leaching rate and slow corrosion rate.
A second study found that natural monazite has an enhanced ability to deal with radiation byproducts due to its radiation "resistance": it is able to remain crystalline after being subjected to high amounts of alpha-decay radiation, or to recover after becoming amorphized. Due to this high durability, it is seen as a better alternative for hosting materials such as radioactive strontium than the other minerals tested. Synthetic monazite has also been shown to have durability similar to that of the natural crystalline samples after it becomes fully amorphized.
Bastnäsite

The mineral bastnäsite (or bastnaesite) is one of a family of three fluorocarbonate minerals, which includes bastnäsite-(Ce) with a formula of (Ce, La)CO3F, bastnäsite-(La) with a formula of (La, Ce)CO3F, and bastnäsite-(Y) with a formula of (Y, Ce)CO3F. Some of the bastnäsites contain OH− instead of F− and receive the name of hydroxylbastnäsite. Most bastnäsite is bastnäsite-(Ce), and cerium is by far the most common of the rare earths in this class of minerals. Bastnäsite and the phosphate mineral monazite are the two largest sources of cerium and other rare-earth elements.
Bastnäsite was first described by the Swedish chemist Wilhelm Hisinger in 1838. It is named for the Bastnäs mine near Riddarhyttan, Västmanland, Sweden.
Bastnäsite also occurs as very high-quality specimens at the Zagi Mountains, Pakistan.
Bastnäsite occurs in alkali granite and syenite and in associated pegmatites. It also occurs in carbonatites and in associated fenites and other metasomatites.
Composition
Bastnäsite has cerium, lanthanum and yttrium in its generalized formula but officially the mineral is divided into three minerals based on the predominant rare-earth element. There is bastnäsite-(Ce) with a more accurate formula of (Ce, La)CO3F. There is also bastnäsite-(La) with a formula of (La, Ce)CO3F. And finally there is bastnäsite-(Y) with a formula of (Y, Ce)CO3F. There is little difference in the three in terms of physical properties and most bastnäsite is bastnäsite-(Ce). Cerium in most natural bastnäsites usually dominates the others. Bastnäsite and the phosphate mineral monazite are the two largest sources of cerium, an important industrial metal.
Bastnäsite is closely related to the mineral series parisite. The two are both rare-earth fluorocarbonates, but parisite's formula contains calcium (and a small amount of neodymium) and a different ratio of constituent ions. Parisite could be viewed as a formula unit of calcite (CaCO3) added to two formula units of bastnäsite. In fact, the two have been shown to alter back and forth with the addition or loss of CaCO3 in natural environments.
Bastnäsite forms a series with the minerals hydroxylbastnäsite-(Ce) and hydroxylbastnäsite-(Nd). The three are members of a substitution series that involves the possible substitution of fluoride (F−) ions with hydroxyl (OH−) ions.
Name
Bastnäsite gets its name from its type locality, the Bastnäs Mine, Riddarhyttan, Västmanland, Sweden. Ore from the Bastnäs Mine led to the discovery of several new minerals and chemical elements by Swedish scientists such as Jöns Jakob Berzelius, Wilhelm Hisinger and Carl Gustav Mosander. Among these are the chemical elements cerium, which was described by Hisinger in 1803, and lanthanum in 1839. Hisinger, who was also the owner of the Bastnäs mine, chose to name one of the new minerals bastnäsit when it was first described by him in 1838.
Occurrence
Although scarce and never found in great concentrations, bastnäsite is one of the more common rare-earth carbonates. It has been found in karst bauxite deposits in Hungary, Greece, and the Balkans region. It also occurs in carbonatites, a rare carbonate igneous intrusive rock, at the Fen Complex, Norway; Bayan Obo, Inner Mongolia, China; Kangankunde, Malawi; Kizilcaoren, Turkey; and the Mountain Pass rare earth mine in California, US. At Mountain Pass, bastnäsite is the leading ore mineral. Some bastnäsite has been found in the unusual granites of the Langesundsfjord area, Norway; the Kola Peninsula, Russia; and at Mont Saint-Hilaire, Quebec, and the Thor Lake deposits, Northwest Territories, Canada. Hydrothermal sources have also been reported.
The formation of hydroxylbastnasite (NdCO3OH) can also occur via the crystallization of a rare-earth bearing amorphous precursor. With increasing temperature, the habit of NdCO3OH crystals changes progressively to more complex spherulitic or dendritic morphologies. The development of these crystal morphologies has been suggested to be controlled by the level at which supersaturation is reached in the aqueous solution during the breakdown of the amorphous precursor. At higher temperature (e.g., 220 °C) and after rapid heating (e.g. < 1 h) the amorphous precursor breaks down rapidly and the fast supersaturation promotes spherulitic growth. At a lower temperature (e.g., 165 °C) and slow heating (100 min) the supersaturation levels are approached more slowly than required for spherulitic growth, and thus more regular triangular pyramidal shapes form.
Mining history
In 1949, the huge carbonatite-hosted bastnäsite deposit was discovered at Mountain Pass, San Bernardino County, California. This discovery alerted geologists to the existence of a whole new class of rare earth deposit: the rare-earth-bearing carbonatite. Other examples were soon recognized, particularly in Africa and China. The exploitation of this deposit began in the mid-1960s after it had been purchased by Molycorp (Molybdenum Corporation of America). The lanthanide composition of the ore included 0.1% europium oxide, which was needed by the color television industry to provide the red phosphor and maximize picture brightness. The composition of the lanthanides was about 49% cerium, 33% lanthanum, 12% neodymium, and 5% praseodymium, with some samarium and gadolinium, i.e., distinctly more lanthanum and less neodymium and heavy lanthanides than commercial monazite. The europium content was at least double that of a typical monazite. Mountain Pass bastnäsite was the world's major source of lanthanides from the 1960s to the 1980s. Thereafter, China became an increasingly important rare earth supplier. Chinese deposits of bastnäsite include several in Sichuan Province, and the massive deposit at Bayan Obo, Inner Mongolia, which had been discovered early in the 20th century but not exploited until much later. Bayan Obo is currently (2008) providing the majority of the world's lanthanides. Bayan Obo bastnäsite occurs in association with monazite (plus enough magnetite to sustain one of the largest steel mills in China), and, unlike carbonatite bastnäsites, is relatively closer to monazite lanthanide compositions, with the exception of its generous 0.2% content of europium.
Ore technology
At Mountain Pass, bastnäsite ore was finely ground and subjected to flotation to separate the bulk of the bastnäsite from the accompanying barite, calcite, and dolomite. Marketable products included each of the major intermediates of the ore dressing process: flotation concentrate, acid-washed flotation concentrate, calcined acid-washed bastnäsite, and finally a cerium concentrate, which was the insoluble residue left after the calcined bastnäsite had been leached with hydrochloric acid. The lanthanides that dissolved as a result of the acid treatment were subjected to solvent extraction, to capture the europium and purify the other individual components of the ore. A further product included a lanthanide mix, depleted of much of the cerium and essentially all of the samarium and heavier lanthanides. The calcination of bastnäsite drove off the carbon dioxide content, leaving an oxide-fluoride in which the cerium content had become oxidized to the less basic quadrivalent state. However, the high temperature of the calcination gave a less-reactive oxide, and the use of hydrochloric acid, which can cause reduction of quadrivalent cerium, led to an incomplete separation of cerium and the trivalent lanthanides. By contrast, in China, processing of bastnäsite, after concentration, starts with heating with sulfuric acid.
Extraction of rare-earth metals
Bastnäsite ore is typically used to produce rare-earth metals. The following steps detail the rare-earth-metal extraction process from the ore.
1. After extraction, bastnäsite ore, typically averaging about 7% REO (rare-earth oxides), is fed into the process.
2. The ore goes through comminution using rod mills, ball mills, or autogenous mills.
3. Steam is used to condition the ground ore, along with soda ash, fluosilicate, and usually Tail Oil C-30. This coats the various rare-earth minerals with flocculants, collectors, or modifiers for easier separation in the next step.
4. Flotation using the previous chemicals separates the gangue from the rare-earth minerals.
5. The rare-earth concentrate is thickened and large particles are filtered out.
6. Excess water is removed by heating to ~100 °C.
7. HCl is added to the solution to reduce the pH to < 5. This enables certain REMs (rare-earth metals), such as Ce, to become soluble.
8. An oxidizing roast further concentrates the material to approximately 85% REO. This is done at ~100 °C, and higher if necessary.
9. The solution is concentrated further and large particles are filtered out again.
10. Reduction agents (chosen by region) are used to remove Ce, typically as cerium carbonate or CeO2.
11. Solvents are added (solvent type and concentration based on region, availability, and cost) to help separate Eu, Sm, and Gd from La, Nd, and Pr.
12. Reduction agents (chosen by region) are used to oxidize Eu, Sm, and Gd.
13. Eu is precipitated and calcined.
14. Gd is precipitated as an oxide.
15. Sm is precipitated as an oxide.
16. Solvent is recycled into step 11. Additional solvent is added based on concentration and purity.
17. La is separated from Nd, Pr, and the extraction solvent (SX).
18. Nd and Pr are separated; the SX solvent goes on for recovery and recycling.
19. One way to collect the La is to add HNO3, creating La(NO3)3. The HNO3 is typically added at high molarity (1–5 M), depending on the La concentration and amount.
20. Alternatively, HCl is added, creating LaCl3. The HCl is added at 1–5 M, depending on the La concentration.
21. Solvent from the La, Nd, and Pr separation is recycled to step 11.
22. Nd is precipitated as an oxide product.
23. Pr is precipitated as an oxide product.
| Physical sciences | Minerals | Earth science |
191162 | https://en.wikipedia.org/wiki/Drainage%20basin | Drainage basin | A drainage basin is an area of land in which all flowing surface water converges to a single point, such as a river mouth, or flows into another body of water, such as a lake or ocean. A basin is separated from adjacent basins by a perimeter, the drainage divide, made up of a succession of elevated features, such as ridges and hills. A basin may consist of smaller basins that merge at river confluences, forming a hierarchical pattern.
Other terms for a drainage basin are catchment area, catchment basin, drainage area, river basin, water basin, and impluvium. In North America, they are commonly called a watershed, though in other English-speaking places, "watershed" is used only in its original sense, that of the drainage divide line.
A drainage basin's boundaries are determined by watershed delineation, a common task in environmental engineering and science.
In a closed drainage basin, or endorheic basin, rather than flowing to the ocean, water converges toward the interior of the basin, known as a sink, which may be a permanent lake, a dry lake, or a point where surface water is lost underground.
Drainage basins are similar but not identical to hydrologic units, which are drainage areas delineated so as to nest into a multi-level hierarchical drainage system. Hydrologic units are defined to allow multiple inlets, outlets, or sinks. In a strict sense, all drainage basins are hydrologic units but not all hydrologic units are drainage basins.
Major drainage basins of the world
Ocean basins
About 48.71% of the world's land drains to the Atlantic Ocean. In North America, surface water drains to the Atlantic via the Saint Lawrence River and Great Lakes basins, the Eastern Seaboard of the United States, the Canadian Maritimes, and most of Newfoundland and Labrador. Nearly all of South America east of the Andes also drains to the Atlantic, as does most of Western and Central Europe and the greatest portion of western Sub-Saharan Africa, as well as Western Sahara and part of Morocco.
The two major mediterranean seas of the world also flow to the Atlantic. The Caribbean Sea and Gulf of Mexico basin includes most of the U.S. interior between the Appalachian and Rocky Mountains, a small part of the Canadian provinces of Alberta and Saskatchewan, eastern Central America, the islands of the Caribbean and the Gulf, and a small part of northern South America. The Mediterranean Sea basin, with the Black Sea, includes much of North Africa, east-central Africa (through the Nile River), Southern, Central, and Eastern Europe, Turkey, and the coastal areas of Israel, Lebanon, and Syria.
The Arctic Ocean drains most of Western Canada and Northern Canada east of the Continental Divide, northern Alaska and parts of North Dakota, South Dakota, Minnesota, and Montana in the United States, the north shore of the Scandinavian peninsula in Europe, central and northern Russia, and parts of Kazakhstan and Mongolia in Asia, which totals to about 17% of the world's land.
Just over 13% of the land in the world drains to the Pacific Ocean. Its basin includes much of China, eastern and southeastern Russia, Japan, the Korean Peninsula, most of Indochina, Indonesia and Malaysia, the Philippines, all of the Pacific Islands, the northeast coast of Australia, and Canada and the United States west of the Continental Divide (including most of Alaska), as well as western Central America and South America west of the Andes.
The Indian Ocean's drainage basin also comprises about 13% of Earth's land. It drains the eastern coast of Africa, the coasts of the Red Sea and the Persian Gulf, the Indian subcontinent, Burma, and most parts of Australia.
Largest river basins
The five largest river basins (by area), from largest to smallest, are those of the Amazon (7 million km²), the Congo (4 million km²), the Nile (3.4 million km²), the Mississippi (3.22 million km²), and the (3.17 million km²). The three rivers that drain the most water, from most to least, are the Amazon, Ganges, and Congo rivers.
Endorheic drainage basins
Endorheic basins are inland basins that do not drain to an ocean. Endorheic basins cover around 18% of the Earth's land. Some endorheic basins drain to an endorheic lake or inland sea. Many of these lakes are ephemeral or vary dramatically in size depending on climate and inflow. If water evaporates or infiltrates into the ground at its terminus, the area can go by several names, such as playa, salt flat, dry lake, or alkali sink.
The largest endorheic basins are in Central Asia, including the Caspian Sea, the Aral Sea, and numerous smaller lakes. Other endorheic regions include the Great Basin in the United States, much of the Sahara Desert, the drainage basin of the Okavango River (Kalahari Basin), highlands near the African Great Lakes, the interiors of Australia and the Arabian Peninsula, and parts in Mexico and the Andes. Some of these, such as the Great Basin, are not single drainage basins but collections of separate, adjacent closed basins.
In endorheic bodies of water where evaporation is the primary means of water loss, the water is typically more saline than the oceans. An extreme example of this is the Dead Sea.
Importance
Geopolitical boundaries
Drainage basins have been historically important for determining territorial boundaries, particularly in regions where trade by water has been important. For example, the English crown gave the Hudson's Bay Company a monopoly on the fur trade in the entire Hudson Bay basin, an area called Rupert's Land. Bioregional political organization today includes agreements of states (e.g., international treaties and, within the US, interstate compacts) or other political entities in a particular drainage basin to manage the body or bodies of water into which it drains. Examples of such interstate compacts are the Great Lakes Commission and the Tahoe Regional Planning Agency.
Hydrology
In hydrology, the drainage basin is a logical unit of focus for studying the movement of water within the hydrological cycle. The process of finding a drainage boundary is referred to as watershed delineation. Finding the area and extent of a drainage basin is an important step in many areas of science and engineering.
Most of the water that discharges from the basin outlet originated as precipitation falling on the basin. A portion of the water that enters the groundwater system beneath the drainage basin may flow towards the outlet of another drainage basin, because groundwater flow directions do not always match those of the overlying drainage network. Measurement of the discharge of water from a basin may be made by a stream gauge located at the basin's outlet. As rainfall occurs, some of it seeps directly into the ground, depending on the conditions of the drainage basin. This water will either remain in the shallow subsurface, slowly making its way downhill and eventually reaching the basin's streams, or it will percolate deeper into the soil and recharge groundwater aquifers.
As water flows through the basin, it can form tributaries whose arrangement, or drainage pattern, is shaped by the rocks and ground underneath. Rock that erodes quickly forms dendritic patterns, which are seen most often; the other two main types are trellis patterns and rectangular patterns.
Rain gauge data are used to estimate total precipitation over a drainage basin, and there are different ways to interpret those data. In the unlikely event that the gauges are numerous and evenly distributed over an area of uniform precipitation, the arithmetic mean method gives good results. In the Thiessen polygon method, the drainage basin is divided into polygons, with the rain gauge in the middle of each polygon assumed to be representative of the rainfall on the area of land included in its polygon. These polygons are made by drawing lines between gauges, then taking the perpendicular bisectors of those lines to form the polygon boundaries. In the isohyetal method, contours of equal precipitation (isohyets) are drawn over the gauges on a map; calculating the area between these curves and adding up the volume of water is time-consuming.
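The weighting idea behind the Thiessen polygon method can be sketched in a few lines of code. The gauge readings, polygon areas, and the `thiessen_mean` helper below are all hypothetical; the sketch only shows that each gauge's rainfall is weighted by the share of the basin area lying in its polygon, in contrast to the unweighted arithmetic mean.

```python
# Sketch: mean areal precipitation over a basin, assuming the Thiessen
# polygons have already been delineated and their areas measured.
# All gauge values and areas below are illustrative, not real data.

def thiessen_mean(rain_mm, areas_km2):
    """Area-weighted mean rainfall: sum(P_i * A_i) / sum(A_i)."""
    if len(rain_mm) != len(areas_km2):
        raise ValueError("one polygon area per gauge is required")
    total_area = sum(areas_km2)
    return sum(p * a for p, a in zip(rain_mm, areas_km2)) / total_area

# Three hypothetical gauges and the areas of their Thiessen polygons.
rain = [12.0, 20.0, 8.0]    # mm recorded at each gauge
areas = [30.0, 50.0, 20.0]  # km^2 of basin closest to each gauge

arithmetic = sum(rain) / len(rain)     # ignores gauge spacing
weighted = thiessen_mean(rain, areas)  # weights by polygon area

print(round(arithmetic, 2))  # 13.33
print(round(weighted, 2))    # 15.2
```

Because the wettest gauge here covers half the basin, the weighted estimate (15.2 mm) is noticeably higher than the plain average (13.33 mm), which is exactly the bias the Thiessen method corrects for when gauges are unevenly spaced.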
Isochrone maps can be used to show the time taken for runoff water within a drainage basin to reach a lake, reservoir or outlet, assuming constant and uniform effective rainfall.
Geomorphology
Drainage basins are the principal hydrologic unit considered in fluvial geomorphology. A drainage basin is the source for water and sediment that moves from higher elevation through the river system to lower elevations as they reshape the channel forms.
Ecology
Drainage basins are important in ecology. As water flows over the ground and along rivers it can pick up nutrients, sediment, and pollutants. With the water, they are transported towards the outlet of the basin, and can affect the ecological processes along the way as well as in the receiving water body.
Modern use of artificial fertilizers, containing nitrogen (as nitrates), phosphorus, and potassium, has affected the mouths of drainage basins. The minerals are carried by the drainage basin to the mouth, and may accumulate there, disturbing the natural mineral balance. This can cause eutrophication where plant growth is accelerated by the additional material.
Resource management
Because drainage basins are coherent entities in a hydrological sense, it has become common to manage water resources on the basis of individual basins. In the U.S. state of Minnesota, governmental entities that perform this function are called "watershed districts". In New Zealand, they are called catchment boards. Comparable community groups based in Ontario, Canada, are called conservation authorities. In North America, this function is referred to as "watershed management".
In Brazil, the National Policy of Water Resources, regulated by Act n° 9.433 of 1997, establishes the drainage basin as the territorial division of Brazilian water management.
When a river basin crosses at least one political border, either a border within a nation or an international boundary, it is identified as a transboundary river. Management of such basins becomes the responsibility of the countries sharing them. The Nile Basin Initiative, the OMVS for the Senegal River, and the Mekong River Commission are a few examples of arrangements involving management of shared river basins.
Management of shared drainage basins is also seen as a way to build lasting peaceful relationships among countries.
Catchment factors
The catchment is the most significant factor determining the amount or likelihood of flooding.
Catchment factors are: topography, shape, size, soil type, and land use (paved or roofed areas). Catchment topography and shape determine the time taken for rain to reach the river, while catchment size, soil type, and development determine the amount of water to reach the river.
Topography
Generally, topography plays a big part in how fast runoff will reach a river. Rain that falls in steep mountainous areas will reach the primary river in the drainage basin faster than rain falling on flat or gently sloping areas (e.g., < 1% gradient).
Shape
Shape will contribute to the speed with which the runoff reaches a river. A long thin catchment will take longer to drain than a circular catchment.
Size
Size will help determine the amount of water reaching the river: the larger the catchment, the greater the potential for flooding. Size is determined on the basis of the length and width of the drainage basin.
Soil type
Soil type will help determine how much water reaches the river. The runoff from the drainage area is dependent on the soil type. Certain soil types such as sandy soils are very free-draining, and rainfall on sandy soil is likely to be absorbed by the ground. However, soils containing clay can be almost impermeable and therefore rainfall on clay soils will run off and contribute to flood volumes. After prolonged rainfall even free-draining soils can become saturated, meaning that any further rainfall will reach the river rather than being absorbed by the ground. If the surface is impermeable the precipitation will create surface run-off which will lead to higher risk of flooding; if the ground is permeable, the precipitation will infiltrate the soil.
Land use
Land use can contribute to the volume of water reaching the river, in a similar way to clay soils. For example, rainfall on roofs, pavements, and roads will be collected by rivers with almost no absorption into the groundwater.
| Physical sciences | Hydrology | null |
191163 | https://en.wikipedia.org/wiki/Chytridiomycota | Chytridiomycota | Chytridiomycota are a division of zoosporic organisms in the kingdom Fungi, informally known as chytrids. The name is derived from the Ancient Greek word meaning "little pot", describing the structure containing unreleased zoospores. Chytrids are one of the earliest diverging fungal lineages, and their membership in kingdom Fungi is demonstrated with chitin cell walls, a posterior whiplash flagellum, absorptive nutrition, use of glycogen as an energy storage compound, and synthesis of lysine by the α-aminoadipic acid (AAA) pathway.
Chytrids are saprobic, degrading refractory materials such as chitin and keratin, and sometimes act as parasites. There has been a significant increase in the research of chytrids since the discovery of Batrachochytrium dendrobatidis, the causal agent of chytridiomycosis.
Classification
Species of Chytridiomycota have traditionally been delineated and classified based on development, morphology, substrate, and method of zoospore discharge. However, single spore isolates (or isogenic lines) display a great amount of variation in many of these features; thus, these features cannot be used to reliably classify or identify a species. Currently, taxonomy in Chytridiomycota is based on molecular data, zoospore ultrastructure and some aspects of thallus morphology and development.
In an older and more restricted sense (not used here), the term "chytrids" referred just to those fungi in the class Chytridiomycetes. Here, the term "chytrid" refers to all members of Chytridiomycota.
The chytrids have also been included among the Protoctista, but are now regularly classed as fungi.
In older classifications, chytrids, except the recently established order Spizellomycetales, were placed in the class Phycomycetes under the subphylum Myxomycophyta of the kingdom Fungi. Previously, they were placed in the Mastigomycotina as the class Chytridiomycetes. The other classes of the Mastigomycotina, the Hyphochytriomycetes and oomycetes, were removed from the fungi to be classified as heterokont pseudofungi.
The class Chytridiomycetes has over 750 chytrid species distributed among ten orders. Additional classes include the Monoblepharidomycetes, with two orders, and the Hyaloraphidiomycetes with a single order.
Molecular phylogenetics and other techniques, such as ultrastructure analysis, have greatly increased the understanding of chytrid phylogeny and led to the formation of several new zoosporic fungal phyla:
The order Blastocladiales, originally within the Chytridiomycota, are now classified as a separate phylum, the Blastocladiomycota.
The Neocallimastigales, originally an order of anaerobic fungi of the class Chytridiomycetes, found in the digestive tracts of herbivores, was later raised to a separate phylum, the Neocallimastigomycota.
The Olpidiaceae, including the type genus Olpidium, formerly classified in the order Chytridiales, were raised to a separate phylum, the Olpidiomycota.
Morphology
Life cycle
Chytridiomycota are unusual among the Fungi in that they reproduce with zoospores. For most members of Chytridiomycota, sexual reproduction is not known. Asexual reproduction occurs through the release of zoospores (presumably) derived through mitosis.
Where it has been described, sexual reproduction of chytrids occurs via a variety of methods. It is generally accepted that the resulting zygote forms a resting spore, which functions as a means of surviving adverse conditions. In some members, sexual reproduction is achieved through the fusion of isogametes (gametes of the same size and shape). This group includes the notable plant pathogen genus Synchytrium. Some algal parasites practice oogamy: a motile male gamete attaches itself to a nonmotile structure containing the female gamete. In another group, two thalli produce tubes that fuse and allow the gametes to meet and fuse. In the last group, rhizoids of compatible strains meet and fuse. Both nuclei migrate out of the zoosporangium and into the conjoined rhizoids, where they fuse. The resulting zygote germinates into a resting spore.
Sexual reproduction is common and well known among members of the Monoblepharidomycetes. Typically, these chytrids practice a version of oogamy: The male is motile and the female is stationary. This is the first occurrence of oogamy in kingdom Fungi. Briefly, the monoblephs form oogonia, which give rise to eggs, and antheridia, which give rise to male gametes. Once fertilized, the zygote either becomes an encysted or motile oospore, which ultimately becomes a resting spore that will later germinate and give rise to new zoosporangia.
Upon release from the germinated resting spore, zoospores seek out a suitable substrate for growth using chemotaxis or phototaxis. Some species encyst and germinate directly upon the substrate; others encyst and germinate a short distance away. Once germinated, enzymes released from the zoospore begin to break down the substrate and utilize it to produce a new thallus. Thalli are coenocytic and usually form no true mycelium (having rhizoids instead).
Chytrids have several different growth patterns. Some are holocarpic, which means they only produce a zoosporangium and zoospores. Others are eucarpic, meaning they produce other structures, such as rhizoids, in addition to the zoosporangium and zoospores. Some chytrids are monocentric, meaning a single zoospore gives rise to a single zoosporangium. Others are polycentric, meaning one zoospore gives rise to many zoosporangia connected by a rhizomycelium. Rhizoids do not contain nuclei, while a rhizomycelium can.
Growth continues until a new batch of zoospores is ready for release. Chytrids have a diverse set of release mechanisms that can be grouped into the broad categories of operculate or inoperculate. Operculate discharge involves the complete or incomplete detachment of a lid-like structure, called an operculum, allowing the zoospores out of the sporangium. Inoperculate chytrids release their zoospores through pores, slits, or papillae.
Habitats
Chytrids are aquatic fungi, though those that thrive in the capillary network around soil particles are typically considered terrestrial. The zoospore is primarily a means of thoroughly exploring a small volume of water for a suitable substrate rather than a means of long-range dispersal.
Chytrids have been isolated from a variety of aquatic habitats, including peats, bogs, rivers, ponds, springs, and ditches, and terrestrial habitats, such as acidic soils, alkaline soils, temperate forest soils, rainforest soils, and Arctic and Antarctic soils. This has led to the belief that many chytrid species are ubiquitous and cosmopolitan. However, recent taxonomic work has demonstrated that these supposedly ubiquitous and cosmopolitan morphospecies hide cryptic diversity at the genetic and ultrastructural levels. It was first thought that aquatic chytrids (and other zoosporic fungi) were primarily active in fall, winter, and spring. However, recent molecular inventories of lakes during the summer indicate that chytrids are an active, diverse part of the eukaryotic microbial community.
One of the least expected terrestrial environments in which chytrids thrive is periglacial soil. Despite the lack of plant life in these frozen regions, Chytridiomycota populations are supported there by the large amounts of water in periglacial soil and by pollen blowing up from below the timberline.
Ecological functions
Batrachochytrium dendrobatidis
The chytrid Batrachochytrium dendrobatidis is responsible for chytridiomycosis, a disease of amphibians. Discovered in 1998 in Australia and Panama, this disease is known to kill amphibians in large numbers and has been suggested as a principal cause of the worldwide amphibian decline. Outbreaks of the fungus were found responsible for killing much of the Kihansi Spray Toad population in its native habitat of Tanzania, as well as the extinction of the golden toad in 1989. Chytridiomycosis has also been implicated in the presumed extinction of the Southern Gastric Brooding Frog, last seen in the wild in 1981, and the Northern Gastric Brooding Frog, last recorded in the wild in March 1985. The process leading to frog mortality is thought to be the loss of essential ions through pores made in the epidermal cells by the chytrid during its replication.
Recent research has revealed that elevating salt levels slightly may be able to cure chytridiomycosis in some Australian frog species, although further experimentation is needed.
Other parasites
Chytrids mainly infect algae and other eukaryotic and prokaryotic microbes. The infection can be so severe as to control primary production within the lake. It has been suggested that parasitic chytrids have a large effect on lake and pond food webs. Chytrids may also infect plant species; in particular, Synchytrium endobioticum is an important potato pathogen.
Saprobes
Arguably, the most important ecological function chytrids perform is decomposition. These ubiquitous and cosmopolitan organisms are responsible for decomposition of refractory materials, such as pollen, cellulose, chitin, and keratin.
There are also chytrids that live and grow on pollen by attaching threadlike structures, called rhizoids, onto the pollen grains. This mostly occurs during asexual reproduction because the zoospores that become attached to the pollen continuously reproduce and form new chytrids that will attach to other pollen grains for nutrients. This colonization of pollen happens during the spring time when bodies of water accumulate pollen falling from trees and plants.
Fossil record
The earliest fossils of chytrids are from the Scottish Rhynie chert, a Devonian-age lagerstätte with anatomical preservation of plants and fungi. Among the microfossils are chytrids preserved as parasites on rhyniophytes. These fossils closely resemble the modern genus Allomyces. Holocarpic chytrid remains were found in cherts from Combres in central France that date back to the late Visean. These remains were found along with eucarpic remains and are ambiguous in nature, although they are thought to be chytrids. Other chytrid-like fossils were found in cherts from the upper Pennsylvanian in the Saint-Étienne Basin in France, dating to between 300 and 350 Ma.
In fictional media
The novel Tom Clancy's Splinter Cell: Fallout (2007) features a species of chytrid that feeds on petroleum and oil-based products. In the story the species is modified using nuclear radiation to increase the rate at which it feeds on oil. It is then used by Islamic extremists in an attempt to destroy the world's oil supplies, thereby taking away the technological advantage of the United States.
191164 | https://en.wikipedia.org/wiki/Zygomycota | Zygomycota | Zygomycota, or zygote fungi, is a former division or phylum of the kingdom Fungi. The members are now part of two phyla: the Mucoromycota and Zoopagomycota. Approximately 1060 species are known. They are mostly terrestrial in habitat, living in soil or on decaying plant or animal material. Some are parasites of plants, insects, and small animals, while others form symbiotic relationships with plants. Zygomycete hyphae may be coenocytic, forming septa only where gametes are formed or to wall off dead hyphae. Zygomycota is no longer recognised, as it is not believed to be truly monophyletic.
Etymology
The name Zygomycota refers to the zygosporangia characteristically formed by the members of this clade, in which resistant spherical spores are formed during sexual reproduction. Zygos is Greek for "joining" or "a yoke", referring to the fusion of two hyphal strands which produces these spores, and -mycota is a suffix referring to a division of fungi.
Spores
The term "spore" is used to describe a structure related to propagation and dispersal. Zygomycete spores can be formed through both sexual and asexual means. Before germination the spore is in a dormant state, during which the metabolic rate is very low; dormancy may last from a few hours to many years. There are two types of dormancy. Exogenous dormancy is controlled by environmental factors such as temperature or nutrient availability. Endogenous (constitutive) dormancy depends on characteristics of the spore itself, for example its metabolic features. In this type of dormancy, germination may be prevented even if the environmental conditions favor growth.
Mitospores
In zygomycetes, mitospores (sporangiospores) are formed asexually. They are formed in specialized structures, the mitosporangia (sporangia), which contain a few to several thousand spores, depending on the species. Mitosporangia are carried by specialized hyphae, the mitosporangiophores (sporangiophores). These specialized hyphae usually show negative gravitropism and positive phototropism, allowing good spore dispersal. The sporangium wall is thin and is easily destroyed by mechanical stimuli (e.g. falling raindrops, passing animals), leading to the dispersal of the ripe mitospores. The walls of these spores contain sporopollenin in some species. Sporopollenin is formed from β-carotene and is very resistant to biological and chemical degradation.
Zygomycete spores may also be classified according to their persistence:
Chlamydospores
Chlamydospores are asexual spores different from sporangiospores. Their primary function is the persistence of the mycelium, and they are released when the mycelium degrades. Chlamydospores have no mechanism for dispersal. In zygomycetes the formation of chlamydospores is usually intercalary; however, it may also be terminal. In accordance with their function, chlamydospores have a thick cell wall and are pigmented.
Zygophores
Zygophores are chemotropic aerial hyphae that are the sex organs of zygomycota, except for Phycomyces in which they are not aerial but found in the substratum. They have two different mating types (+) and (-). The opposite mating types grow towards each other due to volatile pheromones given off by the opposite strand, mainly trisporic acid and its precursors. Once two opposite mating types have made initial contact, they give rise to a zygospore through multiple steps.
Once contact between the zygophores has been made, their walls adhere to each other and flatten, and the contact site is then referred to as the fusion septum. The tips of the zygophores become distended and form what are called the progametangia. A septum develops by gradual inward extension until it separates the terminal gametangia from the progametangial base. At this point the zygophore is called the suspensor. Vesicles accumulate at the fusion septum, at which time it begins to dissolve. Shortly before the fusion septum completely dissolves, the primary outer wall begins to thicken. This can be seen as dark patches on the primary wall as the fusion septum dissolves. These dark patches will eventually develop into the warty structures that make up the thickness of the zygospore wall. As the zygospore enlarges, so do the warty structures, until they are contiguous around the entire cell. At this point, electron microscopy can no longer penetrate the wall. Eventually the warts push through the primary wall and darken, which is likely caused by melanin.
Meiosis usually occurs before zygospore germination, and there are a few main types of distinguishable nuclear behavior. In type 1, the nuclei fuse quickly, within a few days, resulting in a mature zygospore with haploid nuclei. In type 2, some nuclei do not pair and degenerate instead; meiosis is delayed until germination. In type 3, haploid nuclei continue to divide mitotically and then some associate into groups while some do not. This results in diploid and haploid nuclei being found in the germ sporangium.
Cell wall
Zygomycetes exhibit a distinctive cell wall structure. Most fungi have chitin as their structural polysaccharide, while zygomycetes synthesize chitosan, the deacetylated homopolymer of chitin. Chitin is built of β-1,4 bonded N-acetyl glucosamine. Fungal hyphae grow at the tip. Therefore, specialized vesicles, the chitosomes, bring precursors of chitin and its synthesizing enzyme, chitin synthetase, to the outside of the membrane by exocytosis. The enzyme on the membrane catalyzes glycosidic bond formation from the nucleotide sugar substrate, uridine diphospho-N-acetyl-D-glucosamine. The nascent polysaccharide chain is then deacetylated by the enzyme chitin deacetylase, which catalyzes the hydrolytic cleavage of the N-acetamido group in chitin. After this the chitosan polymer chain forms microfibrils. These fibers are embedded in an amorphous matrix consisting of proteins, glucans (which putatively cross-link the chitosan fibers), mannoproteins, lipids and other compounds.
Trisporic acid
Trisporic acid is a C-18 terpenoid compound that is synthesized via β-carotene and retinol pathways in the zygomycetes. It is a pheromone compound responsible for sexual differentiation in those fungal species.
History
Trisporic acid was discovered in 1964 as a metabolite that caused enhanced carotene production in Blakeslea trispora. It was later shown to be the hormone that brought about zygophore production in Mucor mucedo. The American mycologist and geneticist Albert Francis Blakeslee discovered that some species of Mucorales were self-sterile (heterothallic), in which interactions of two strains, designated (+) and (-), are necessary for the initiation of sexual activity. This interaction was found by Hans Burgeff of the University of Goettingen to be due to the exchange of low molecular weight substances that diffused through the substratum and atmosphere. This work constituted the first demonstration of sex hormone activity in any fungus. The elucidation of the hormonal control of sexual interaction in the Mucorales extended over 60 years and involved mycologists and biochemists from Germany, Italy, the Netherlands, the UK and the USA.
Functions of trisporic acid in Mucorales
Recognition of compatible sexual partners in Zygomycota is based on a cooperative biosynthesis pathway of trisporic acid. Early trisporoid derivatives and trisporic acid induce swelling of two potential hyphae, hence called zygophores, and a chemical gradient of these inducer molecules results in growth towards each other. These progametangia come into contact with each other and build a strong connection. In the next stage, septa are established to delimit the developing zygospore from the vegetative mycelium; in this way the zygophores become suspensor hyphae and the gametangia are formed. After dissolution of the fusion wall, the cytoplasm and a large number of nuclei from both gametangia are mixed. A selection process (as yet unstudied) results in a reduction of nuclei, and meiosis takes place (also still unstudied). Several cell wall modifications, as well as incorporation of sporopollenin (responsible for the dark colour of spores), take place, resulting in a mature zygospore.
Trisporic acid, as the endpoint of this recognition pathway, can only be produced in the presence of both compatible partners, which enzymatically produce trisporoid precursors to be further utilized by the potential sexual partner. Species specificity of these reactions is obtained by, among other things, spatial segregation, physicochemical features of derivatives (volatility and light sensitivity), chemical modifications of trisporoids, and transcriptional/post-transcriptional regulation.
Parasexualism
Trisporoids are also used in the mediation of the recognition between parasite and host. An example is the host-parasite interaction of a parasexual nature observed between Parasitella parasitica, a facultative mycoparasite of zygomycetes, and Absidia glauca. This interaction is an example of biotrophic fusion parasitism, because genetic information is transferred into the host. Many morphological similarities to zygospore formation are seen, but the mature spore is called a sikyospore and is parasitic. During this process, gall-like structures are produced by the host, Absidia glauca.
This, coupled with further evidence, has led to the assumption that trisporoids are not strictly species-specific, but that they might represent the general principle of mating recognition in Mucorales.
Phototropism
Light regulation has been investigated in the zygomycetes Phycomyces blakesleeanus, Mucor circinelloides and Pilobolus crystallinus. For example, in Pilobolus crystallinus light is responsible for the dispersal mechanism and the sporangiophores of Phycomyces blakesleeanus grow towards light. When light, particularly blue light, is involved in the regulation of fungal development, it directs the growth of fungal structures and activates metabolic pathways. For instance, the zygomycota use light as signal to promote vegetative reproduction and growth of aerial hyphae to facilitate spore dispersal.
Fungal phototropism has been investigated in detail using the fruiting body, sporangiophore, of Phycomyces as a model. Phycomyces has a complex photoreceptor system. It is able to react to different light intensities and different wavelengths. In contrast to the positive reaction to blue light, there is also a negative reaction to UV light. Reactions to red light were also observed.
Activation of beta-carotene biosynthesis by light
The two genes for the enzymes phytoene desaturase (carB) and the bifunctional phytoene synthase/carotene cyclase (carRA in Phycomyces, carRP in Mucor) are responsible for synthesis of beta-carotene. The product of the gene crgA, which was found in Mucor, suppresses carotene formation by inhibiting the accumulation of carB and carRP mRNAs.
Influence of light in sporulation and sexual development
The zygomycete P. blakesleeanus builds two types of sporangiophores, the macrophores and the microphores, which differ in size. The formation of these sporangiophores is triggered at different light fluences and therefore by specific photoreceptors. Light also regulates asexual sporulation. In Mucor the product of the crgA gene acts as an activator. In contrast, the sexual development of Phycomyces is inhibited by light because of a specialized photoreceptor system.
Gravitropism
Gravitropism is a turning or growth movement by a plant or fungus in response to gravity. It is equally widespread in both kingdoms. Statoliths are required in both fungi and plants for the mechanism of gravity-sensing. The Zygomycota sporangiophores originate from specialized “basal hyphae” and pass through several distinctive developmental stages until the mature asexual spores are released. In addition to the positive phototropism, the sporangiophores are directed by a negative gravitropic response into a position suitable for spore dispersal and distribution. Both responses are growth reactions, i.e. the bending is caused by differential growth on the respective opposite flanks of the sporangiophore, and they influence each other. The only model for the mechanism of the gravitropic reaction of Phycomyces is based on the floatability of the vacuole within the surrounding cytoplasm. The resulting asymmetric distribution of the cytoplasm is proposed to generate increased wall growth on the lower side of horizontally placed sporangiophores, as in the thicker cytoplasmic layer forming there the number of vesicles secreting cell-wall material would be higher than on the upper side. Gravitropic bending starts after approximately 15–30 min in horizontally placed sporangiophores and continues until, after approximately 12–14 hours, the sporangiophore tip has recovered its original vertical position. Usually, the gravitropic response is weaker than the phototropic one. However, under certain conditions, equilibrium can be established and the responses are comparable. In plants and fungi, phototropism and gravitropism interact in a complex manner. During continuous irradiation with unilateral light, the sporangiophore (fruiting body) of the zygomycete fungus Phycomyces blakesleeanus reaches a bending angle of photogravitropic equilibrium at which the gravitropic and phototropic stimuli balance each other (Fig. 1, bending angle +α, due to light irradiation).
Protein crystals involved in graviperception
In Phycomyces blakesleeanus, wild type sporangiophores contain large, easily seen octahedral paracrystalline crystals up to 5×5×5 μm in size. Generally, they are found near the main vacuole in clusters consisting of more than ten crystals. They are often associated with the vacuolar transepts. Sedimentation at a speed of about 100 μm/s can be observed when the sporangiophores are tilted. Sliding along during sedimentation, or pulling at the vacuolar membranes and transepts, serves as an intracellular signal to a probable cytoskeleton response, which activates receptors located in the cell membrane. These receptors in turn trigger a chain of events that finally leads to the asymmetrical growth of the cell wall. Studies of the bending angle of wild type and mutant strain sporangiophore growth have shown that mutant strains lacking crystals exhibit a reduced gravitropic response.
Lipid droplets involved in graviperception
Complexes of apical lipid globules are also involved in graviperception. These lipids are clustered in cellular structures, complexes of lipid globules, about 0.1 mm below the very tip of the apex (Fig. 2). The globules migrate to the columella when the sporangium is formed. In the mature stage this complex is believed to act as a gravireceptor due to its floatability. Mutants that lack this lipid complex show a greatly lowered gravitropic response.
Phylogeny
Historically, all fungi producing a zygospore were considered to be related and placed in Zygomycota. The use of molecular phylogenetics has increasingly revealed this grouping to be paraphyletic. However, the rank (i.e., phylum or subphylum) of these clades is in dispute. What follows is a phylogeny of fungi with the zygomycete subphyla derived from Spatafora et al. (2016), with both possible phylum names.
Industrial uses
Many species of zygomycetes are used in important industrial processes. A summary of them is presented in the table.
Culture conditions
The zygomycetes are able to grow in a wide range of environments. Most of them are mesophilic (growing at 10–40 °C with an optimum 20–35 °C), but some, like Mucor miehei or Mucor pusillus, are thermophilic with a minimum growth temperature of about 20 °C and maximum extending up to 60 °C. Others like Mucor hiemalis can grow at temperatures below 0 °C.
Some species of the order Mucorales are able to grow under anaerobic conditions, while most of them require aerobic conditions. Furthermore, while the majority of the zygomycetes only grow at high water activities, some of them are able to grow in salt concentrations of at least 15%. Most species of Mucor grow rapidly on agar at room temperature, filling the Petri dish in 2–3 days with their coarse aerial mycelium. When incubated in liquid culture under semi-anaerobic conditions, several species grow in a yeast-like state. Zygospore formation may be stimulated at higher incubation temperatures (30–40 °C).
Growth of Zygomycota on solid agar can produce low or very tall fibrous colonies that rapidly fill the entire Petri dish. Their color may range from pure white to shades of gray or brown. In old cultures, dark pigmented sporangia are observed. Everything depends on the species and the medium used. In liquid culture, Zygomycota usually form a bland mass and do not produce spores, because they cannot grow aerial hyphae.
Culture media
Zygomycetes grow well on most standard fungal culture media, such as Sabouraud dextrose agar. They can also grow on both selective and non-selective media. Minimal media, supplementary media and induction media can also be used. Most zygomycetes are sensitive to cycloheximide (actidione), and this agent should not be used in culture media.
Reproduction
A common example of a zygomycete is black bread mold (Rhizopus stolonifer), a member of the Mucorales. It spreads over the surface of bread and other food sources, sending hyphae inward to absorb nutrients. In its asexual phase it develops bulbous black sporangia at the tips of upright hyphae, each containing hundreds of haploid spores.
As in most zygomycetes, asexual reproduction is the most common form of reproduction. Sexual reproduction in Rhizopus stolonifer, as in other zygomycetes, occurs when haploid hyphae of different mating types are in close proximity to each other. Growth of the gametangia commences after the gametangia come into contact, and plasmogamy, the fusion of the cytoplasm, occurs. Karyogamy, the fusion of the nuclei, follows closely after. The zygosporangia are then diploid. Zygosporangia are typically thick-walled, highly resilient to environmental hardships, and metabolically inert. When conditions improve, however, they germinate to produce a sporangium or vegetative hyphae. Meiosis occurs during germination of the zygosporangium, so the resulting spores or hyphae are haploid. The mold grows in warm and damp conditions.
Some zygomycetes disperse their spores in a more precise manner than simply allowing them to drift aimlessly on air currents. Pilobolus, a fungus which grows on animal dung, bends its sporangiophores towards light with the help of a light sensitive pigment (beta-carotene) and then "fires" them with an explosive squirt of high-pressure cytoplasm. Sporangia can be launched as far as 2 m, placing them far away from the dung and hopefully on vegetation which will be eaten by an herbivore, eventually to be deposited with dung elsewhere. Different mechanisms for forcible spore discharge have evolved among members of the zygomycete order Entomophthorales.
Evolution of conidia
The evolution of the conidium from the sporangiospore is the main defining difference between zygomycetes and ascomycetes. The evolution of sporangiospores typical of zygomycetes to conidia similar to those found in ascomycetes can be modeled by a series of forms seen in zygomycetes. Many zygomycetes produce multiple sporangiospores inside a single sporangium. Some have evolved multiple small sporangiola that contain few sporangiospores. In some cases, there may be as few as three spores in each sporangiolum, and a few species have sporangiola that contain just a single spore. Choanephora, a zygomycete, has a sporangiolum that contains one spore with a sporangium wall that is visible at the base of the sporangium. This structure is similar to a conidium, which has two fused cell walls, an inner spore wall and an outer sporangium wall.
191214 | https://en.wikipedia.org/wiki/Mandarin%20orange | Mandarin orange | A mandarin orange (Citrus reticulata), often simply called mandarin, is a small, rounded citrus tree fruit. Treated as a distinct species of orange, it is usually eaten plain or in fruit salads. The mandarin is small and oblate, unlike the roughly spherical sweet orange (which is a mandarin-pomelo hybrid). The taste is sweeter and stronger than the common orange. A ripe mandarin orange is firm to slightly soft, heavy for its size, and pebbly-skinned. The peel is thin and loose, with little white mesocarp, so they are usually easier to peel and to split into segments. Hybrids have these traits to lesser degrees. The mandarin orange is tender and is damaged easily by cold. It can be grown in tropical and subtropical areas.
According to genetic studies, the wild mandarin was one of the original citrus species; through breeding or natural hybridization, it is the ancestor of many hybrid citrus cultivars. With the citron and pomelo, it is the ancestor of the most commercially important hybrids (such as sweet and sour oranges, grapefruit, and many lemons and limes). Though the ancestral mandarin orange was bitter, most commercial mandarin strains derive from hybridization with the pomelo, which gives them sweet fruit.
Etymology
The name mandarin orange is a calque of Swedish mandarin apelsin [apelsin from German Apfelsine (Apfel + Sina), meaning Chinese apple], first attested in the 18th century. The Imperial Chinese term "mandarine" was first adopted by the French for this fruit. The reason for the epithet is not clear.
Citrus reticulata is from Latin, where reticulata means "netted".
Description
Tree
Citrus reticulata is a moderate-sized tree some in height. The tree trunk and major branches have thorns. The leaves are shiny, green, and rather small. The petioles are short, almost wingless or slightly winged. The flowers are borne singly or in small groups in the leaf-axils. Citrus are usually self-fertile (needing only a bee to move pollen within the same flower) or parthenocarpic (not needing pollination and therefore seedless, such as the satsuma). A mature mandarin tree can yield up to of fruit.
Fruit
Mandarin orange fruits are small. Their color is orange, yellow-orange, or red-orange. The skin is thin and peels off easily. Their ease of peeling is an important advantage of mandarin oranges over other citrus fruits. As with other citrus fruits, the mandarin separates easily into segments. The fruits may be seedless or contain a small number of seeds. Though the ancestral mandarin orange was bitter, most commercial mandarin strains derive from hybridization with pomelo, which gives them sweet fruit. They can be eaten whole or squeezed to make juice. A ripe mandarin orange is firm to slightly soft, heavy for its size, and pebbly-skinned. The peel is thin and loose, with little white mesocarp, so they are easy to peel and to split into segments.
Evolution
Origins
The wild mandarin is one of the pure ancestral citrus taxa; they evolved in a restricted region of South China and Vietnam.
Domestication
Mandarins appear to have been domesticated at least twice, in the north and south Nanling Mountains, derived from separate wild subspecies. Wild mandarins are still found there, including Daoxian mandarines (sometimes given the species name Citrus daoxianensis) as well as some members of the group traditionally called 'Mangshan wild mandarins', a generic grouping for the wild mandarin-like fruit of the Mangshan area that includes both true mandarins (mangshanyeju, the southern subspecies) and the genetically distinct and only distantly-related Mangshanyegan. The wild mandarins were found free of the introgressed pomelo (C. maxima) DNA found in domestic mandarins. Still, they did appear to have small amounts (~1.8%) of introgression from the ichang papeda, which grows wild in the same region.
The Nanling Mountains are home to northern and southern genetic clusters of domestic mandarins that have similar levels of sugars in the fruit compared to their wild relatives but appreciably (in some almost 90-fold) lower levels of citric acid. The clusters display different patterns of pomelo introgression, have different deduced historical population histories, and are most closely related to distinct wild mandarins, suggesting two independent domestications in the north and south. All tested domesticated cultivars belong to one of these two genetic clusters, with varieties such as Nanfengmiju, Kishu and Satsuma from the northern domestication event producing larger, redder fruit, while varieties such as Willowleaf, Dancy, Sunki, Cleopatra, King, and Ponkan belong to the smaller, yellower-fruited southern cluster.
Taxonomy
The Tanaka classification system divided domestic mandarins and similar fruit into numerous species, giving distinct names to cultivars such as willowleaf mandarins (C. deliciosa), satsumas (C. unshiu), and tangerines (C. tangerina). Under the Swingle system, all these are considered to be varieties of a single species, Citrus reticulata. Hodgson represented them as several subgroups: common (C. reticulata), Satsuma, King (C. nobilis), Mediterranean (willowleaf), small-fruited (C. indica, C. tachibana and C. reshni), and mandarin hybrids. In the genomic-based species taxonomy of Ollitrault et al., only pure wild type mandarins would fall under C. reticulata, while the pomelo admixture found in the majority of mandarins would cause them to be classified as varieties of the hybrid bitter orange, C. aurantium.
Genetic analysis is consistent with continental mandarins representing a single species, varying due to hybridization. An island species, Citrus ryukyuensis, which diverged 2 to 3 million years ago when cut off by rising sea levels, was found on Okinawa Island. Its hybridization with the mainland species has produced unique island cultivars in Japan and Taiwan, such as the Tachibana orange, the Shekwasha, and Nanfengmiju. They have some pomelo DNA, like all domesticated mandarins. Northern and southern domesticates contain different pomelo contributions. An 'acidic' group including Sunki and Cleopatra mandarins likewise contains small regions of introgressed pomelo DNA; they are too sour to eat, but are widely used as rootstock and grown for juice. Another group, including some tangerines, satsuma and king mandarins, shows more pomelo contribution. Hybrid mandarins thus fall on a continuum of increasing pomelo contribution with clementines, sweet and sour oranges, and grapefruit.
Production
In 2022, world production of mandarin oranges (combined with tangerines, clementines, and satsumas in reporting to FAOSTAT) was 44.2 million tonnes, led by China with 61% of the global total. Spain produced 1.8 million tonnes in 2022, with Turkey, Egypt, and Morocco as other significant producers.
Uses
Nutrition
A mandarin orange contains 85% water, 13% carbohydrates, and negligible amounts of fat and protein (table). Among micronutrients, only vitamin C is in significant content (32% of the Daily Value) in a 100-gram reference serving, with all other nutrients in low amounts.
Culinary
Mandarins have a stronger and sweeter taste than sweet oranges.
Mandarins are peeled and eaten fresh or used in salads, desserts and main dishes. Fresh mandarins are used in the production of the liqueur Mandarine Napoléon.
The peel is used fresh, whole or as zest, or dried as chenpi. It can be used as a spice for cooking, baking, drinks, or candy. Essential oil from the fresh peel may be used as a flavouring for candy, in gelatins, ice cream, chewing gum, and baked goods. It is used as a flavouring in some liqueurs.
Cultural significance
In North America, mandarins are commonly purchased in 5- or 10-pound boxes, individually wrapped in soft green paper, and given in Christmas stockings. This custom goes back to the 1880s, when Japanese immigrants in Canada and the United States began receiving Japanese mandarin oranges from their families back home as gifts for the New Year. The tradition spread among the non-Japanese population and eastwards across the country: each November harvest, "The oranges were quickly unloaded and shipped east by rail. 'Orange Trains' – trains with boxcars painted orange – alerted everyone along the way that the irresistible oranges from Japan were back again for the holidays. For many, the arrival of Japanese mandarin oranges signalled the beginning of the holiday season." Satsumas were grown in the United States from the early 1900s, but Japan remained a major supplier. U.S. imports of these Japanese oranges were suspended due to hostilities with Japan during World War II. While they were one of the first Japanese goods allowed for export after the end of the war, residual hostility led to the rebranding of these oranges as "Mandarin" oranges instead of "Japanese" oranges. The delivery of the first batch of mandarin oranges from Japan in the port of Vancouver is greeted with a festival that combines Santa Claus and Japanese dancers: young girls dressed in traditional kimono. Historically, the Christmas fruit sold in North America was mostly Dancys, but now it is more often a hybrid. This Japanese tradition merged with European traditions related to the Christmas stocking: Saint Nicholas is said to have put gold coins into the stockings of three poor girls so that they would be able to afford to get married. Sometimes the story is told with gold balls instead of bags of gold, and oranges became a symbolic stand-in for these gold balls and are put in Christmas stockings in Canada. Their use as Christmas gifts probably spread from the Japanese immigrant community.
Mandarin oranges are mentioned in Sinclair Ross' 1942 novel, As for Me and My House, and his 1939 short story, Cornet at Night.
191295 | https://en.wikipedia.org/wiki/Aswan%20Dam | Aswan Dam | The Aswan Dam, or Aswan High Dam, is one of the world's largest embankment dams, which was built across the Nile in Aswan, Egypt, between 1960 and 1970. When it was completed, it was the tallest earthen dam in the world, surpassing the Chatuge Dam in the United States. The dam, which created the Lake Nasser reservoir, was built upstream of the Aswan Low Dam, which had been completed in 1902 and was already at its maximum utilization. Construction of the High Dam became a key objective of the military regime that took power following the 1952 Egyptian Revolution. With its ability to better control flooding, provide increased water storage for irrigation and generate hydroelectricity, the dam was seen as pivotal to Egypt's planned industrialization. Like the earlier implementation, the High Dam has had a significant effect on the economy and culture of Egypt.
Before the High Dam was built, even with the old dam in place, the annual flooding of the Nile during late summer had continued to pass largely unimpeded down the valley from its East African drainage basin. These floods brought high water with natural nutrients and minerals that annually enriched the fertile soil along its floodplain and delta; this predictability had made the Nile valley ideal for farming since ancient times. However, this natural flooding varied, since high-water years could destroy the whole crop, while low-water years could create widespread drought and consequently famine. Both these events had continued to occur periodically. As Egypt's population grew and technology increased, both a desire and the ability developed to completely control the flooding, and thus both protect and support farmland and its economically important cotton crop. With the greatly increased reservoir storage provided by the High Aswan Dam, the floods could be controlled and the water could be stored for later release over multiple years.
The Aswan Dam was designed by Nikolai Aleksandrovich Malyshev of the Moscow-based Hydroproject Institute. Designed for both irrigation and power generation, the dam incorporates a number of relatively new features, including a very deep grout curtain below its base. Although the reservoir will eventually silt in, even the most conservative estimates indicate the dam will give at least 200 years of service.
Construction history
The earliest recorded attempt to build a dam near Aswan was in the 11th century, when the Arab polymath and engineer Ibn al-Haytham (known as Alhazen in the West) was summoned to Egypt by the Fatimid Caliph, Al-Hakim bi-Amr Allah, to regulate the flooding of the Nile, a task requiring an early attempt at an Aswan Dam. His field work convinced him of the impracticality of this scheme.
Aswan Low Dam, 1898–1902
The British began construction of the first dam across the Nile in 1898. Construction lasted until 1902 and the dam was opened on 10 December 1902. The project was designed by Sir William Willcocks and involved several eminent engineers, including Sir Benjamin Baker and Sir John Aird, whose firm, John Aird & Co., was the main contractor.
Aswan High Dam prelude, 1954–1960
In 1952, the Greek-Egyptian engineer Adrian Daninos began to develop the plan of the new Aswan Dam. Although the Low Dam was almost overtopped in 1946, the government of King Farouk showed no interest in Daninos's plans. Instead the Nile Valley Plan by the British hydrologist Harold Edwin Hurst was favored, which proposed to store water in Sudan and Ethiopia, where evaporation is much lower. The Egyptian position changed completely after the overthrow of the monarchy, led by the Free Officers Movement including Gamal Abdel Nasser. The Free Officers were convinced that the Nile Waters had to be stored in Egypt for political reasons, and within two months, the plan of Daninos was accepted. Initially, both the United States and the USSR were interested in helping development of the dam. Complications ensued due to their rivalry during the Cold War, as well as growing intra-Arab tensions.
In 1955, Nasser was claiming to be the leader of Arab nationalism, in opposition to the traditional monarchies, especially the Hashemite Kingdom of Iraq following its signing of the 1955 Baghdad Pact. At that time the U.S. feared that communism would spread to the Middle East, and it saw Nasser as a natural leader of an anticommunist procapitalist Arab League. America and the United Kingdom offered to help finance construction of the High Dam, with a loan of $270 million, in return for Nasser's leadership in resolving the Arab-Israeli conflict. While opposed to communism, capitalism, and imperialism, Nasser identified as a tactical neutralist, and sought to work with both the U.S. and the USSR for Egyptian and Arab benefit. After the UN criticized a raid by Israel against Egyptian forces in Gaza in 1955, Nasser realized that he could not portray himself as the leader of pan-Arab nationalism if he could not defend his country militarily against Israel. In addition to his development plans, he looked to quickly modernize his military, and he turned first to the U.S. for aid.
American Secretary of State John Foster Dulles and President Dwight Eisenhower told Nasser that the U.S. would supply him with weapons only if they were used for defensive purposes and if he accepted American military personnel for supervision and training. Nasser did not accept these conditions, and consulted the USSR for support.
Although Dulles believed that Nasser was only bluffing and that the USSR would not aid Nasser, he was wrong: the USSR promised Nasser a quantity of arms in exchange for a deferred payment of Egyptian grain and cotton. On 27 September 1955, Nasser announced an arms deal, with Czechoslovakia acting as a middleman for the Soviet support. Instead of attacking Nasser for turning to the Soviets, Dulles sought to improve relations with him. In December 1955, the US and the UK pledged $56 and $14 million, respectively, toward construction of the High Aswan Dam.
Though the Czech arms deal created an incentive for the US to invest at Aswan, the UK cited the deal as a reason for repealing its promise of dam funds. Dulles was angered more by Nasser's diplomatic recognition of China, which was in direct conflict with Dulles's policy of containment of communism.
Several other factors contributed to the US deciding to withdraw its offer of funding for the dam. Dulles believed that the USSR would not fulfill its commitment of military aid. He was also irritated by Nasser's neutrality and attempts to play both sides of the Cold War. At the time, other Western allies in the Middle East, including Turkey and Iraq, were resentful that Egypt, a persistently neutral country, was being offered so much aid.
In June 1956, the Soviets offered Nasser $1.12 billion at 2% interest for the construction of the dam. On 19 July the U.S. State Department announced that American financial assistance for the High Dam was "not feasible in present circumstances."
On 26 July 1956, with wide Egyptian acclaim, Nasser announced the nationalization of the Suez Canal that included fair compensation for the former owners. Nasser planned on the revenues generated by the canal to help fund construction of the High Dam. When the Suez War broke out, the United Kingdom, France, and Israel seized the canal and the Sinai. But pressure from the U.S. and the USSR at the United Nations and elsewhere forced them to withdraw.
In 1958, the USSR proceeded to provide support for the High Dam project.
In the 1950s, archaeologists began raising concerns that several major historical sites, including the famous temple of Abu Simbel, were about to be submerged by waters collected behind the dam. A rescue operation began in 1960 under UNESCO (for details see below under Effects).
Despite its size, the Aswan project has not materially hurt the Egyptian balance of payments. The three Soviet credits covered virtually all of the project's foreign exchange requirements, including the cost of technical services, imported power generating and transmission equipment and some imported equipment for land reclamation. Egypt was not seriously burdened by payments on the credits, most of which were extended for 12 years with interest at the very low rate of 2.5%. Repayments to the USSR constituted only a small net drain during the first half of the 1960s, and increased export earnings derived from crops grown on newly reclaimed land have largely offset the modest debt service payments in recent years. During 1965–1970, these export earnings amounted to an estimated $126 million, compared with debt service payments of $113 million.
Construction and filling, 1960–1976
The Soviets also provided technicians and heavy machinery. The enormous rock and clay dam was designed by Nikolai Aleksandrovich Malyshev of the Moscow-based Hydroproject Institute, along with some Egyptian engineers. Some 25,000 Egyptian engineers and workers contributed to the construction of the dams.
Originally designed by West German and French engineers in the early 1950s and slated for financing with Western credits, the Aswan High Dam became the USSR's largest and most famous foreign aid project after the United States, the United Kingdom, and the International Bank for Reconstruction and Development (IBRD) withdrew their support in 1956. The first Soviet loan of $100 million to cover construction of coffer dams for diversion of the Nile was extended in 1958. An additional $225 million was extended in 1960 to complete the dam and construct power-generating facilities, and subsequently about $100 million was made available for land reclamation. These credits of some $425 million covered only the foreign exchange costs of the project, including salaries of Soviet engineers who supervised the project and were responsible for the installation and testing of Soviet equipment. Actual construction, which began in 1960, was done by Egyptian companies on contract to the High Dam Authority, and all domestic costs were borne by the Egyptians. Egyptian participation in the venture has raised the construction industry's capacity and reputation significantly.
On the Egyptian side, the project was led by Osman Ahmed Osman's Arab Contractors. The relatively young Osman underbid his only competitor by one-half.
1960: Start of construction on 9 January
1964: First dam construction stage completed, reservoir started filling
1970: The High Dam, as-Sad al-'Aali, completed on 21 July
1976: Reservoir reached capacity.
Specifications
The Aswan High Dam is long, wide at the base, wide at the crest and tall. It contains of material. At maximum, of water can pass through the dam. There are further emergency spillways for an extra , and the Toshka Canal links the reservoir to the Toshka Depression. The reservoir, named Lake Nasser, is long and at its widest, with a surface area of . It holds of water.
Irrigation scheme
Due to the absence of appreciable rainfall, Egypt's agriculture depends entirely on irrigation. With irrigation, two harvests per year are possible, except for sugar cane, which has a growing period of almost one year.
The high dam at Aswan releases, on average, water per year, of which some are diverted into the irrigation canals.
In the Nile valley and delta, almost benefit from these waters producing on average 1.8 crops per year. The annual crop consumptive use of water is about . Hence, the overall irrigation efficiency is 38/46 = 0.826 or 83%. This is a relatively high irrigation efficiency. The field irrigation efficiencies are much less, but the losses are reused downstream. This continuous reuse accounts for the high overall efficiency.
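The efficiency figure above can be checked with a two-line calculation; the 38 and 46 km3 volumes are taken from the ratio quoted in the text, and this is only a sketch of the arithmetic, not an official water-balance model:

```python
# Overall irrigation efficiency implied by the article's figures.
# Volumes in km^3 per year, from the 38/46 ratio quoted above.
released_to_canals = 46.0    # water diverted into the irrigation canals
crop_consumptive_use = 38.0  # annual consumptive use of water by crops

overall_efficiency = crop_consumptive_use / released_to_canals
print(f"overall irrigation efficiency: {overall_efficiency:.1%}")  # -> 82.6%
```

The field-level efficiencies are lower, but because drainage losses are reused downstream, the system-wide ratio stays this high.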
The following table shows the distribution of irrigation water over the branch canals taking off from the one main irrigation canal, the Mansuriya Canal near Giza.
* Period 1 March to 31 July. 1 feddan is 0.42 ha or about 1 acre.
* Data from the Egyptian Water Use Management Project (EWUP)
The salt concentration of the water in the Aswan reservoir is about , a very low salinity level. At an annual inflow of , the annual salt influx reaches 14 million tons. The average salt concentration of the drainage water evacuated into the sea and the coastal lakes is . At an annual discharge of (not counting the of salt intrusion from the sea and the lakes, see figure "Water balances"), the annual salt export reaches 27 million tons. In 1995, the output of salt was higher than the influx, and Egypt's agricultural lands were desalinizing. Part of this could be due to the large number of subsurface drainage projects executed in recent decades to control the water table and soil salinity.
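The salt balance described above reduces to a simple difference between influx and export; this sketch uses only the two tonnages quoted in the text:

```python
# Annual salt balance of Egypt's irrigated lands, in million tons per year.
salt_influx = 14.0  # carried in with water released from the Aswan reservoir
salt_export = 27.0  # evacuated with drainage water to the sea and coastal lakes

net_export = salt_export - salt_influx
print(f"net salt export: {net_export} million tons/year")
# A positive net export means the agricultural lands are, on balance, desalinizing.
```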
Drainage through subsurface drains and drainage channels is essential to prevent a deterioration of crop yields from waterlogging and soil salinization caused by irrigation. By 2003, more than have been equipped with a subsurface drainage system and approximately of water is drained annually from areas with these systems. The total investment cost in agricultural drainage over 27 years from 1973 to 2002 was about $3.1 billion covering the cost of design, construction, maintenance, research and training. During this period 11 large-scale projects were implemented with financial support from World Bank and other donors.
Effects
The High Dam has resulted in protection from floods and droughts, an increase in agricultural production and employment, electricity production, and improved navigation that also benefits tourism. Conversely, the dam flooded a large area, causing the relocation of over 100,000 people. Many archaeological sites were submerged while others were relocated. The dam is blamed for coastline erosion, soil salinity, and health problems.
The assessment of the costs and benefits of the dam remains controversial decades after its completion. According to one estimate, the annual economic benefit of the High Dam immediately after its completion was , $587 million using the exchange rate in 1970 of $2.30 per : from agricultural production, from hydroelectric generation, from flood protection, and from improved navigation. At the time of its construction, total cost, including unspecified "subsidiary projects" and the extension of electric power lines, amounted to . Not taking into account the negative environmental and social effects of the dam, its costs are thus estimated to have been recovered within only two years. One observer notes: "The impacts of the Aswan High Dam (...) have been overwhelmingly positive. Although the Dam has contributed to some environmental problems, these have proved to be significantly less severe than was generally expected, or currently believed by many people." Another observer disagreed and he recommended that the dam should be torn down. Tearing it down would cost only a fraction of the funds required for "continually combating the dam's consequential damage" and of fertile land could be reclaimed from the layers of mud on the bed of the drained reservoir. Samuel C. Florman wrote about the dam: "As a structure it is a success. But in its effect on the ecology of the Nile Basin – most of which could have been predicted – it is a failure".
Periodic floods and droughts have affected Egypt since ancient times. The dam mitigated the effects of floods, such as those in 1964, 1973, and 1988. Navigation along the river has been improved, both upstream and downstream of the dam. Sailing along the Nile is a favorite tourism activity, which is mainly done during the winter when the natural flow of the Nile would have been too low to allow navigation of cruise ships. A new fishing industry has been created around Lake Nasser, though it is struggling due to its distance from any significant markets. The annual production was about 35,000 tons in the mid-1990s. Factories for the fishing industry and packaging have been set up near the Lake.
According to a 1971 CIA declassified report, although the High Dam has not created ecological problems as serious as some observers have charged, its construction has brought economic losses as well as gains. These losses derive largely from the settling in dam's lake of the rich silt traditionally borne by the Nile. To date (1971), the main impact has been on the fishing industry. Egypt's Mediterranean catch, which once averaged 35,000–40,000 tons annually, has shrunk to 20,000 tons or less, largely because the loss of plankton nourished by the silt has eliminated the sardine population in Egyptian waters. Fishing in high dam's lake may in time at least partly offset the loss of saltwater fish, but only the most optimistic estimates place the eventual catch as high as 15,000–20,000 tons. Lack of continuing silt deposits at the mouth of the river also has contributed to a serious erosion problem. Commercial fertilizer requirements and salination and drainage difficulties, already large in perennially irrigated areas of Lower and Middle Egypt, will be somewhat increased in Upper Egypt by the change to perennial irrigation.
Drought protection, agricultural production and employment
The dams also protected Egypt from the droughts in 1972–1973 and 1983–1987 that devastated East and West Africa. The High Dam allowed Egypt to reclaim about 2.0 million feddan (840,000 hectares) in the Nile Delta and along the Nile Valley, increasing the country's irrigated area by a third. The increase was brought about both by irrigating what used to be desert and by bringing under cultivation of that were previously used as flood retention basins. About half a million families were settled on these new lands. In particular, the area under rice and sugar cane cultivation increased. In addition, about 1 million feddan (420,000 hectares), mostly in Upper Egypt, were converted from flood irrigation with only one crop per year to perennial irrigation allowing two or more crops per year. On other previously irrigated land, yields increased because water could be made available at critical low-flow periods. For example, wheat yields in Egypt tripled between 1952 and 1991, and better availability of water contributed to this increase. Most of the 32 km3 of freshwater that was previously lost to the sea every year, almost 40 percent of the average flow of the Nile, could be put to beneficial use. While about 10 km3 of the water saved is lost due to evaporation in Lake Nasser, the amount of water available for irrigation still increased by 22 km3. Other estimates put evaporation from Lake Nasser at between 10 and 16 cubic km per year.
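The freshwater gain quoted in this paragraph follows from a two-line balance; this sketch uses only the article's round figures and the lower bound of the evaporation estimates:

```python
# Net gain in usable Nile water after the High Dam, km^3 per year.
previously_lost_to_sea = 32.0   # freshwater that used to reach the sea unused
lake_nasser_evaporation = 10.0  # lower bound of the evaporation estimates

net_gain = previously_lost_to_sea - lake_nasser_evaporation
print(f"additional water available for irrigation: {net_gain} km^3/year")
```

At the upper evaporation estimate of 16 km3/year, the same balance would leave a smaller net gain of 16 km3/year.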
Electricity production
The dam powers twelve generators each rated at , with a total of . Power generation began in 1967. When the High Dam first reached peak output in 1970, it produced around half of Egypt's production of electric power (about 15 percent by 1998), and it gave most Egyptian villages the use of electricity for the first time. The High Dam has also improved the efficiency and the extension of the Old Aswan Hydropower stations by regulating upstream flows. At the time of completion, it was the largest power station in Africa and the 6th largest hydroelectric power station in the world.
All High Dam power facilities were completed ahead of schedule. Twelve turbines were installed and tested, giving the plant an installed capacity of 2,100 megawatts (MW), or more than twice the national total in 1960. With this capacity, the Aswan plant can produce 10 billion kWh of energy yearly. Two 500-kilovolt trunk lines to Cairo have been completed, and initial transmission problems, stemming mainly from poor insulators, were solved. Also, the damage inflicted on a main transformer station in 1968 by Israeli commandos has been repaired, and the Aswan plant is fully integrated with the power network in Lower Egypt. A 1971 estimate held that power output at Aswan would not reach much more than half of the plant's theoretical capacity, because of limited water supplies and the differing seasonal water-use patterns for irrigation and power production. Agricultural demand for water in the summer far exceeds the amount needed to meet the comparatively low summer demand for electric power. Heavy summer irrigation use, however, was expected to leave insufficient water under Egyptian control to permit hydroelectric power production at full capacity in the winter. Technical studies indicate that a maximum annual output of 5 billion kWh appears to be all that can be sustained due to fluctuations in Nile flows. Aswan High Dam electricity production is expected to be impacted by upstream mega-dams during extended drought periods.
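The relationship between installed capacity and the energy figures quoted above can be sketched as follows; the per-turbine rating and the capacity factor are derived quantities, not stated in the source:

```python
# Theoretical vs. quoted annual output of the Aswan High Dam plant.
installed_mw = 12 * 175   # twelve turbines, 2,100 MW total installed capacity
hours_per_year = 8760

theoretical_bkwh = installed_mw * hours_per_year / 1e6  # billion kWh/year
design_bkwh = 10.0      # quoted plant capability
sustained_bkwh = 5.0    # quoted sustainable maximum, limited by Nile flows

print(f"theoretical maximum: {theoretical_bkwh:.1f} billion kWh/year")  # ~18.4
print(f"design output is {design_bkwh / theoretical_bkwh:.0%} of theoretical")
```

The ~54% ratio is consistent with the 1971 estimate that output would not reach "much more than half" of theoretical capacity; the 5 billion kWh sustainable figure is lower still.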
Resettlement and compensations
Lake Nasser flooded much of lower Nubia and 100,000 to 120,000 people were resettled in Sudan and Egypt.
In Sudan, 50,000 to 70,000 Sudanese Nubians were moved from the old town of Wadi Halfa and its surrounding villages. Some were moved to a newly created settlement on the shore of Lake Nasser called New Wadi Halfa, and some were resettled approximately south to the semi-arid Butana plain near the town of Khashm el-Girba up the Atbara River. The climate there had a regular rainy season as opposed to their previous desert habitat in which virtually no rain fell. The government developed an irrigation project, called the New Halfa Agricultural Development Scheme to grow cotton, grains, sugar cane and other crops. The Nubians were resettled in twenty five planned villages that included schools, medical facilities, and other services, including piped water and some electrification.
In Egypt, the majority of the 50,000 Nubians were moved three to ten kilometers from the Nile near Edna and Kom Ombo, downstream from Aswan in what was called "New Nubia". Housing and facilities were built for 47 village units whose relationship to each other approximated that in Old Nubia. Irrigated land was provided to grow mainly sugar cane.
In 2019–20, Egypt started to compensate the Nubians who lost their homes following the dam impoundment.
Archaeological sites
Twenty-two monuments and architectural complexes that were threatened by flooding from Lake Nasser, including the Abu Simbel temples, were preserved by moving them to the shores of the lake under the UNESCO Nubia Campaign. Also moved were Philae, Kalabsha and Amada.
These monuments were granted to countries that helped with the works:
The Debod temple to Madrid
The Temple of Dendur to the Metropolitan Museum of Art of New York
The Temple of Taffeh to the Rijksmuseum van Oudheden of Leiden
The Temple of Ellesyia to the Museo Egizio of Turin
These items were removed to the garden area of the Sudan National Museum of Khartoum:
The temple of Ramses II at Aksha
The temple of Hatshepsut at Buhen
The temple of Khnum at Kumma
The tomb of the Nubian prince Djehuti-hotep at Debeira
The temples of Dedwen and Sesostris III at Semna
The granite columns from the Faras Cathedral
A part of the paintings of the Faras Cathedral; the other part is in the National Museum of Warsaw.
The Temple of Ptah at Gerf Hussein had its free-standing section reconstructed at New Kalabsha, alongside the Temple of Kalabsha, Beit el-Wali, and the Kiosk of Qertassi.
The remaining archaeological sites, including the Buhen fort and the cemetery of Fadrus have been flooded by Lake Nasser.
Loss of sediments
Before the construction of the High Dam, the Nile deposited sediments of various particle size – consisting of fine sand, silt and clay – on fields in Upper Egypt through its annual flood, contributing to soil fertility. However, the nutrient value of the sediment has often been overestimated. 88 percent of the sediment was carried to the sea before the construction of the High Dam. The nutrient value added to the land by the sediment was only 6,000 tons of potash, 7,000 tons of phosphorus pentoxide and 17,000 tons of nitrogen. These amounts are insignificant compared to what is needed to reach the yields achieved today in Egypt's irrigation. Also, the annual spread of sediment due to the Nile floods occurred along the banks of the Nile. Areas far from the river which never received the Nile floods before are now being irrigated.
A more serious consequence of the dam's sediment trapping is increased coastline erosion around the Nile Delta, although reliable statistics on its extent are lacking.
Waterlogging and increase in soil salinity
Before the construction of the High Dam, groundwater levels in the Nile Valley fluctuated per year with the water level of the Nile. During summer when evaporation was highest, the groundwater level was too deep to allow salts dissolved in the water to be pulled to the surface through capillary action. With the disappearance of the annual flood and heavy year-round irrigation, groundwater levels remained high with little fluctuation leading to waterlogging. Soil salinity also increased because the distance between the surface and the groundwater table was small enough (1–2 m depending on soil conditions and temperature) to allow water to be pulled up by evaporation so that the relatively small concentrations of salt in the groundwater accumulated on the soil surface over the years. Since most of the farmland did not have proper subsurface drainage to lower the groundwater table, salinization gradually affected crop yields. Drainage through sub-surface drains and drainage channels is essential to prevent a deterioration of crop yields from soil salinization and waterlogging. By 2003, more than 2 million hectares have been equipped with a subsurface drainage system at a cost from 1973 to 2002 of about $3.1 billion.
Health
Contrary to many predictions, made both before the Aswan High Dam's construction and in publications that followed, the prevalence of schistosomiasis (bilharzia) did not increase. These predictions did not take into account the extent of perennial irrigation that was already present throughout Egypt decades before the high dam closure. By the 1950s only a small proportion of Upper Egypt had not been converted from basin (low transmission) to perennial (high transmission) irrigation. Expansion of perennial irrigation systems in Egypt did not depend on the high dam. In fact, within 15 years of the high dam closure there was solid evidence that bilharzia was declining in Upper Egypt. S. haematobium has since disappeared altogether. Suggested reasons for this include improvements in irrigation practice. In the Nile Delta, schistosomiasis had been highly endemic, with prevalence in the villages 50% or higher, for almost a century before. This was a consequence of the conversion of the Delta to perennial irrigation by the British to grow long-staple cotton. This has changed. Large-scale treatment programmes in the 1990s using single-dose oral medication contributed greatly to reducing the prevalence and severity of S. mansoni in the Delta.
Other effects
Sediment deposited in the reservoir is lowering the water storage capacity of Lake Nasser. The reservoir storage capacity is 162 km3, including 31 km3 dead storage at the bottom of the lake below above sea level, 90 km3 live storage, and 41 km3 of storage for high flood waters above above sea level. The annual sediment load of the Nile is about 134 million tons. This means that the dead storage volume would be filled up after 300–500 years if the sediment accumulated at the same rate throughout the area of the lake. Obviously sediment accumulates much faster at the upper reaches of the lake, where sedimentation has already affected the live storage zone.
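The 300–500-year figure for filling the dead storage can be reproduced from the quoted load and volume, given an assumed bulk density for the deposited sediment (the density is an assumption for illustration, not a figure from the article):

```python
# Time to fill Lake Nasser's dead storage with sediment, assuming
# deposition were spread evenly over the whole lake.
dead_storage_m3 = 31e9    # 31 km^3 dead storage, from the article
annual_load_tons = 134e6  # annual Nile sediment load, from the article
bulk_density = 1.3        # t/m^3 -- assumed deposit density, not in the source

years_to_fill = dead_storage_m3 * bulk_density / annual_load_tons
print(round(years_to_fill))  # ~301 years, at the low end of the 300-500 range
```

A lower assumed density lengthens the estimate, which is one way the 500-year upper bound can arise; in practice, deposition concentrated at the upper reaches of the lake fills the live storage zone faster than this average suggests.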
Before the construction of the High Dam, the of irrigation and drainage canals in Egypt had to be dredged regularly to remove sediments. After construction of the dam, aquatic weeds grew much faster in the clearer water, helped by fertilizer residues. The total length of the infested waterways was about in the mid-1990s. Weeds have been gradually brought under control by manual, mechanical and biological methods.
Mediterranean fishing and brackish water lake fishery declined after the dam was finished because nutrients that flowed down the Nile to the Mediterranean were trapped behind the dam. For example, the sardine catch off the Egyptian coast declined from 18,000 tons in 1962 to a mere 460 tons in 1968, but then gradually recovered to 8,590 tons in 1992. A scientific article in the mid-1990s noted that "the mismatch between low primary productivity and relatively high levels of fish production in the region still presents a puzzle to scientists."
A concern before the construction of the High Dam had been the potential drop in river-bed level downstream of the Dam as the result of erosion caused by the flow of sediment-free water. Estimates by various national and international experts put this drop at between and . However, the actual drop has been measured at , much less than expected.
The red-brick construction industry, which consisted of hundreds of factories that used Nile sediment deposits along the river, has also been negatively affected. Deprived of sediment, they started using the older alluvium of otherwise arable land, taking out of production up to annually, with an estimated destroyed by 1984 when the government prohibited, "with only modest success," further excavation. According to one source, bricks are now being made with new techniques that use a sand-clay mixture, and it has been argued that the mud-based brick industry would have suffered even if the dam had not been built.
Because of the water's lower turbidity, sunlight penetrates deeper into the Nile. Because of this, and the increased presence of nutrients from fertilizers in the water, more algae grow in the Nile. This in turn increases the cost of drinking water treatment. Apparently few experts had expected that water quality in the Nile would actually decrease because of the High Dam.
| Technology | Hydraulic infrastructure | null |
191304 | https://en.wikipedia.org/wiki/Schistosomiasis | Schistosomiasis | Schistosomiasis, also known as snail fever, bilharzia, and Katayama fever, is a disease caused by parasitic flatworms called schistosomes. It affects the urinary tract or the intestines. Symptoms include abdominal pain, diarrhea, bloody stool, or blood in the urine. Those who have been infected for a long time may experience liver damage, kidney failure, infertility, or bladder cancer. In children, schistosomiasis may cause poor growth and learning difficulties.
Schistosomiasis is spread by contact with fresh water contaminated with parasites. These parasites are released from infected freshwater snails. The disease is especially common among children in underdeveloped and developing countries because they are more likely to play in contaminated water. Schistosomiasis is also common among women, who may have greater exposure through daily chores that involve water, such as washing clothes and fetching water. Other high-risk groups include farmers, fishermen, and people using unclean water during daily living. Schistosomiasis belongs to the group of helminth infections. Diagnosis is made by finding the parasite’s eggs in a person's urine or stool. It can also be confirmed by finding antibodies against the disease in the blood.
Methods of preventing the disease include improving access to clean water and reducing the number of snails. In areas where the disease is common, the medication praziquantel may be given once a year to the entire group. This is done to decrease the number of people infected, and consequently, the spread of the disease. Praziquantel is also the treatment recommended by the World Health Organization (WHO) for those who are known to be infected.
In 2019, schistosomiasis impacted approximately 236.6 million individuals across the globe. Each year, it is estimated that between 4,400 and 200,000 individuals succumb to it. The illness predominantly occurs in regions of Africa, Asia, and South America. Approximately 700 million individuals across over 70 nations reside in regions where the disease is prevalent. In tropical regions, schistosomiasis ranks as the second most economically significant parasitic disease, following malaria. Schistosomiasis is classified as a neglected tropical disease.
Signs and symptoms
Many individuals do not experience symptoms. If symptoms do appear, they usually develop 4–6 weeks after infection. The first symptom of the disease may be a general feeling of illness. Within 12 hours of infection, an individual may complain of a tingling sensation or light rash, commonly referred to as "swimmer's itch", due to irritation at the point of entrance. The rash that may develop can mimic scabies and other rashes.
The manifestation of a schistosomal infection varies over time as the larval form of the parasite cercariae and later adult worms and their eggs migrate through the body. If eggs migrate to the brain or spinal cord, seizures, paralysis, or spinal-cord inflammation are possible.
Acute infection
Manifestation of acute infection from schistosoma includes cercarial dermatitis (hours to days) and acute systemic schistosomiasis (2–8 weeks) which can include symptoms of fever, myalgia, a cough, bloody diarrhea, chills, or lymph node enlargement. Some patients may also experience dyspnea and hypoxia associated with the development of pulmonary infiltrates.
Cercarial dermatitis
The first potential reaction is an itchy, maculopapular rash that appears within the first 12 hours to a few days of cercariae penetrating the skin. The first time a non-sensitized person is exposed, the rash is usually mild, with an associated prickling sensation, and quickly disappears on its own, since this is a type of hypersensitivity reaction. In sensitized people who have previously been infected, the rash can develop into itchy, red, raised lesions (papules), with some turning into fluid-filled lesions (vesicles). Previous infections with cercariae cause a faster-developing and worse presentation of dermatitis due to the stronger immune response. The round bumps are usually one to three centimeters across. Because people living in affected areas have often been repeatedly exposed, acute reactions are more common in tourists and migrants. The rash can occur between the first few hours and a week after exposure, and it normally resolves on its own in around 7–10 days. For human schistosomiasis, a similar type of dermatitis called "swimmer's itch" can also be caused by cercariae from animal trematodes that often infect birds. Cercarial dermatitis is not contagious and cannot be transmitted from person to person.
Symptoms may include:
Flat, red rash
Small red, raised pimples
Small red blisters
Prickling or tingling sensation, burning, itching of the skin
Scratching the rash can lead to secondary bacterial infection of the skin, so it is important to refrain from scratching. Common treatments for itching include corticosteroid cream, anti-itch lotion, cool compresses applied to the rash, bathing in Epsom salts or baking soda, and, for severe itching, prescription-strength creams and lotions. Oral antihistamines can also help relieve the itching.
Acute schistosomiasis (Katayama fever)
Acute schistosomiasis (Katayama fever) may occur weeks to months (around 2–8 weeks) after the initial infection as a systemic reaction against migrating schistosomulae as they pass through the bloodstream via the lungs to the liver, and against the antigens of eggs. Like swimmer's itch, Katayama fever is more commonly seen in people experiencing their first infection, such as migrants and tourists, and it is associated with heavy infection. S. japonicum, however, can cause acute schistosomiasis even in chronically infected native residents of endemic regions of China, and these cases can take a more severe form.
Symptoms may include:
Dry cough with changes on chest X-ray
Fever
Fatigue
Muscle aches
Headache
Malaise
Abdominal pain
Diarrhea
Enlargement of both the liver and the spleen
Hives
Acute schistosomiasis usually self-resolves in 2–8 weeks in most cases, but a small proportion of people have persistent weight loss, diarrhea, diffuse abdominal pain, and rash.
Neurological complications may include:
Spinal cord inflammation (transverse myelitis) may occur if worms or eggs travel to the spinal cord during this acute phase of infection.
Headaches
Disturbances of sensorium
Hemiplegia
Tetraplegia
Visual impairment
Ataxia or speech impairment
Motor paralysis
Cardiac complications may include:
Myocarditis
Pericarditis
Asymptomatic myocardial ischemia
Treatment may include:
Corticosteroids such as prednisone are used to alleviate the hypersensitivity reaction and reduce inflammation.
Praziquantel can be administered alongside corticosteroid therapy to kill adult schistosomes and prevent chronic infection. Because it targets adult worms rather than immature schistosomulae, it is ineffective against recent infections: in acute schistosomiasis (AS), praziquantel does not act on schistosomulae more than 7 days old and does not prevent the chronic phase of the disease. Treatment with praziquantel is therefore recommended 4–6 weeks after initial exposure, once the worms have matured, and a repeat course several weeks after the initial treatment may be warranted. Treating too early can worsen the symptoms of AS; in some cases this worsening is life-threatening, causing encephalitis related to vasculitis, myocarditis, or pulmonary events.
Oxamniquine (50 mg/kg once) can be administered in the early phase of schistosomiasis. It is more effective against schistosomulae than praziquantel, but only against S. mansoni; it thus prevents chronic S. mansoni infection and its egg-laying stages.
Artemether is an artemisinin derivative effective against schistosomulae aged 7–21 days, but it reduces S. mansoni infection by only 50% in exposed children.
Chronic infection
In long-established disease, adult worms lay eggs that can cause inflammatory reactions. The eggs secrete proteolytic enzymes that help them migrate to the bladder and intestines to be shed. The enzymes also cause an eosinophilic inflammatory reaction when eggs get trapped in tissues or embolize to the liver, spleen, lungs, or brain. The long-term manifestations are dependent on the species of schistosome, as the adult worms of different species migrate to different areas. Many infections are mildly symptomatic, with anemia and malnutrition being common in endemic areas.
Intestinal schistosomiasis
The worms of S. mansoni and S. japonicum migrate to the gastrointestinal tract and liver veins. Eggs in the gut wall can lead to pain, blood in the stool, and diarrhea (especially in children). Severe disease can lead to narrowing of the colon or rectum.
In intestinal schistosomiasis, eggs become lodged in the intestinal wall during their migration from the mesenteric venules to the intestinal lumen, and the trapped eggs provoke an immune response called a granulomatous reaction. They mostly affect the large bowel and rectum; involvement of the small bowel is rarer. This immune response can lead to colonic obstruction and blood loss. The infected individual may have what appears to be a potbelly. There is a strong correlation between the morbidity of intestinal schistosomiasis and the intensity of infection; in light infections, symptoms may be mild and can go unrecognized. The most common species causing intestinal schistosomiasis are S. mansoni and S. japonicum; however, S. mekongi and S. intercalatum can also cause this disease.
Symptoms may include:
Abdominal pain and discomfort
Loss of appetite
Mucous diarrhea with or without gross blood
Blood in feces that is not visibly present (fecal occult blood)
Abdominal distention
Complications may include:
Intestinal polyps
Intestinal ulcers
Iron-deficient anemia
Fistula
Bowel strictures (narrowing of colon or rectum)
Protein-losing enteropathy
Partial or complete bowel obstruction
Appendicitis (rare)
Approximately 10–50% of people living in regions where S. mansoni and S. japonicum are endemic develop intestinal schistosomiasis. S. mansoni infection epidemiologically overlaps with high HIV prevalence in Sub-Saharan Africa, where gastrointestinal schistosomiasis has been linked to increased HIV transmission.
Hepatosplenic schistosomiasis
Eggs also migrate to the liver, leading to fibrosis in 4 to 8% of people with chronic infection, mainly those with long-term heavy infection.
Eggs can become lodged in the liver, leading to portal hypertension, splenomegaly, the buildup of fluid in the abdomen, and potentially life-threatening dilations or swollen areas in the esophagus or gastrointestinal tract that can tear and bleed profusely (esophageal varices). This condition can be separated into two phases: inflammatory hepatic schistosomiasis (early inflammatory reaction) and chronic hepatic schistosomiasis. The most common species causing this condition are S. mansoni, S. japonicum, and S. mekongi.
Inflammatory hepatic schistosomiasis
This condition occurs mainly in children and adolescents due to early immune reaction to eggs trapped within the periportal and presinusoidal spaces of the liver creating numerous granulomas. Liver function is unaffected, and the severity of liver and spleen enlargement is correlated to the intensity of the infection. It is characterized by an enlarged left lobe of the liver with a sharp edge and an enlarged spleen with nodules. The liver and spleen enlargement is usually mild, but in severe cases, they can enlarge to the level of the belly button and even into the pelvis.
Chronic (fibrotic) hepatic schistosomiasis
This is a late-stage liver disease that occurs mainly in young and middle-aged adults who have been chronically and heavily infected and whose immune regulation of fibrosis is not functioning properly. It affects only a small proportion of infected people. Unlike in cirrhosis, liver function and liver architecture are not affected. The disease is caused by deposition of collagen and extracellular matrix proteins within the periportal space, which leads to portal fibrosis and enlarged fibrotic portal tracts (Symmers' pipe-stem fibrosis). The periportal fibrosis physically compresses the portal vein, leading to portal hypertension (increased portal venous pressure), increased pressure in the splenic vein, and subsequent enlargement of the spleen. Portal hypertension can also increase the pressure in portosystemic anastomoses (vessel connections between the portal and systemic circulations), leading to esophageal varices and caput medusae. These anastomoses also provide a pathway for eggs to travel to locations such as the lungs, spinal cord, or brain. Co-infection with hepatitis B or C is common in regions where both hepatitis and schistosomiasis are endemic, and co-infection with hepatitis C is associated with more rapid liver deterioration and worse outcomes. Fibrotic hepatic schistosomiasis caused by S. mansoni usually develops over around 5–15 years, while S. japonicum can take less time.
Symptoms may include:
Esophageal varices (can cause life-threatening esophageal variceal bleed)
Ascites (end-stage)
Caput medusae
Enlarged spleen and liver
Complications may include:
Neuroschistosomiasis due to portosystemic anastomoses from portal hypertension
Pulmonary schistosomiasis due to portosystemic anastomoses from portal hypertension
Pulmonary schistosomiasis
Portal hypertension secondary to hepatosplenic schistosomiasis can cause vessel connections between the portal (liver and gut) circulation and systemic circulation to develop, which creates a pathway for the eggs and worms to travel to the lungs. The eggs can be deposited around the alveolar capillary beds and cause granulomatous inflammation of the pulmonary arterioles followed by fibrosis. This leads to high blood pressure in the pulmonary circulation (pulmonary hypertension), increased pressure in the right heart, enlargement of the pulmonary artery and right atrium, and thickening of the right ventricular wall.
Symptoms of pulmonary hypertension may include:
Shortness of breath
Chest pain
Feeling tired
Fainting during physical exertion
Urogenital schistosomiasis
The worms of S. haematobium migrate to the veins around the bladder and ureters, where they reproduce. S. haematobium can produce up to 3000 eggs per day. These eggs migrate from the veins to the bladder and ureter lumens, but up to 50 percent of them can become trapped in the surrounding tissues, causing granulomatous inflammation, polyp formation, and ulceration of bladder, ureter, and genital tract tissues. This can lead to blood in the urine 10 to 12 weeks after infection. Over time, fibrosis can lead to obstruction of the urinary tract, hydronephrosis, and kidney failure. Bladder cancer diagnosis and mortality are generally elevated in affected areas; efforts to control schistosomiasis in Egypt have led to decreases in the bladder cancer rate. The risk of bladder cancer appears to be especially high in male smokers, perhaps because chronic irritation of the bladder lining exposes it to carcinogens from smoking.
In women, the genitourinary disease can also include genital lesions that may lead to increased rates of HIV transmission. If lesions involve the fallopian tubes or ovaries, it may lead to infertility. If the reproductive organs in males are affected, there could be blood in the sperm.
Urinary symptoms may include:
Blood in the urine - blood is usually seen at the end of a urine stream (most common symptom)
Painful urination
Increased frequency of urination
Protein in the urine
Secondary urinary tract infection
Secondary kidney infection
Calcification of the bladder wall
Genital symptoms may include:
Inflammation and ulceration of uterine cervix, vagina, or vulva
Blood in the sperm
Infertility in females
Kidney function is unaffected in many cases, and the lesions are reversible with proper treatment to eliminate the worms.
Neuroschistosomiasis
Central nervous system lesions occur occasionally due to inflammation and granuloma development around eggs or worms that migrate to the brain or spinal cord through the circulatory system. Without proper treatment, these lesions can develop into irreversible scarring. Cerebral granulomatous disease may be caused by S. japonicum eggs in the brain during the acute and chronic phases of the disease. Communities in China affected by S. japonicum have rates of seizures eight times higher than baseline. Cerebral granulomatous infection may also be caused by S. mansoni. In situ egg deposition following the anomalous migration of the adult worm appears to be the only mechanism by which Schistosoma can reach the central nervous system. The destructive action on the nervous tissue, and the mass effect produced by large numbers of eggs surrounded by multiple large granulomas in circumscribed areas of the brain, characterize the pseudotumoral form of neuroschistosomiasis and are responsible for its clinical manifestations: headache, hemiparesis, altered mental status, vertigo, visual abnormalities, seizures, and ataxia. Similarly, granulomatous lesions from S. mansoni and S. haematobium eggs in the spinal cord can lead to transverse myelitis (inflammation of the spinal cord) with flaccid paraplegia. In cases of advanced hepatosplenic and urinary schistosomiasis, the continuous embolization of eggs from the portal mesenteric system (S. mansoni) or portal mesenteric-pelvic system (S. haematobium) to the brain results in a sparse distribution of eggs with scant periovular inflammatory reaction, usually of little or no clinical significance.
Spinal cord inflammation (transverse myelitis) symptoms may include:
Paralysis of the lower extremities
Loss of bowel or urinary control
Loss of sensation below the level of the lesion
Pain below the level of the lesion
Cerebral granulomatous infection symptoms may include:
Seizures
Headaches
Motor impairment
Sensory impairment
Cerebellar symptoms
Unsteady gait
Inability to stand or sit without support
Uncoordinated movements
Scanning speech
Irregular eye movements
Corticosteroids are used to prevent permanent neurological damage from the inflammatory response to the eggs, and sometimes anticonvulsants are needed to stop the seizures. Corticosteroids are given prior to administration of praziquantel.
Transmission and life cycle
Individuals infected with Schistosoma release eggs into water via their feces or urine. Microscopic examination of stool samples can reveal the eggs of S. intercalatum, S. mansoni, and S. japonicum, while examination of urine can reveal the eggs of S. haematobium and, rarely, those of S. mansoni. After larvae hatch from these eggs, they infect a very specific type of freshwater snail: for S. haematobium and S. intercalatum, snails of the genus Bulinus; for S. mansoni, Biomphalaria; and for S. japonicum, Oncomelania. The schistosome larvae undergo the next phase of their life cycle in these snails, reproducing and developing. Once this step is completed, the parasite leaves the snail and enters the water column, where it can survive for only 48 hours without a mammalian host. Once a host has been found, the worm enters its blood vessels, where it remains for several weeks while developing into its adult phase. At maturity, mating occurs and eggs are produced. The eggs enter the bladder or intestine, are excreted through urine and feces, and the process repeats. Eggs that are not excreted can become lodged in body tissues and cause myriad problems, such as immune reactions and organ damage. While transmission typically occurs only in countries where the freshwater snails are native, a case was reported in Germany of a man who contracted schistosomiasis from an infected snail in his aquarium.
Humans encounter larvae of the schistosome parasite when they enter contaminated water while bathing, playing, swimming, washing, fishing, or walking through the water.
Life cycle
The life cycle stages:
The excretion of schistosome eggs in urine or feces depending on the species
The hatching of the eggs leads to the release of the free-swimming, ciliated larvae called miracidia
Miracidia find and penetrate the snails, which are the intermediate hosts (specific species of snails are dependent on the species of Schistosoma)
Within the snails, two successive generations of sporocysts occur
Sporocysts give rise to the infective free-swimming larvae with forked tails called cercariae, and they leave the snails to enter the water
Cercariae find the human hosts and penetrate their skin
Upon entrance into the human hosts, cercariae lose their tails and become schistosomulae
The schistosomulae travel to the lungs and heart via the venous circulation
They migrate to the portal venous system of the liver where they mature into the adult form with two separate sexes
The adult male and female are paired together, exit the liver via the portal venous system, travel to the venous systems of the intestines or bladder (species dependent), and produce eggs
S. japonicum - superior mesenteric veins (but can also inhabit inferior mesenteric veins)
S. mansoni - inferior mesenteric veins (but can also inhabit superior mesenteric veins)
S. haematobium - vesicular and pelvic venous plexus of the bladder (occasionally rectal venules)
S. intercalatum and S. guineensis - inferior mesenteric plexus (lower portion of the bowels compared to S. mansoni)
Schistosomes can live an average of 3–5 years, and the eggs can survive for more than 30 years after infection.
Other hosts
Schistosomiasis is also a concern in cattle husbandry and in mice. O-methylthreonine is weakly effective in mouse schistosomiasis but is not in use.
Pathogenesis
The infectious stage starts when cercariae, the free-swimming larval form of the schistosome, penetrate the human skin using their suckers, proteolytic enzymes, and tail movements. Each cercaria transforms into a schistosomulum by losing its tail and then travels through the venous system to the heart and lungs until it eventually reaches the liver, where it matures into the adult form. The diseases caused by the schistosomes are characterized as acute or chronic schistosomiasis and can vary depending on the species of schistosome.
Acute infection
Minutes to days after initial infection:
Cercarial dermatitis (Swimmer's itch) - swimmer's itch is caused by a localized allergic reaction at the sites of skin penetration by the cercariae, producing an inflammatory reaction characterized by itchy red pimples and blisters.
A few weeks to months after the initial infection:
Acute Schistosomiasis (Katayama's Fever) - the exact pathophysiology of this disease remains unknown. It has been hypothesized to be caused by a systemic immune response involving immune complex formation (type III hypersensitivity) with the foreign antigens on the migratory schistosomulae and the eggs, and the subsequent deposition of these complexes on various tissues, leading to activation of an autoimmune response. Acute schistosomiasis caused by S. mansoni and S. haematobium generally affects people who have been infected for the first time, such as tourists visiting endemic regions. In contrast, cases of acute schistosomiasis caused by S. japonicum can occur upon reinfection in populations residing in endemic regions, occur at higher incidence, and can have a worse prognosis. It has been proposed that the large amount of egg antigens released by S. japonicum interacts with antibodies to form a high volume of immune complexes, which cause enlargement of the lymph tissues. This sequence of events can lead to clinical manifestations of fever, enlargement of the spleen and liver due to fibrosis, portal hypertension, and death.
Chronic infection
The clinical manifestations of chronic infection are mainly caused by the immune reaction to eggs entrapped within tissues, resulting in granuloma formation and chronic inflammation. Adult worms live together in pairs (one male, one female), sexually reproduce, and lay eggs in the veins around the intestines or bladder depending on the species, and these eggs can rupture the walls of the veins to escape into the surrounding tissues. The eggs make their way through the tissues to the intestinal or bladder lumen with proteolytic enzymes; however, a large number of eggs are unable to finish their journey and remain stuck within the tissues, where they can elicit an immune response. The miracidia in these eggs can release antigens that stimulate an inflammatory immune response; they live for around 6–8 weeks before they die and stop releasing antigens. The granulomatous response is a cellular immune response mediated by CD4+ T cells, neutrophils, eosinophils, lymphocytes, macrophages, and monocytes, and this chronic inflammatory response elicited by the eggs can cause fibrosis, tissue destruction, and granuloma nodules that disrupt the functions of the organs involved. A Th1 helper cell response, releasing cytokines such as IFN-γ, is prominent during the early phases of infection; as egg production progresses, it transitions to a Th2 response with increased levels of IgE, IL-4, and eosinophils. In chronic infections, the Th2 response shifts toward increased levels of IL-10, IL-13, and IgG4, which reverses the progression of the granulomas and leads to collagen deposition at their sites. The specific clinical symptoms and severity of disease depend on the type of schistosome infection, the duration of infection, the number of eggs, and the organs in which the eggs are deposited. The number of eggs entrapped in the tissues will continue to increase if the Schistosoma are not eliminated.
Diagnosis
Identification of eggs in stools
Diagnosis of infection is confirmed by the identification of eggs in stools. Eggs of S. mansoni are about 140 by 60 μm in size and have a lateral spine. The diagnosis is improved through the use of the Kato-Katz technique, a semiquantitative stool examination technique. Other methods that can be used are enzyme-linked immunosorbent assay, circumoval precipitation test, and alkaline phosphatase immunoassay.
Microscopic identification of eggs in stool or urine is the most practical method for diagnosis. A stool examination should be performed when infection with S. mansoni or S. japonicum is suspected, and a urine examination should be performed if S. haematobium is suspected. Eggs can be present in the stool in infections with all Schistosoma species. The examination can be performed on a simple smear (1 to 2 mg of fecal material). Because eggs may be passed intermittently or in small numbers, their detection is enhanced by repeated examinations or concentration procedures, or both. In addition, for field surveys and investigational purposes, the egg output can be quantified by using the Kato-Katz technique (20 to 50 mg of fecal material) or the Ritchie technique. Eggs can be found in the urine in infections with S. haematobium (recommended time for collection: between noon and 3 PM) and with S. japonicum. Quantification is possible by using filtration through a nucleopore filter membrane of a standard volume of urine followed by egg counts on the membrane. Tissue biopsy (rectal biopsy for all species and biopsy of the bladder for S. haematobium) may demonstrate eggs when stool or urine examinations are negative.
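As a numerical illustration of the Kato-Katz quantification mentioned above, the egg count from a fixed-mass stool template is commonly scaled to eggs per gram (EPG). The sketch below assumes the widely used 41.7 mg template (a multiplication factor of about 24); the function names are illustrative, not part of any standard software:

```python
# Hypothetical sketch: converting a Kato-Katz slide count to eggs per gram (EPG).
# Assumes the standard 41.7 mg template (multiplication factor ~24);
# the actual factor depends on the template used.

def eggs_per_gram(slide_count: int, template_mg: float = 41.7) -> float:
    """Scale the egg count on one slide to eggs per gram of stool."""
    return slide_count * (1000.0 / template_mg)

def mean_epg(slide_counts: list[int]) -> float:
    """Average EPG over duplicate slides, as is common in field surveys."""
    return sum(eggs_per_gram(c) for c in slide_counts) / len(slide_counts)

print(round(eggs_per_gram(10)))  # 10 eggs on one 41.7 mg slide -> 240 EPG
```

Averaging over duplicate slides partially compensates for the intermittent egg output noted above.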
Confirming microhematuria in urine using urine reagent strips is more accurate than circulating antigen tests to identify active schistosomiasis in endemic areas.
Antibody detection
Antibody detection can be useful to indicate schistosome infection in people who have traveled to areas where schistosomiasis is common and in whom eggs cannot be demonstrated in fecal or urine specimens. Test sensitivity and specificity vary widely among the many tests reported for the serologic diagnosis of schistosomiasis and depend on the type of antigen preparations used (crude, purified, adult worm, egg, cercarial) and the test procedure.
At the U.S. Centers for Disease Control and Prevention, a combination of tests with purified adult worm antigens is used for antibody detection. All serum specimens are tested by FAST-ELISA using S. mansoni adult microsomal antigen. A positive reaction (greater than 9 units/μL serum) indicates infection with Schistosoma species. Sensitivity for S. mansoni infection is 99%, 95% for S. haematobium infection, and less than 50% for S. japonicum infection. The specificity of this assay for detecting schistosome infection is 99%. Because test sensitivity with the FAST-ELISA is reduced for species other than S. mansoni, immunoblots of the species appropriate to the person's travel history are also tested to ensure detection of S. haematobium and S. japonicum infections. Immunoblots with adult worm microsomal antigens are species-specific, so a positive reaction indicates the infecting species. The antibody's presence only indicates that schistosome infection occurred at some time and cannot be correlated with clinical status, worm burden, egg production, or prognosis. Where a person has traveled can help determine which Schistosoma species to test for by immunoblot.
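The two-step CDC workflow described above can be summarized as a simple decision rule. This is only an illustrative sketch: the 9 units/μL threshold and the travel-history-guided immunoblot follow-up come from the text, while the function and variable names are hypothetical:

```python
# Illustrative sketch of the two-step serologic workflow described above.
# The threshold (9 units/uL serum) and the immunoblot follow-up logic are from
# the text; names here are hypothetical, not a real CDC API.

FAST_ELISA_POSITIVE_THRESHOLD = 9  # units/uL serum

def interpret_serology(fast_elisa_units: float, travel_species: list[str]) -> str:
    """Interpret a FAST-ELISA result, with immunoblots guided by travel history."""
    if fast_elisa_units <= FAST_ELISA_POSITIVE_THRESHOLD:
        # FAST-ELISA sensitivity is reduced for species other than S. mansoni,
        # so species-specific immunoblots are still warranted when travel
        # history suggests S. haematobium or S. japonicum exposure.
        if travel_species:
            return "FAST-ELISA negative; run immunoblots for: " + ", ".join(travel_species)
        return "FAST-ELISA negative"
    return "FAST-ELISA positive: past or present Schistosoma infection (species by immunoblot)"

print(interpret_serology(12.0, ["S. haematobium"]))
```

Note that, as the text states, a positive antibody result cannot be correlated with clinical status, worm burden, or prognosis.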
In 2005, a field evaluation of a novel handheld microscope was undertaken in Uganda for the diagnosis of intestinal schistosomiasis by a team led by Russell Stothard from the Natural History Museum of London, working with the Schistosomiasis Control Initiative, London.
Molecular diagnostics
Polymerase chain reaction (PCR) based testing is accurate and rapid. However, it is not frequently used in countries where the disease is common due to the cost of the equipment and the technical expertise required to run the tests. As of 2019, detecting eggs with a microscope costs about US$0.40 per test, whereas PCR costs about US$7 per test. Loop-mediated isothermal amplification (LAMP) tests are being studied because of their lower cost; as of 2019, LAMP testing is not commercially available.
Laboratory testing
S. haematobium screening in the community can be done using urine dipsticks to check for hematuria, and the stool guaiac test can be used to check for blood in the stool in potential S. mansoni and S. japonicum infection. For travelers to or migrants from endemic regions, a complete blood count with differential can identify a high level of eosinophils in the blood, which could indicate an acute infection. Liver function tests can be ordered if hepatosplenic schistosomiasis is suspected, with a subsequent hepatitis test panel if the results are abnormal.
Like most parasitic infections, schistosomiasis will usually cause significant eosinophilia that can be identified on a complete blood count with differential.
Tissue biopsy
If other diagnostic methods of schistosomiasis have failed to detect the infection, but there is still a high suspicion for schistosomiasis, tissue biopsy from the rectum, bladder, and liver can be obtained to look for schistosome eggs within the tissue samples.
Imaging
Imaging modalities such as X-rays, ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) can be utilized to evaluate the severity of schistosomiasis and damage to the infected organs. For example, X-ray and CT scans of the chest can be used to detect lesions in the lungs from pulmonary schistosomiasis, and pelvic X-ray can reveal calcification of the bladder in chronic urinary schistosomiasis. Ultrasound may be used to look for abnormalities in the liver and spleen in hepatosplenic schistosomiasis, and CT of the liver is a good tool for finding the liver calcification associated with S. japonicum infection. CT can also be used to assess damage from schistosomiasis in the intestinal, urogenital, and central nervous systems. MRI is used to evaluate schistosomiasis of the central nervous system, liver, and genital tract.
PET/CT scans that identify tissues with higher metabolic activity have been used to help diagnose schistosomiasis in rare cases. This is due to the high level of inflammation caused by the schistosomal eggs, which increases the metabolic rate of the surrounding tissues.
Prevention
Many countries are working towards eradicating the disease. The World Health Organization is promoting these efforts. In some cases, urbanization, pollution, and the consequent destruction of snail habitat have reduced exposure, with a subsequent decrease in new infections. The elimination of snail populations using molluscicides had been attempted to prevent schistosomiasis in the past, but it was an expensive process that often only reduced but did not eliminate the snail population. The drug praziquantel is used for prevention in high-risk populations living in areas where the disease is common. The Centers for Disease Control and Prevention advises avoiding drinking or coming into contact with contaminated water in areas where schistosomiasis is common.
A 2014 review found tentative evidence that increasing access to clean water and sanitation reduces schistosome infection.
Other important preventive measures include hygiene education leading to behavioral change and sanitary engineering to ensure a safe water supply.
Preventive anthelminthic administration
For schistosomiasis control, the World Health Organization recommends preventive anthelminthic administration, which is the treatment of an entire affected population and the periodic treatment of all groups at high risk of acquiring schistosomiasis, using praziquantel. In 2019, 44.5% of people with schistosomiasis were treated globally, and 67.2% of school-aged children needing preventive chemotherapy received treatment.
Snails, dams, and prawns
For many years from the 1950s onwards, vast dams and irrigation schemes were constructed, causing a massive rise in water-borne schistosomiasis infections. Irrigation schemes can be designed to make it hard for snails to colonize the water and to reduce contact with the local population, and guidelines to that effect had been published in various United Nations documents since the 1950s, but the designers were unaware of them. The dams appear to have reduced the population of the large migratory prawn Macrobrachium, which eats the snails. After the construction of fourteen large dams, greater increases in schistosomiasis occurred in the historical habitats of native prawns than in other areas. Further, at the 1986 Diama Dam on the Senegal River, restoring prawns upstream of the dam reduced both snail density and the human schistosomiasis reinfection rate.
Integrated strategy in China
In China, the national strategy for schistosomiasis control has shifted three times since it was first initiated: transmission control strategy (from mid-1950s to early 1980s), morbidity control strategy (from mid-1980s to 2003), and the "new integrated strategy" (2004 to present). The morbidity control strategy focused on synchronous chemotherapy for humans and bovines and the new strategy developed in 2004 intervenes in the transmission pathway of schistosomiasis, mainly including replacement of bovines with machines, prohibition of grazing cattle in the grasslands, improving sanitation, installation of fecal-matter containers on boats, praziquantel drug therapy, snail control, and health education. A 2018 review found that the "new integrated strategy" was highly effective in reducing the rate of S. japonicum infection in both humans and the intermediate host snails and reduced the infection risk by 3–4 times relative to the conventional strategy.
Treatment
Two drugs, praziquantel and oxamniquine, are available for the treatment of schistosomiasis. They are considered equivalent in efficacy against S. mansoni and in safety. Because of praziquantel's lower cost per treatment, and oxamniquine's lack of efficacy against the urogenital form of the disease caused by S. haematobium, praziquantel is generally considered the first option for treatment. Praziquantel can be safely used in pregnant women and young children. The treatment objective is to cure the disease and prevent progression from the acute to the chronic form. All cases of suspected schistosomiasis should be treated regardless of presentation because the adult parasite can live in the host for years.
Schistosomiasis is treatable by taking a single dose of the drug praziquantel by mouth annually.
Praziquantel eliminates the adult schistosomes but does not kill the eggs and immature worms. Live eggs can be excreted by the infected individuals for weeks after treatment with praziquantel. The immature worms can survive and grow into adult schistosomes after praziquantel therapy. Thus, it is important to have repeated schistosomiasis testing of the stool and/or urine around 4–6 weeks after praziquantel therapy. Praziquantel treatment may be repeated to ensure complete parasite elimination.
The WHO has developed guidelines for community treatment based on the impact the disease has on children in villages in which it is common:
When a village reports more than 50 percent of children have blood in their urine, everyone in the village receives treatment.
When 20 to 50 percent of children have bloody urine, only school-age children are treated.
When fewer than 20 percent of children have symptoms, mass treatment is not implemented.
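The community-treatment thresholds above amount to a simple decision rule. A minimal sketch in Python (an illustrative helper, not WHO software; the function name and return strings are invented here):

```python
def who_treatment_strategy(percent_children_with_hematuria):
    """Map the share of children with blood in their urine to the
    WHO community-treatment strategy described above."""
    p = percent_children_with_hematuria
    if p > 50:
        return "treat everyone in the village"
    if 20 <= p <= 50:
        return "treat school-age children only"
    return "no mass treatment"
```

For example, a village reporting that 35 percent of children have bloody urine would treat school-age children only.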
Other possible treatments include a combination of praziquantel with metrifonate, artesunate, or mefloquine. A Cochrane review found tentative evidence that when used alone, metrifonate was as effective as praziquantel. Mefloquine, which has previously been used to treat and prevent malaria, was recognised in 2008–2009 to be effective against schistosomes.
Historically, antimony potassium tartrate remained the treatment of choice for schistosomiasis until the development of praziquantel in the 1980s.
Post-treatment monitoring
Osteopontin (OPN) is a promising tool for monitoring praziquantel efficacy and post-treatment fibrosis regression, as OPN expression is modulated by S. mansoni egg antigens and its levels correlate with the severity of schistosomiasis fibrosis and portal hypertension in mice and humans. Praziquantel pharmacotherapy reduces systemic OPN levels and liver collagen content in mice.
Epidemiology
The disease is found in tropical countries in Africa, the Caribbean, eastern South America, Southeast Asia, and the Middle East. S. mansoni is found in parts of South America and the Caribbean, Africa, and the Middle East; S. haematobium in Africa and the Middle East; and S. japonicum in the Far East. S. mekongi and S. intercalatum are found locally in Southeast Asia and central West Africa, respectively.
The disease is endemic in about 75 developing countries and mainly affects people living in rural agricultural and peri-urban areas.
Infection estimates
In 2010, approximately 238 million people were infected with schistosomiasis, 85 percent of whom live in Africa. An earlier estimate from 2006 had put the figure at 200 million people infected. As of the latest WHO record, 236.6 million people were infected in 2019. In many affected areas, schistosomiasis infects a large proportion of children under 14 years of age. An estimated 600 to 700 million people worldwide are at risk from the disease because they live in countries where the organism is common. In 2012, 249 million people needed treatment to prevent the disease. This likely makes schistosomiasis the most common parasitic infection; malaria is second, with about 207 million cases in 2013.
S. haematobium, the infectious agent responsible for urogenital schistosomiasis, infects over 112 million people annually in Sub-Saharan Africa alone. It is responsible for 32 million cases of dysuria, 10 million cases of hydronephrosis, and 150,000 deaths from kidney failure annually, making S. haematobium the world's deadliest schistosome.
Deaths
Estimates regarding the number of deaths vary. Worldwide, the Global Burden of Disease Study issued in 2010 estimated 12,000 direct deaths while the WHO in 2014 estimated more than 200,000 annual deaths related to schistosomiasis. Another 20 million have severe consequences from the disease. It is the most deadly of the neglected tropical diseases.
History
The most ancient evidence of schistosomiasis dates back more than 6,000 years. Studies of human skeletal remains found in northern Syria (5800–4000 BC) demonstrated evidence of a terminal-spined schistosome in the pelvic sediment of the remains. Even though this evidence comes from the Middle East, it has been suggested that the 'cradle' of schistosomes lies in the region of the African Great Lakes, an area in which both the parasites and their intermediate hosts are in an active state of evolution. Schistosomiasis is believed to have subsequently spread to Egypt through the importation of monkeys and slaves during the reign of the fifth dynasty of the Pharaohs (ca. 2494–2345 BC).
Schistosomiasis is known as bilharzia or bilharziosis in many countries, after German physician Theodor Bilharz, who first described the cause of urinary schistosomiasis in 1851.
The first physician who described the entire disease cycle was the Brazilian parasitologist Pirajá da Silva in 1908. The earliest case known of infection was discovered in 2014, belonging to a child who lived 6,200 years ago.
The disease was a common cause of death for Egyptians in the Greco-Roman Period.
In 2016, more than 200 million people needed treatment, but only 88 million people were actually treated for schistosomiasis.
Etymology
Schistosomiasis is named for the genus of parasitic flatworm Schistosoma, a term which means 'split body'. The name Bilharzia comes from Theodor Bilharz, a German pathologist working in Egypt in 1851 who first discovered these worms. The name Katayama disease comes from the Katayama district of Hiroshima Prefecture in Japan, where schistosomiasis was once endemic.
Society and culture
Schistosomiasis is endemic in Egypt, exacerbated by the country's dam and irrigation projects along the Nile. From the late 1950s through the early 1980s, infected villagers were treated with repeated injections of tartar emetic. Epidemiological evidence suggests that this campaign unintentionally contributed to the spread of hepatitis C via unclean needles. Egypt has the world's highest hepatitis C infection rate, and the infection rates in various regions of the country closely track the timing and intensity of the anti-schistosomiasis campaign.
By the early 20th century, schistosomiasis' symptom of blood in the urine was seen as a male version of menstruation in Egypt and was thus viewed as a rite of passage for boys.
Among human parasitic diseases, schistosomiasis ranks second behind malaria in socio-economic and public health importance in tropical and subtropical areas.
Research
A proposed vaccine against S. haematobium infection called "Bilhvax" underwent a phase 3 clinical trial among children in Senegal. The results, reported in 2018, showed that it was ineffective despite provoking some immune response. Using CRISPR gene editing technology, researchers decreased the symptoms due to schistosomiasis in an animal model.
Using thromboelastography, researchers at Tufts University observed that murine blood incubated with adult worms for one hour has a coagulation profile similar to that of patients who have hemophilia or are on anticoagulant drugs, suggesting that schistosomes may possess anticoagulant properties. Inhibiting this schistosome anticoagulant activity could potentially serve as a therapeutic option for schistosomiasis.
Jade
https://en.wikipedia.org/wiki/Jade

Jade is an umbrella term for two different types of decorative rocks used for jewelry or ornaments. Jade is often referred to by either of two different silicate mineral names: nephrite (a silicate of calcium and magnesium in the amphibole group of minerals), or jadeite (a silicate of sodium and aluminum in the pyroxene group of minerals). Nephrite is typically green, although it may also be yellow, white or black. Jadeite varies from white or near-colorless, through various shades of green (including an emerald green, termed 'imperial'), to lavender, yellow, orange, brown and black. Rarely it may be blue.
Both of these names refer to their use as gemstones, and each has a mineralogically more specific name. Both the amphibole jade (nephrite) and pyroxene jade are mineral aggregates (rocks) rather than mineral species. Nephrite was deprecated by the International Mineralogical Association as a mineral species name in 1978 (replaced by tremolite); the name "nephrite" remains mineralogically correct for referring to the rock. Jadeite is a legitimate mineral species, distinct from the pyroxene jade rock. In China, the name jadeite has been replaced with fei cui, the traditional Chinese name for this gem that was in use long before Damour created the name in 1863.
Jade is well known for its ornamental use in East Asian, South Asian, and Southeast Asian art. It is commonly used in Latin America, such as Mexico and Guatemala. The use of jade in Mesoamerica for symbolic and ideological ritual was influenced by its rarity and value among pre-Columbian Mesoamerican cultures, such as the Olmecs, the Maya, and other ancient civilizations of the Valley of Mexico.
Jade is classified into three main types: Type A, Type B, and Type C. Type A jade refers to natural, untreated jadeite jade, prized for its purity and vibrant colors. It is the most valuable and sought-after type, often characterized by its vivid green hues and high translucency. Type A jade is revered for its symbolism of purity, harmony, and protection in various cultures, especially in East Asia where it holds significant cultural and spiritual importance. Types B and C have been enhanced with resin and colourant respectively.
Etymology
The English word jade is derived (via French l'ejade and Latin ilia 'flanks, kidney area') from the Spanish term piedra de ijada (first recorded in 1565) or 'loin stone', from its reputed efficacy in curing ailments of the loins and kidneys. Nephrite is derived from lapis nephriticus, a Latin translation of the Spanish piedra de ijada.
History
East Asia
Prehistoric and historic China
During Neolithic times, the key known sources of nephrite jade in China for utilitarian and ceremonial jade items were the now-depleted deposits in the Ningshao area in the Yangtze River Delta (Liangzhu culture 3400–2250 BC) and in an area of the Liaoning province and Inner Mongolia (Hongshan culture 4700–2200 BC). Dushan Jade (a rock composed largely of anorthite feldspar and zoisite) was being mined as early as 6000 BC. In the Yin Ruins of the Shang Dynasty (1600 to 1050 BC) in Anyang, Dushan Jade ornaments were unearthed in the tomb of the Shang kings.
Jade was considered to be the "imperial gem" and was used to create many utilitarian and ceremonial objects, from indoor decorative items to jade burial suits. From the earliest Chinese dynasties to the present, the jade deposits most used were not only those of Khotan in the Western Chinese province of Xinjiang but other parts of China as well, such as Lantian, Shaanxi. There, white and greenish nephrite jade is found in small quarries and as pebbles and boulders in the rivers flowing from the Kuen-Lun mountain range eastward into the Takla-Makan desert area. The river jade collection is concentrated in the Yarkand, the White Jade (Yurungkash) and Black Jade (Karakash) Rivers. From the Kingdom of Khotan, on the southern leg of the Silk Road, yearly tribute payments consisting of the most precious white jade were made to the Chinese Imperial court and there worked into objets d'art by skilled artisans as jade had a status-value exceeding that of gold or silver. Jade became a favourite material for the crafting of Chinese scholars' objects, such as rests for calligraphy brushes, as well as the mouthpieces of some opium pipes, due to the belief that breathing through jade would bestow longevity upon smokers who used such a pipe.
Jadeite, with its bright emerald-green, lavender, pink, orange, yellow, red, black, white, near-colorless and brown colors was imported from Burma to China in quantity only after about 1800. The vivid white to green variety became known as fei cui (翡翠) or kingfisher jade, due to its resemblance to the feathers of the kingfisher bird. That definition was later expanded to include all other colors that the rock is found in. It quickly became almost as popular as nephrite and a favorite of Qing Dynasty's aristocracy, while scholars still had strong attachment to nephrite (white jade, or Hetian jade), which they deemed to be the symbol of a nobleman.
In the history of the art of the Chinese empire, jade has had a special significance, comparable with that of gold and diamonds in the West. Jade was used for the finest objects and cult figures, and for grave furnishings for high-ranking members of the imperial family. Due to that significance and the rising middle class in China, in 2010 the finest jade when found in nuggets of "mutton fat" jade – so-named for its marbled white consistency – could sell for $3,000 an ounce, a tenfold increase from a decade previously.
The Chinese character 玉 (yù) is used to denote the several types of stone known in English as "jade" (e.g. 玉器, jadewares), such as jadeite (硬玉, 'hard jade', another name for 翡翠) and nephrite (軟玉, 'soft jade'). While still in use, the terms "hard jade" and "soft jade" resulted from a mistranslation by a Japanese geologist, and should be avoided.
But because of the value added culturally to jades throughout Chinese history, the word has also come to refer more generally to precious or ornamental stones, and is very common in more symbolic usage as in phrases like 拋磚引玉/抛砖引玉 (lit. "casting a brick (i.e. the speaker's own words) to draw a jade (i.e. pearls of wisdom from the other party)"), 玉容 (a beautiful face; "jade countenance"), and 玉立 (slim and graceful; "jade standing upright"). The character has a similar range of meanings when appearing as a radical as parts of other characters.
Prehistoric and historic Japan
In Japan, jade was used for bracelets and was a symbol of wealth and power; leaders also used jade in rituals. It is the national stone of Japan.
Examples of use in Japan can be traced back to the early Jomon period about 7,000 years ago. XRF analysis results have revealed that all jade used in Japan since the Jomon period is from Itoigawa.
The jade culture that blossomed in ancient Japan prized green jade; jade of other colors was not used. One theory holds that green was favored because it was believed to embody fertility, life, and the soul of the earth.
Prehistoric and historic Korea
The use of jade and other greenstone was a long-term tradition in Korea ( – AD 668). Jade is found in small numbers of pit-houses and burials. The craft production of small comma-shaped and tubular "jades" using materials such as jade, microcline, jasper, etc., in southern Korea originates from the Middle Mumun Pottery Period (–550 BC). Comma-shaped jades are found on some of the gold crowns of Silla royalty (/400–668 AD) and sumptuous elite burials of the Korean Three Kingdoms. After the state of Silla united the Korean Peninsula in 668, the widespread popularisation of death rituals related to Buddhism resulted in the decline of the use of jade in burials as prestige mortuary goods.
South Asia
India
The Jain temple of Kolanpak in the Nalgonda district, Telangana, India is home to a high sculpture of Mahavira that is carved entirely out of jade. India is also noted for its craftsman tradition of using large amounts of green serpentine or false jade obtained primarily from Afghanistan in order to fashion jewellery and ornamental items such as sword hilts and dagger handles.
The Salar Jung Museum in Hyderabad has a wide range of jade hilted daggers, mostly owned by the former Sultans of Hyderabad.
Southeast Asia
Myanmar
Today, it is estimated that Myanmar is the origin of upwards of 70% of the world's supply of high-quality jadeite. Most of the jadeite mined in Myanmar is not cut for use there, instead being transported to other nations, primarily in Asia, for use in jewelry and other products. The jadeite deposits found in Kachinland, in Myanmar's northern regions, are the highest-quality jadeite in the world, considered precious by Chinese sources going as far back as the 10th century.
Jadeite in Myanmar is primarily found in the "Jade Tract", located in Lonkin Township in Kachin State in northern Myanmar, which encompasses the alluvial region of the Uyu River between the 25th and 26th parallels. Present-day extraction of jade in this region occurs at the Phakant-gyi, Maw Sisa, Tin Tin, and Khansee mines. Khansee is also the only mine that produces maw sit sit, a kosmochlor-rich jade rock. Mines at Tawmaw and Hweka are mostly exhausted. From 1964 to 1981, mining was exclusively an enterprise of the Myanmar government. In 1981, 1985, and 1995, the Gemstone laws were modified to allow increasing private enterprise. In addition to this region, there are also notable mines in the neighboring Sagaing District, near the towns of Nasibon and Natmaw and Hkamti. Sagaing is a district in Myanmar proper, not a part of the ethnic Kachin State.
Southeast Asia
Carved nephrite jade was the main commodity trade of an extensive prehistoric trading network connecting multiple areas in Southeast Asia. The nephrite jade was mined in eastern Taiwan by the animist Taiwanese indigenous peoples and processed mostly in the Philippines by the animist indigenous Filipinos. Some were also processed in Vietnam, while the peoples of Brunei, Cambodia, Indonesia, Malaysia, Singapore, and Thailand also participated in the massive animist-led nephrite jade trading network, where other commodities were also traded. Participants in the network at the time had a majority animist population. The maritime road is one of the most extensive sea-based trade networks of a single geological material in the prehistoric world. It was in existence for at least 3,000 years, where its peak production was from 2000 BCE to 500 CE, older than the Silk Road in mainland Eurasia. It began to wane during its final centuries from 500 CE until 1000 CE. The entire period of the network was a golden age for the diverse animist societies of the region.
Others
Māori
Nephrite jade in New Zealand is known as pounamu in the Māori language (often called "greenstone" in New Zealand English), and plays an important role in Māori culture. It is considered a taonga, or treasure, and therefore protected under the Treaty of Waitangi, and the exploitation of it is restricted and closely monitored. It is found only in the South Island of New Zealand, known as Te Wai Pounamu in Māori—"The [land of] Greenstone Water", or Te Wahi Pounamu—"The Place of Greenstone".
Pounamu taonga increase in mana (prestige) as they pass from one generation to another. The most prized taonga are those with known histories going back many generations. These are believed to have their own mana and were often given as gifts to seal important agreements.
Tools, weapons and ornaments were made of it; in particular adzes, the 'mere' (short club), and the hei-tiki (neck pendant). Nephrite jewellery of Maori design is widely popular with locals and tourists, although some of the jade used for these is now imported from British Columbia and elsewhere.
Pounamu taonga include tools such as toki (adzes), whao (chisels), whao whakakōka (gouges), ripi pounamu (knives), scrapers, awls, hammer stones, and drill points. Hunting tools include matau (fishing hooks) and lures, spear points, and kākā poria (leg rings for fastening captive birds); weapons such as mere (short handled clubs); and ornaments such as pendants (hei-tiki, hei matau and pekapeka), ear pendants (kuru and kapeu), and cloak pins.
Functional pounamu tools were widely worn for both practical and ornamental reasons, and continued to be worn as purely ornamental pendants (hei kakï) even after they were no longer used as tools.
Mesoamerica
Jade was a rare and valued material in pre-Columbian Mesoamerica. The only source from which the various indigenous cultures, such as the Olmec and Maya, could obtain jade was located in the Motagua River valley in Guatemala. Jade was largely an elite good, and was usually carved in various ways, whether serving as a medium upon which hieroglyphs were inscribed, or shaped into symbolic figurines. Generally, the material was highly symbolic, and it was often employed in the performance of ideological practices and rituals.
Canada
Jade was first identified in Canada by Chinese settlers in 1886 in British Columbia. At this time jade was considered worthless because the settlers were searching for gold. Jade was not commercialized in Canada until the 1970s. The mining business Loex James Ltd., which was started by two Californians, began commercial mining of Canadian jade in 1972.
Mining is done from large boulders that contain bountiful deposits of jade. Jade is exposed using diamond-tipped core drills in order to extract samples. This is done to ensure that the jade meets requirements. Hydraulic spreaders are then inserted into cleavage points in the rock so that the jade can be broken away. Once the boulders are removed and the jade is accessible, it is broken down into more manageable 10-tonne pieces using water-cooled diamond saws. The jade is then loaded onto trucks and transported to the proper storage facilities.
Russia
Russia imported jade from China for a long time, but in the 1860s its own jade deposits were found in Siberia. Today, the main deposits of jade are located in Eastern Siberia, but jade is also extracted in the Polar Urals and in the Krasnoyarsk territory (Kantegirskoye and Kurtushibinskoye deposits). Russian raw jade reserves are estimated at 336 tons.
Russian jade culture is closely connected with such jewellery production as Fabergé, whose workshops combined the green stone with gold, diamonds, emeralds, and rubies.
Siberia
In the 1950s and 1960s, there was a strong belief among many Siberians, which stemmed from tradition, that jade was part of a class of sacred objects that had life.
Mongolia
In the 1950s and 1960s, there was a strong belief among many Mongolians, which came from ancient tradition, that jade was part of a class of sacred objects that had life.
Gallery
The mineral
Nephrite and jadeite
It was not until 1863 that French mineralogist Alexis Damour determined that what was referred to as "jade" could in fact be one of two different minerals, either nephrite or jadeite.
Nephrite consists of a microcrystalline interlocking fibrous matrix of the calcium, magnesium-iron rich amphibole mineral series tremolite (calcium-magnesium)-ferroactinolite (calcium-magnesium-iron). The middle member of this series with an intermediate composition is called actinolite (the silky fibrous mineral form is one form of asbestos). The higher the iron content, the greener the colour. Tremolite occurs in metamorphosed dolomitic limestones, and actinolite in metamorphic greenschists/glaucophane schists.
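The tremolite–actinolite–ferroactinolite series described above is conventionally divided by the Mg/(Mg + Fe²⁺) ratio. A sketch assuming the commonly cited IMA boundaries of 0.9 and 0.5 (the function and the exact thresholds are illustrative assumptions, not from this article):

```python
def amphibole_series_member(mg, fe):
    """Classify a member of the tremolite-ferroactinolite series by
    its Mg/(Mg + Fe2+) ratio; more iron means a more iron-rich,
    greener member."""
    ratio = mg / (mg + fe)
    if ratio >= 0.9:
        return "tremolite"
    if ratio >= 0.5:
        return "actinolite"
    return "ferro-actinolite"
```

A composition with 70% magnesium at the relevant site falls in the intermediate actinolite field, consistent with the text's "middle member" description.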
Jadeite is a sodium- and aluminium-rich pyroxene. The more precious kind of jade, it is a microcrystalline interlocking growth of crystals (not a fibrous matrix, as nephrite is). It occurs only in metamorphic rocks.
Both nephrite and jadeite were used from prehistoric periods for hardstone carving. Jadeite has about the same hardness as quartz (between 6.0 and 7.0 on the Mohs scale), while nephrite is slightly softer (6.0 to 6.5) and so can be worked with quartz or garnet sand, and polished with bamboo or even ground jade. However, nephrite is tougher and more resistant to breakage. Among the earliest known jade artifacts excavated from prehistoric sites are simple ornaments with bead, button, and tubular shapes. Additionally, jade was used for adze heads, knives, and other weapons, which can be delicately shaped.
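The workability point above, that an abrasive can shape a stone only if it is at least as hard, can be sketched with the Mohs values quoted in the text (garnet's value is an assumption here, taken as roughly 7.5 for common abrasive garnet):

```python
# Mohs hardness: single values for abrasives, (min, max) ranges for jades.
MOHS = {
    "quartz": 7.0,
    "garnet": 7.5,   # assumed value for common abrasive garnet
    "jadeite": (6.0, 7.0),
    "nephrite": (6.0, 6.5),
}

def can_abrade(abrasive, stone):
    """An abrasive can work a stone if it is at least as hard as the
    stone's upper hardness bound."""
    return MOHS[abrasive] >= MOHS[stone][1]
```

Both quartz (7.0) and garnet clear nephrite's upper bound of 6.5, matching the text's statement that nephrite can be worked with quartz or garnet sand.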
As metal-working technologies became available, the beauty of jade made it valuable for ornaments and decorative objects.
Unusual varieties
The name nephrite derives from the Greek word meaning "kidney", because in ancient times it was believed that wearing this kind of jade around the waist could cure kidney disease.
Nephrite can be found in a creamy white form (known in China as "mutton fat" jade) as well as in a variety of light green colours, whereas jadeite shows more colour variations, including blue, brown, red, black, dark green, lavender and white. Of the two, jadeite is rarer, documented in fewer than 12 places worldwide. Translucent emerald-green jadeite is the most prized variety, both historically and today. As "quetzal" jade, bright green jadeite from Guatemala was treasured by Mesoamerican cultures, and as "kingfisher" jade, vivid green rocks from Burma became the preferred stone of post-1800 Chinese imperial scholars and rulers. Burma (Myanmar) and Guatemala are the principal sources of modern gem jadeite. In the area of Mogaung in the Myitkyina District of Upper Burma, jadeite formed a layer in the dark-green serpentine, and has been quarried and exported for well over a hundred years. Canada provides the major share of modern lapidary nephrite.
Enhancement
Jade may be enhanced (sometimes called "stabilized"). Some merchants will refer to these as grades, but degree of enhancement is different from colour and texture quality. In other words, Type A jadeite is not enhanced but can have poor colour and texture. There are three main methods of enhancement, sometimes referred to as the ABC Treatment System:
Type A jadeite has not been treated in any way except surface waxing.
Type B treatment involves exposing a promising but stained piece of jadeite to chemical bleaches and/or acids and impregnating it with a clear polymer resin. This results in a significant improvement of transparency and colour of the material. Currently, infrared spectroscopy is the most accurate test for the detection of polymer in jadeite.
Type C jade has been artificially stained or dyed. The effects are somewhat uncontrollable and may result in a dull brown. In any case, translucency is usually lost.
B+C jade is a combination of B and C: it has been both impregnated and artificially stained.
Type D jade refers to a composite stone such as a doublet comprising a jade top with a plastic backing.
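The ABC Treatment System above is, in effect, a small lookup table. A sketch (the dictionary and function are illustrative; the descriptions paraphrase the list above):

```python
# Enhancement grades from the "ABC Treatment System" described above.
JADE_TREATMENT_TYPES = {
    "A": "natural; no treatment other than surface waxing",
    "B": "bleached/acid-treated and impregnated with clear polymer resin",
    "C": "artificially stained or dyed",
    "B+C": "both resin-impregnated and artificially stained",
    "D": "composite stone, e.g. a jade top on a plastic backing",
}

def describe_treatment(grade):
    """Return the enhancement description for a treatment grade."""
    return JADE_TREATMENT_TYPES[grade.upper()]
```

Note that grade is independent of quality: a Type A piece is untreated but may still have poor colour and texture.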
Industry
Myanmar
Machine tool
https://en.wikipedia.org/wiki/Machine%20tool

A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". It is a power-driven metal-cutting machine that provides the needed relative motion between the cutting tool and the job, changing the size and shape of the job material.
The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools.
Today machine tools are typically powered by means other than human muscle (e.g., electrically, hydraulically, or via line shaft), and are used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation.
With their inherent precision, machine tools enabled the economical production of interchangeable parts.
Nomenclature and key concepts, interrelated
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels may or may not fall under this definition, depending on how one views the headstock spindle itself; but the earliest historical records of a lathe with direct mechanical control of the cutting tool's path are of a screw-cutting lathe dating to about 1483. This lathe "produced screw threads out of wood and employed a true compound slide rest".
The mechanical toolpath guidance grew out of various root concepts:
First is the spindle concept itself, which constrains workpiece or tool movement to rotation around a fixed axis. This ancient concept predates machine tools per se; the earliest lathes and potter's wheels incorporated it for the workpiece, but the movement of the tool itself on these machines was entirely freehand.
The machine slide (tool way), which has many forms, such as dovetail ways, box ways, or cylindrical column ways. Machine slides constrain tool or workpiece movement linearly. If a stop is added, the length of the line can also be accurately controlled. (Machine slides are essentially a subset of linear bearings, although the language used to classify these various machine elements may be defined differently by some users in some contexts, and some elements may be distinguished by contrasting with others)
Tracing, which involves following the contours of a model or template and transferring the resulting motion to the toolpath.
Cam operation, which is related in principle to tracing but can be a step or two removed from the traced element's matching the reproduced element's final shape. For example, several cams, no one of which directly matches the desired output shape, can actuate a complex toolpath by creating component vectors that add up to a net toolpath.
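The cam idea above, where several cams, none matching the output shape, sum to a net toolpath, can be sketched with two harmonic cam motions whose component vectors trace an ellipse (the profiles are invented for illustration):

```python
import math

def cam_x(t):
    return 3.0 * math.cos(t)  # cam 1 drives the x slide

def cam_y(t):
    return 1.0 * math.sin(t)  # cam 2 drives the y slide

def toolpath(t):
    """Net tool position: the vector sum of the per-axis cam motions.
    Neither cam profile alone matches the resulting elliptical path."""
    return (cam_x(t), cam_y(t))
```

Each cam individually produces only a back-and-forth harmonic stroke, yet together the component vectors add up to a closed elliptical toolpath.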
Van der Waals force between like materials is high. Freehand lapping of square plates against one another produces square, flat, machine-tool-building reference components accurate to millionths of an inch, but of nearly no variety. The process of feature replication allows the flatness and squareness of a milling machine cross-slide assembly, or the roundness, lack of taper, and squareness of the two axes of a lathe, to be transferred to a machined workpiece with accuracy and precision better than a thousandth of an inch, though not as fine as millionths of an inch. As the fit between sliding parts of a made product, machine, or machine tool approaches this critical thousandth of an inch, lubrication and capillary action combine to prevent Van der Waals force from welding like metals together, extending the lubricated life of sliding parts by a factor of thousands to millions; the damage caused by oil depletion in a conventional automotive engine is an accessible demonstration of the need, and in aerospace design, like-to-unlike material pairings are used along with solid lubricants to prevent Van der Waals welding from destroying mating surfaces. Given the modulus of elasticity of metals, the range of fit tolerances near one thousandth of an inch spans the range of constraint between, at one extreme, permanent assembly of two mating parts and, at the other, a free sliding fit of those same two parts.
Abstractly programmable toolpath guidance began with mechanical solutions, such as in musical box cams and Jacquard looms. The convergence of programmable mechanical control with machine tool toolpath control was delayed many decades, in part because the programmable control methods of musical boxes and looms lacked the rigidity for machine tool toolpaths. Later, electromechanical solutions (such as servos) and soon electronic solutions (including computers) were added, leading to numerical control and computer numerical control.
When considering the difference between freehand toolpaths and machine-constrained toolpaths, the concepts of accuracy and precision, efficiency, and productivity become important in understanding why the machine-constrained option adds value.
Matter-additive, matter-preserving, and matter-subtractive manufacturing can proceed in sixteen ways. First, the work may be held either in a hand or in a clamp; second, the tool may be held either in a hand or in a clamp; third, the energy can come either from the hand(s) holding the tool and/or the work, or from some external source, such as a foot treadle worked by the same worker, or a motor; and finally, the control can come either from the hand(s) holding the tool and/or the work, or from some other source, including computer numerical control. With two choices for each of four parameters, sixteen types of manufacturing are enumerated, where matter-additive might mean painting on canvas as readily as 3D printing under computer control, matter-preserving might mean forging at the coal fire as readily as stamping license plates, and matter-subtractive might mean casually whittling a pencil point as readily as precision grinding the final form of a laser-deposited turbine blade.
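The enumeration above can be sketched in a few lines; the parameter labels here are illustrative, not the author's:

```python
from itertools import product

# Two choices for each of four binary parameters gives 2**4 = 16 types
# of manufacturing, as described in the text above.
PARAMS = {
    "work holding":  ("hand", "clamp"),
    "tool holding":  ("hand", "clamp"),
    "energy source": ("hands on tool/work", "external (treadle, motor)"),
    "control":       ("hands on tool/work", "external (cam, CNC)"),
}

types = [dict(zip(PARAMS, combo)) for combo in product(*PARAMS.values())]
assert len(types) == 16  # the sixteen enumerated manufacturing types
```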
A precise description of what a machine tool is and does at any instant is given by a 12-component vector relating the linear and rotational degrees of freedom of the single workpiece and the single tool contacting it. To visualize this vector, it is convenient to arrange it in four rows of three columns, with the columns labeled x, y, and z and the rows labeled "spin work", "move work", "spin tool", and "move tool". The position of the labels is arbitrary; there is no agreement in the mechanical engineering literature on their order, but there are 12 degrees of freedom in a machine tool. This description applies to an instant in time, which may be a preparatory moment before the tool makes contact with the workpiece, or an engaged moment during which contact between work and tool requires an input of rather large amounts of power to get work done, which is why machine tools are large, heavy, and stiff. Since these vectors describe instantaneous degrees of freedom, the vector structure is capable of expressing the changing mode of a machine tool as well as its fundamental structure, in the following way: imagine a lathe spinning a cylinder on a horizontal axis with a tool ready to cut a face on that cylinder in some preparatory moment. The operator of such a lathe would lock the axis along the carriage of the lathe, establishing a new vector condition with a zero in that slide position for the tool.
Then the operator would unlock the cross-slide axis of the lathe, assuming the example lathe is so equipped, apply some method of traversing the facing tool across the face of the cylinder, and select a depth of cut and rotational speed that keep the cutting load within the power range of the motor driving the lathe. So the answer to what a machine tool is, is a very simple answer, but it is highly technical and is independent of the history of machine tools.
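The 4×3 arrangement described above can be sketched as data; the row and column names follow the text, but the encoding (1 for an engaged freedom, 0 for a locked one) is an illustrative assumption:

```python
# The 12-component degree-of-freedom vector arranged as four rows of
# three columns, per the description in the text.
rows = ["spin work", "move work", "spin tool", "move tool"]
cols = ["x", "y", "z"]

# 1 = degree of freedom engaged, 0 = locked. For the facing example:
# the work spins about the spindle axis while the tool feeds along the
# cross-slide axis; every other freedom is locked for rigidity.
engaged = {("spin work", "z"): 1, ("move tool", "x"): 1}
vector = [[engaged.get((r, c), 0) for c in cols] for r in rows]
assert sum(map(sum, vector)) == 2  # only two of the twelve freedoms active
```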
The preceding gives an answer to what machine tools are; we may also consider what they do. Machine tools produce finished surfaces. They may produce any finish from an arbitrary degree of very rough work to a specular, optical-grade finish beyond which further improvement is moot. Machine tools produce the surfaces comprising the features of machine parts by removing chips, which may be very coarse or even as fine as dust. Every machine tool supports its removal process with a stiff, redundant, and therefore vibration-resisting structure, because each chip is removed in a semi-asynchronous way, creating multiple opportunities for vibration to interfere with precision.
Humans are generally quite talented in their freehand movements; the drawings, paintings, and sculptures of artists such as Michelangelo or Leonardo da Vinci, and of countless other talented people, show that human freehand toolpath has great potential. The value that machine tools added to these human talents is in the areas of rigidity (constraining the toolpath despite thousands of newtons (pounds) of force fighting against the constraint), accuracy and precision, efficiency, and productivity. With a machine tool, toolpaths that no human muscle could constrain can be constrained; and toolpaths that are technically possible with freehand methods, but would require tremendous time and skill to execute, can instead be executed quickly and easily, even by people with little freehand talent (because the machine takes care of it). The latter aspect of machine tools is often referred to by historians of technology as "building the skill into the tool", in contrast to the toolpath-constraining skill being in the person who wields the tool. As an example, it is physically possible to make interchangeable screws, bolts, and nuts entirely with freehand toolpaths. But it is economically practical to make them only with machine tools.
In the 1930s, the U.S. National Bureau of Economic Research (NBER) referenced the definition of a machine tool as "any machine operating by other than hand power which employs a tool to work on metal".
The narrowest colloquial sense of the term reserves it only for machines that perform metal cutting—in other words, the many kinds of [conventional] machining and grinding. These processes are a type of deformation that produces swarf. However, economists use a slightly broader sense that also includes metal deformation of other types that squeeze the metal into shape without cutting off swarf, such as rolling, stamping with dies, shearing, swaging, riveting, and others. Thus presses are usually included in the economic definition of machine tools. For example, this is the breadth of definition used by Max Holland in his history of Burgmaster and Houdaille, which is also a history of the machine tool industry in general from the 1940s through the 1980s; he was reflecting the sense of the term used by Houdaille itself and other firms in the industry. Many reports on machine tool export and import and similar economic topics use this broader definition.
The colloquial sense implying [conventional] metal cutting is also growing obsolete because of changing technology over the decades. The many more recently developed processes labeled "machining", such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, or even plasma cutting and water jet cutting, are often performed by machines that could most logically be called machine tools. In addition, some of the newly developed additive manufacturing processes, which are not about cutting away material but rather about adding it, are done by machines that are likely to end up labeled, in some cases, as machine tools. In fact, machine tool builders are already developing machines that include both subtractive and additive manufacturing in one work envelope, and retrofits of existing machines are underway.
The natural language use of the terms varies, with subtle connotative boundaries. Many speakers resist using the term "machine tool" to refer to woodworking machinery (joiners, table saws, routing stations, and so on), but it is difficult to maintain any true logical dividing line, and therefore many speakers accept a broad definition. It is common to hear machinists refer to their machine tools simply as "machines". Usually the mass noun "machinery" encompasses them, but sometimes it is used to imply only those machines that are being excluded from the definition of "machine tool". This is why the machines in a food-processing plant, such as conveyors, mixers, vessels, dividers, and so on, may be labeled "machinery", while the machines in the factory's tool and die department are instead called "machine tools" in contradistinction.
Regarding the 1930s NBER definition quoted above, one could argue that its specificity to metal is obsolete, as it is quite common today for particular lathes, milling machines, and machining centers (definitely machine tools) to work exclusively on plastic cutting jobs throughout their whole working lifespan. Thus the NBER definition above could be expanded to say "which employs a tool to work on metal or other materials of high hardness". And its specificity to "operating by other than hand power" is also problematic, as machine tools can be powered by people if appropriately set up, such as with a treadle (for a lathe) or a hand lever (for a shaper). Hand-powered shapers are clearly "the 'same thing' as shapers with electric motors except smaller", and it is trivial to power a micro lathe with a hand-cranked belt pulley instead of an electric motor. Thus one can question whether power source is truly a key distinguishing concept; but for economics purposes, the NBER's definition made sense, because most of the commercial value of the existence of machine tools comes about via those that are powered by electricity, hydraulics, and so on. Such are the vagaries of natural language and controlled vocabulary, both of which have their places in the business world.
History
Forerunners of machine tools included bow drills and potter's wheels, which had existed in ancient Egypt prior to 2500 BC, and lathes, known to have existed in multiple regions of Europe since at least 1000 to 500 BC. But it was not until the later Middle Ages and the Age of Enlightenment that the modern concept of a machine tool—a class of machines used as tools in the making of metal parts, and incorporating machine-guided toolpath—began to evolve. Clockmakers of the Middle Ages and renaissance men such as Leonardo da Vinci helped expand humans' technological milieu toward the preconditions for industrial machine tools. During the 18th and 19th centuries, and even in many cases in the 20th, the builders of machine tools tended to be the same people who would then use them to produce the end products (manufactured goods). However, from these roots also evolved an industry of machine tool builders as we define them today, meaning people who specialize in building machine tools for sale to others.
Historians of machine tools often focus on a handful of major industries that most spurred machine tool development. In order of historical emergence, they have been firearms (small arms and artillery); clocks; textile machinery; steam engines (stationary, marine, rail, and otherwise) (the story of how Watt's need for an accurate cylinder spurred Boulton's boring machine is discussed by Roe); sewing machines; bicycles; automobiles; and aircraft. Others could be included in this list as well, but they tend to be connected with the root causes already listed. For example, rolling-element bearings are an industry of themselves, but this industry's main drivers of development were the vehicles already listed—trains, bicycles, automobiles, and aircraft; and other industries, such as tractors, farm implements, and tanks, borrowed heavily from those same parent industries.
Machine tools filled a need created by textile machinery during the Industrial Revolution in England in the middle to late 1700s. Until that time, machinery was made mostly from wood, often including gearing and shafts. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron. Cast iron could be cast in molds for larger parts, such as engine cylinders and gears, but was difficult to work with a file and could not be hammered. Red hot wrought iron could be hammered into shapes. Room temperature wrought iron was worked with a file and chisel and could be made into gears and other complex parts; however, hand working lacked precision and was a slow and expensive process.
James Watt was unable to obtain an accurately bored cylinder for his first steam engine, trying for several years until John Wilkinson invented a suitable boring machine in 1774; it bored Boulton & Watt's first commercial engine in 1776.
The advance in the accuracy of machine tools can be traced to Henry Maudslay and was refined by Joseph Whitworth. That Maudslay had established the manufacture and use of master plane gages in his shop (Maudslay & Field), located on Westminster Road south of the Thames in London, by about 1809 was attested to by James Nasmyth, who was employed by Maudslay in 1829 and documented their use in his autobiography.
The process by which the master plane gages were produced dates back to antiquity but was refined to an unprecedented degree in the Maudslay shop. The process begins with three square plates each given an identification (ex., 1,2 and 3). The first step is to rub plates 1 and 2 together with a marking medium (called bluing today) revealing the high spots which would be removed by hand scraping with a steel scraper, until no irregularities were visible. This would not produce true plane surfaces but a "ball and socket" concave-concave and convex-convex fit, as this mechanical fit, like two perfect planes, can slide over each other and reveal no high spots. The rubbing and marking are repeated after rotating 2 relative to 1 by 90 degrees to eliminate concave-convex "potato-chip" curvature. Next, plate number 3 is compared and scraped to conform to plate number 1 in the same two trials. In this manner plates number 2 and 3 would be identical. Next plates number 2 and 3 would be checked against each other to determine what condition existed, either both plates were "balls" or "sockets" or "chips" or a combination. These would then be scraped until no high spots existed and then compared to plate number 1. Repeating this process of comparing and scraping the three plates could produce plane surfaces accurate to within millionths of an inch (the thickness of the marking medium).
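The logic of the three-plate method can be sketched as an idealized simulation (assumptions mine: perfect scraping, one-dimensional profiles, and the 90-degree rotation step omitted), showing why comparing plates 2 and 3 exposes the hidden common curvature shared with plate 1:

```python
import random

random.seed(2)
N = 64
# Plate 1 starts with an arbitrary non-flat profile. When two plates are
# mated face to face (one flipped over), the gap at position i is a[i] + b[i].
p1 = [random.uniform(-1, 1) for _ in range(N)]

# Scrape 2 to fit 1, then 3 to fit 1 (idealized: perfect complement).
# Plates 1-2 and 1-3 now mate perfectly, but all may share a curvature.
p2 = [-h for h in p1]
p3 = [-h for h in p1]

# Mating 2 against 3 reveals the hidden common error: gap = -2 * p1.
gap = [a + b for a, b in zip(p2, p3)]
# Scrape half of the revealed error off each of plates 2 and 3.
p2 = [a - g / 2 for a, g in zip(p2, gap)]
p3 = [b - g / 2 for b, g in zip(p3, gap)]
# Finally scrape 1 to fit the now-flat plate 2.
p1 = [-h for h in p2]

assert max(map(abs, p1 + p2 + p3)) < 1e-12  # all three plates end flat
```

Under these idealizations the procedure converges in one round; in practice the comparing and scraping are repeated, as the text describes, until the residual error is within the thickness of the marking medium.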
The traditional method of producing the surface gages used an abrasive powder rubbed between the plates to remove the high spots, but it was Whitworth who contributed the refinement of replacing the grinding with hand scraping. Sometime after 1825, Whitworth went to work for Maudslay and it was there that Whitworth perfected the hand scraping of master surface plane gages. In his paper presented to the British Association for the Advancement of Science at Glasgow in 1840, Whitworth pointed out the inherent inaccuracy of grinding due to no control and thus unequal distribution of the abrasive material between the plates which would produce uneven removal of material from the plates.
With the creation of master plane gages of such high accuracy, all critical components of machine tools (i.e., guiding surfaces such as machine ways) could then be compared against them and scraped to the desired accuracy.
The first machine tools offered for sale (i.e., commercially available) were constructed by Matthew Murray in England around 1800. Others, such as Henry Maudslay, James Nasmyth, and Joseph Whitworth, soon followed the path of expanding their entrepreneurship from manufactured end products and millwright work into the realm of building machine tools for sale.
Important early machine tools included the slide rest lathe, screw-cutting lathe, turret lathe, milling machine, pattern tracing lathe, shaper, and metal planer, which were all in use before 1840. With these machine tools the decades-old objective of producing interchangeable parts was finally realized. An important early example of something now taken for granted was the standardization of screw fasteners such as nuts and bolts. Before about the beginning of the 19th century, these were used in pairs, and even screws of the same machine were generally not interchangeable. Methods were developed to cut screw thread to a greater precision than that of the feed screw in the lathe being used. This led to the bar length standards of the 19th and early 20th centuries.
American production of machine tools was a critical factor in the Allies' victory in World War II. Production of machine tools tripled in the United States in the war. No war was more industrialized than World War II, and it has been written that the war was won as much by machine shops as by machine guns.
The production of machine tools is concentrated in about ten countries worldwide, including China, Japan, Germany, Italy, South Korea, Taiwan, Switzerland, the United States, Austria, and Spain. Machine tool innovation continues in several public and private research centers worldwide.
Drive power sources
Machine tools can be powered from a variety of sources. Human and animal power (via cranks, treadles, treadmills, or treadwheels) were used in the past, as was water power (via water wheel); however, following the development of high-pressure steam engines in the mid 19th century, factories increasingly used steam power. Factories also used hydraulic and pneumatic power. Many small workshops continued to use water, human and animal power until electrification after 1900.
Today most machine tools are powered by electricity; hydraulic and pneumatic power are sometimes used, but this is uncommon.
Automatic control
Machine tools can be operated manually, or under automatic control. Early machines used flywheels to stabilize their motion and had complex systems of gears and levers to control the machine and the piece being worked on. Soon after World War II, the numerical control (NC) machine was developed. NC machines used a series of numbers punched on paper tape or punched cards to control their motion. In the 1960s, computers were added to give even more flexibility to the process. Such machines became known as computerized numerical control (CNC) machines. NC and CNC machines could precisely repeat sequences over and over, and could produce much more complex pieces than even the most skilled tool operators.
Before long, the machines could automatically change the specific cutting and shaping tools that were being used. For example, a drill machine might contain a magazine with a variety of drill bits for producing holes of various sizes. Previously, either machine operators would usually have to manually change the bit or move the work piece to another station to perform these different operations. The next logical step was to combine several different machine tools together, all under computer control. These are known as machining centers, and have dramatically changed the way parts are made.
Examples
Examples of machine tools are:
Broaching machine
Drill press
Gear shaper
Hobbing machine
Hone
Lathe
Screw machines
Milling machine
Shear (sheet metal)
Shaper
Saws (e.g., bandsaw)
Planer
Stewart platform mills
Grinding machines
Multitasking machines (MTMs)—CNC machine tools with many axes that combine turning, milling, grinding, and material handling into one highly automated machine tool
When fabricating or shaping parts, several techniques are used to remove unwanted metal. Among these are:
Electrical discharge machining
Grinding (abrasive cutting)
Multiple edge cutting tools
Single edge cutting tools
Other techniques are used to add desired material. Devices that fabricate components by selective addition of material are called rapid prototyping machines.
Adverse effects on humans
Adverse effects mitigations
Regulations
Machine tool manufacturing industry
The worldwide market for machine tools was approximately $81 billion in production in 2014 according to a survey by market research firm Gardner Research. The largest producer of machine tools was China with $23.8 billion of production followed by Germany and Japan at neck and neck with $12.9 billion and $12.88 billion respectively. South Korea and Italy rounded out the top 5 producers with revenue of $5.6 billion and $5 billion respectively.
Safety
Hölder's inequality

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of L^p spaces.
The numbers p and q above are said to be Hölder conjugates of each other. The special case p = q = 2 gives a form of the Cauchy–Schwarz inequality. Hölder's inequality holds even if ‖fg‖_1 is infinite, the right-hand side also being infinite in that case. Conversely, if f is in L^p(μ) and g is in L^q(μ), then the pointwise product fg is in L^1(μ).
Hölder's inequality is used to prove the Minkowski inequality, which is the triangle inequality in the space L^p(μ), and also to establish that L^q(μ) is the dual space of L^p(μ) for p ∈ [1, ∞).
Hölder's inequality (in a slightly different form) was first found by Leonard James Rogers in 1888. Inspired by Rogers' work, Hölder gave another proof in 1889 as part of a work developing the concept of convex and concave functions and introducing Jensen's inequality, which was in turn named for work of Johan Jensen building on Hölder's work.
Remarks
Conventions
The brief statement of Hölder's inequality uses some conventions.
In the definition of Hölder conjugates, 1/∞ means zero.
If 1 ≤ p, q < ∞, then ‖f‖_p and ‖g‖_q stand for the (possibly infinite) expressions
(∫_S |f|^p dμ)^(1/p) and (∫_S |g|^q dμ)^(1/q).
If p = ∞, then ‖f‖_∞ stands for the essential supremum of |f|, similarly for ‖g‖_∞.
The notation ‖f‖_p with 1 ≤ p ≤ ∞ is a slight abuse, because in general it is only a norm of f if ‖f‖_p is finite and f is considered as an equivalence class of μ-almost everywhere equal functions. If f ∈ L^p(μ) and g ∈ L^q(μ), then the notation is adequate.
On the right-hand side of Hölder's inequality, 0 × ∞ as well as ∞ × 0 means 0. Multiplying a nonzero quantity by ∞ gives ∞.
Estimates for integrable products
As above, let f and g denote measurable real- or complex-valued functions defined on S. If ‖fg‖_1 is finite, then the pointwise products of f with g and with its complex conjugate function are μ-integrable, the estimate
|∫_S f g dμ| ≤ ∫_S |f g| dμ
and the similar one for f ḡ hold, and Hölder's inequality can be applied to the right-hand side. In particular, if f and g are in the Hilbert space L^2(μ), then Hölder's inequality for p = q = 2 implies
|⟨f, g⟩| ≤ ‖f‖_2 ‖g‖_2,
where the angle brackets refer to the inner product of L^2(μ). This is also called Cauchy–Schwarz inequality, but requires for its statement that ‖f‖_2 and ‖g‖_2 are finite to make sure that the inner product of f and g is well defined. We may recover the original inequality (for the case p = 2) by using the functions |f| and |g| in place of f and g.
Generalization for probability measures
If (S, Σ, μ) is a probability space, then p, q ∈ (1, ∞] just need to satisfy 1/p + 1/q ≤ 1, rather than being Hölder conjugates. A combination of Hölder's inequality and Jensen's inequality implies that
‖fg‖_1 ≤ ‖f‖_p ‖g‖_q
for all measurable real- or complex-valued functions f and g on S.
Notable special cases
For the following cases assume that p and q are in the open interval (1, ∞) with 1/p + 1/q = 1.
Counting measure
For the n-dimensional Euclidean space, when the set S is {1, …, n} with the counting measure, we have
∑_{k=1}^n |x_k y_k| ≤ (∑_{k=1}^n |x_k|^p)^(1/p) (∑_{k=1}^n |y_k|^q)^(1/q) for all x, y ∈ R^n or C^n.
Often the following practical form of this is used, for any :
For more than two sums, the following generalisation (, ) holds, with real positive exponents and :
Equality holds iff .
If S = N with the counting measure, then we get Hölder's inequality for sequence spaces:
∑_{k=1}^∞ |x_k y_k| ≤ (∑_{k=1}^∞ |x_k|^p)^(1/p) (∑_{k=1}^∞ |y_k|^q)^(1/q).
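A quick numerical spot-check of the sequence form is straightforward (a sketch; the choice p = 3 and the random data are arbitrary):

```python
import random

def lp_norm(xs, p):
    # (sum |x_k|^p)^(1/p) for a finite sequence
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

random.seed(0)
p = 3.0
q = p / (p - 1)  # Hölder conjugate: 1/p + 1/q = 1
x = [random.uniform(-1, 1) for _ in range(50)]
y = [random.uniform(-1, 1) for _ in range(50)]

lhs = sum(abs(a * b) for a, b in zip(x, y))
rhs = lp_norm(x, p) * lp_norm(y, q)
assert abs(1 / p + 1 / q - 1) < 1e-12
assert lhs <= rhs + 1e-12  # Hölder's inequality for sequences
```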
Lebesgue measure
If S is a measurable subset of R^n with the Lebesgue measure, and f and g are measurable real- or complex-valued functions on S, then Hölder's inequality is
∫_S |f(x) g(x)| dx ≤ (∫_S |f(x)|^p dx)^(1/p) (∫_S |g(x)|^q dx)^(1/q).
Probability measure
For the probability space (Ω, F, P), let E denote the expectation operator. For real- or complex-valued random variables X and Y on Ω, Hölder's inequality reads
E|XY| ≤ (E|X|^p)^(1/p) (E|Y|^q)^(1/q).
Let 0 < r < s and define p = s/r. Then q = p/(p − 1) is the Hölder conjugate of p. Applying Hölder's inequality to the random variables |X|^r and 1_Ω, we obtain
E|X|^r ≤ (E|X|^s)^(r/s).
In particular, if the sth absolute moment is finite, then the rth absolute moment is finite, too. (This also follows from Jensen's inequality.)
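The resulting monotonicity of normalized absolute moments can be illustrated numerically. The sketch below applies it to the empirical distribution of simulated samples, which is itself a probability measure, so the inequality holds exactly up to floating-point error:

```python
import random

random.seed(1)
z = [random.gauss(0, 1) for _ in range(10000)]

def abs_moment(samples, r):
    # E|Z|^r under the empirical distribution of the samples
    return sum(abs(v) ** r for v in samples) / len(samples)

# (E|Z|^r)^(1/r) is nondecreasing in r, so a finite sth moment
# bounds every rth moment with r < s.
r, s = 1.5, 3.0
assert abs_moment(z, r) ** (1 / r) <= abs_moment(z, s) ** (1 / s) + 1e-12
```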
Product measure
For two σ-finite measure spaces and define the product measure space by
where is the Cartesian product of and , the arises as product σ-algebra of and , and denotes the product measure of and . Then Tonelli's theorem allows us to rewrite Hölder's inequality using iterated integrals: If and are real- or complex-valued functions on the Cartesian product , then
This can be generalized to more than two measure spaces.
Vector-valued functions
Let denote a measure space and suppose that and are -measurable functions on , taking values in the -dimensional real- or complex Euclidean space. By taking the product with the counting measure on , we can rewrite the above product measure version of Hölder's inequality in the form
If the two integrals on the right-hand side are finite, then equality holds if and only if there exist real numbers , not both of them zero, such that
for -almost all in .
This finite-dimensional version generalizes to functions and taking values in a normed space which could be for example a sequence space or an inner product space.
Proof of Hölder's inequality
There are several proofs of Hölder's inequality; the main idea in the following is Young's inequality for products.
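For reference, Young's inequality for products in its standard form, for Hölder conjugates p and q:

```latex
ab \le \frac{a^p}{p} + \frac{b^q}{q},
\qquad a, b \ge 0,\quad \frac{1}{p} + \frac{1}{q} = 1,\quad 1 < p, q < \infty,
```

with equality if and only if a^p = b^q. Hölder's inequality follows by applying this pointwise to |f|/‖f‖_p and |g|/‖g‖_q and integrating over S.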
Alternative proof using Jensen's inequality:
We could also bypass use of both Young's and Jensen's inequalities. The proof below also explains why and where the Hölder exponent comes in naturally.
Extremal equality
Statement
Assume that 1 ≤ p < ∞ and let q denote the Hölder conjugate. Then for every f ∈ L^p(μ),
‖f‖_p = max { |∫_S f g dμ| : g ∈ L^q(μ), ‖g‖_q ≤ 1 },
where max indicates that there actually is a g maximizing the right-hand side. When p = ∞ and if each set A in the σ-field Σ with μ(A) = ∞ contains a subset B ∈ Σ with 0 < μ(B) < ∞ (which is true in particular when μ is σ-finite), then
‖f‖_∞ = sup { |∫_S f g dμ| : g ∈ L^1(μ), ‖g‖_1 ≤ 1 }.
Proof of the extremal equality:
Remarks and examples
The equality for fails whenever there exists a set of infinite measure in the -field with that has no subset that satisfies: (the simplest example is the -field containing just the empty set and and the measure with ) Then the indicator function satisfies but every has to be -almost everywhere constant on because it is -measurable, and this constant has to be zero, because is -integrable. Therefore, the above supremum for the indicator function is zero and the extremal equality fails.
For the supremum is in general not attained. As an example, let and the counting measure. Define:
Then For with let denote the smallest natural number with Then
Applications
The extremal equality is one of the ways for proving the triangle inequality for all and in , see Minkowski inequality.
Hölder's inequality implies that every defines a bounded (or continuous) linear functional on by the formula
The extremal equality (when true) shows that the norm of this functional as element of the continuous dual space coincides with the norm of in (see also the article).
Generalization with more than two functions
Statement
Assume that r ∈ (0, ∞] and p_1, …, p_n ∈ (0, ∞] are such that
1/p_1 + ⋯ + 1/p_n = 1/r,
where 1/∞ is interpreted as 0 in this equation. Then for all measurable real or complex-valued functions f_1, …, f_n defined on S,
‖f_1 ⋯ f_n‖_r ≤ ‖f_1‖_{p_1} ⋯ ‖f_n‖_{p_n},
where we interpret any product with a factor of ∞ as ∞ if all factors are positive, but the product is 0 if any factor is 0.
In particular, if f_k ∈ L^{p_k}(μ) for all k ∈ {1, …, n}, then f_1 ⋯ f_n ∈ L^r(μ).
Note: For r ∈ (0, 1), contrary to the notation, ‖·‖_r is in general not a norm because it doesn't satisfy the triangle inequality.
Proof of the generalization:
Interpolation
Let and let denote weights with . Define as the weighted harmonic mean, that is,
Given measurable real- or complex-valued functions on , then the above generalization of Hölder's inequality gives
In particular, taking gives
Specifying further and , in the case we obtain the interpolation result
An application of Hölder gives
Both Littlewood and Lyapunov imply that if then for all
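The interpolation inequality referred to in this section, in its standard log-convexity form (restated here because the displays are missing from the text):

```latex
\|f\|_{p_\theta} \le \|f\|_{p_0}^{1-\theta}\,\|f\|_{p_1}^{\theta},
\qquad \frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1},
\quad \theta \in (0, 1).
```

It follows from the two-exponent generalization above by writing |f| = |f|^{1-\theta} |f|^{\theta} and applying Hölder with the conjugate pair p_0/((1-\theta) p_\theta) and p_1/(\theta p_\theta).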
Reverse Hölder inequalities
Two functions
Assume that and that the measure space satisfies . Then for all measurable real- or complex-valued functions and on such that for all ,
If
then the reverse Hölder inequality is an equality if and only if
Note: The expressions:
and
are not norms, they are just compact notations for
Multiple functions
The Reverse Hölder inequality (above) can be generalized to the case of multiple functions if all but one conjugate is negative.
That is,
Let and be such that (hence ). Let be measurable functions for . Then
This follows from the symmetric form of the Hölder inequality (see below).
Symmetric forms of Hölder inequality
It was observed by Aczél and Beckenbach that Hölder's inequality can be put in a more symmetric form, at the price of introducing an extra vector (or function):
Let be vectors with positive entries and such that for all . If are nonzero real numbers such that , then:
if all but one of are positive;
if all but one of are negative.
The standard Hölder inequality follows immediately from this symmetric form (and in fact is easily seen to be equivalent to it). The symmetric statement also implies the reverse Hölder inequality (see above).
The result can be extended to multiple vectors:
Let be vectors in with positive entries and such that for all . If are nonzero real numbers such that , then:
if all but one of the numbers are positive;
if all but one of the numbers are negative.
As in the standard Hölder inequalities, there are corresponding statements for infinite sums and integrals.
Conditional Hölder inequality
Let (Ω, F, P) be a probability space, G ⊆ F a sub-σ-algebra, and p, q ∈ (1, ∞) Hölder conjugates, meaning that 1/p + 1/q = 1. Then for all real- or complex-valued random variables X and Y on Ω,
E[|XY| | G] ≤ (E[|X|^p | G])^(1/p) (E[|Y|^q | G])^(1/q)  P-almost surely.
Remarks:
If a non-negative random variable has infinite expected value, then its conditional expectation is defined by
On the right-hand side of the conditional Hölder inequality, 0 times ∞ as well as ∞ times 0 means 0. Multiplying with ∞ gives ∞.
Proof of the conditional Hölder inequality:
Hölder's inequality for increasing seminorms
Let be a set and let be the space of all complex-valued functions on . Let be an increasing seminorm on meaning that, for all real-valued functions we have the following implication (the seminorm is also allowed to attain the value ∞):
Then:
where the numbers p and q are Hölder conjugates.
Remark: If is a measure space and is the upper Lebesgue integral of then the restriction of to all functions gives the usual version of Hölder's inequality.
Distances based on Hölder inequality
Hölder inequality can be used to define statistical dissimilarity measures between probability distributions. Those Hölder divergences are projective: They do not depend on the normalization factor of densities.
OLED

An organic light-emitting diode (OLED), also known as organic electroluminescent (organic EL) diode, is a type of light-emitting diode (LED) in which the emissive electroluminescent layer is an organic compound film that emits light in response to an electric current. This organic layer is situated between two electrodes; typically, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, and portable systems such as smartphones and handheld game consoles. A major area of research is the development of white OLED devices for use in solid-state lighting applications.
There are two main families of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell (LEC) which has a slightly different mode of operation. An OLED display can be driven with a passive-matrix (PMOLED) or active-matrix (AMOLED) control scheme. In the PMOLED scheme, each row and line in the display is controlled sequentially, one by one, whereas AMOLED control uses a thin-film transistor (TFT) backplane to directly access and switch each individual pixel on or off, allowing for higher resolution and larger display sizes. OLEDs are fundamentally different from LEDs, which are based on a p-n diode crystalline solid structure. In LEDs, doping is used to create p- and n-regions by changing the conductivity of the host semiconductor. OLEDs do not employ a crystalline p-n structure. Doping of OLEDs is used to increase radiative efficiency by direct modification of the quantum-mechanical optical recombination rate. Doping is additionally used to determine the wavelength of photon emission.
OLED displays are made in a similar way to LCDs, including manufacturing of several displays on a mother substrate that is later thinned and cut into several displays. Substrates for OLED displays come in the same sizes as those used for manufacturing LCDs. For OLED manufacture, after the formation of TFTs (for active matrix displays), addressable grids (for passive matrix displays), or indium tin oxide (ITO) segments (for segment displays), the display is coated with hole injection, transport and blocking layers, as well with electroluminescent material after the first two layers, after which ITO or metal may be applied again as a cathode. Later, the entire stack of materials is encapsulated. The TFT layer, addressable grid, or ITO segments serve as or are connected to the anode, which may be made of ITO or metal. OLEDs can be made flexible and transparent, with transparent displays being used in smartphones with optical fingerprint scanners and flexible displays being used in foldable smartphones.
History
André Bernanose and co-workers at the Nancy-Université in France made the first observations of electroluminescence in organic materials in the early 1950s. They applied high alternating voltages in air to materials such as acridine orange dye, either deposited on or dissolved in cellulose or cellophane thin films. The proposed mechanism was either direct excitation of the dye molecules or excitation of electrons.
In 1960, Martin Pope and some of his co-workers at New York University in the United States developed ohmic dark-injecting electrode contacts to organic crystals. They further described the necessary energetic requirements (work functions) for hole and electron injecting electrode contacts. These contacts are the basis of charge injection in all modern OLED devices. Pope's group also first observed direct current (DC) electroluminescence under vacuum on a single pure crystal of anthracene and on anthracene crystals doped with tetracene in 1963 using a small area silver electrode at 400 volts. The proposed mechanism was field-accelerated electron excitation of molecular fluorescence.
Pope's group reported in 1965 that in the absence of an external electric field, the electroluminescence in anthracene crystals is caused by the recombination of a thermalized electron and hole, and that the conducting level of anthracene is higher in energy than the exciton energy level. Also in 1965, Wolfgang Helfrich and W. G. Schneider of the National Research Council in Canada produced double-injection recombination electroluminescence for the first time in an anthracene single crystal using hole- and electron-injecting electrodes, the forerunner of modern double-injection devices. In the same year, Dow Chemical researchers patented a method of preparing electroluminescent cells using high-voltage (500–1500 V), AC-driven (100–3000 Hz), electrically insulated, one-millimetre-thick layers of a melted phosphor consisting of ground anthracene powder, tetracene, and graphite powder. Their proposed mechanism involved electronic excitation at the contacts between the graphite particles and the anthracene molecules.
The first polymer LED (PLED) was created by Roger Partridge at the National Physical Laboratory in the United Kingdom. It used a film of polyvinylcarbazole up to 2.2 micrometres thick located between two charge-injecting electrodes. The light generated was readily visible in normal lighting conditions, though the polymer used had two limitations: low conductivity and the difficulty of injecting electrons. Later development of conjugated polymers would allow others to largely eliminate these problems. His contribution has often been overlooked due to the secrecy NPL imposed on the project. When it was patented in 1974, it was given a deliberately obscure "catch-all" name, while the government's Department for Industry tried and failed to find industrial collaborators to fund further development.
Practical OLEDs
Chemists Ching Wan Tang and Steven Van Slyke at Eastman Kodak built the first practical OLED device in 1987. This device used a two-layer structure with separate hole transporting and electron transporting layers such that recombination and light emission occurred in the middle of the organic layer; this resulted in a reduction in operating voltage and improvements in efficiency.
Research into polymer electroluminescence culminated in 1990 with J. H. Burroughes at the Cavendish Laboratory at Cambridge University, UK, reporting a high-efficiency green light-emitting polymer-based device using 100 nm thick films of poly(p-phenylene vinylene). Moving from molecular to macromolecular materials solved the problems previously encountered with the long-term stability of the organic films and enabled high-quality films to be made easily. Subsequent research developed multilayer polymers and the new field of plastic electronics, and OLED research and device production grew rapidly. White OLEDs, pioneered by J. Kido et al. at Yamagata University, Japan, in 1995, enabled the commercialization of OLED-backlit displays and lighting.
In 1999, Kodak and Sanyo entered into a partnership to jointly research, develop, and produce OLED displays. They announced the world's first 2.4-inch active-matrix, full-color OLED display in September of the same year. In September 2002, they presented a prototype 15-inch HDTV-format display based on white OLEDs with color filters at CEATEC Japan.
Manufacturing of small-molecule OLEDs was started in 1997 by Pioneer Corporation, followed by TDK in 2001 and Samsung-NEC Mobile Display (SNMD) in 2002; the latter subsequently became one of the world's largest OLED display manufacturers, Samsung Display.
The Sony XEL-1, released in 2007, was the first OLED television. Universal Display Corporation, one of the OLED materials companies, holds a number of patents concerning the commercialization of OLEDs that are used by major OLED manufacturers around the world.
On 5 December 2017, JOLED, the successor of Sony and Panasonic's printable OLED business units, began the world's first commercial shipment of inkjet-printed OLED panels.
Working principle
A typical OLED is composed of a layer of organic materials situated between two electrodes, the anode and cathode, all deposited on a substrate. The organic molecules are electrically conductive as a result of delocalization of pi electrons caused by conjugation over part or all of the molecule. These materials have conductivity levels ranging from insulators to conductors, and are therefore considered organic semiconductors. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of organic semiconductors are analogous to the valence and conduction bands of inorganic semiconductors.
Originally, the most basic polymer OLEDs consisted of a single organic layer. One example was the first light-emitting device synthesised by J. H. Burroughes et al., which involved a single layer of poly(p-phenylene vinylene). However, multilayer OLEDs can be fabricated with two or more layers in order to improve device efficiency. As well as conductive properties, different materials may be chosen to aid charge injection at electrodes by providing a more gradual electronic profile, or to block a charge from reaching the opposite electrode and being wasted. Many modern OLEDs incorporate a simple bilayer structure, consisting of a conductive layer and an emissive layer. Developments in OLED architecture in 2011 improved quantum efficiency (up to 19%) by using a graded heterojunction. In the graded heterojunction architecture, the composition of hole- and electron-transport materials varies continuously within the emissive layer with a dopant emitter. The graded heterojunction architecture combines the benefits of both conventional architectures by improving charge injection while simultaneously balancing charge transport within the emissive region.
During operation, a voltage is applied across the OLED such that the anode is positive with respect to the cathode. Anodes are picked based upon the quality of their optical transparency, electrical conductivity, and chemical stability. A current of electrons flows through the device from cathode to anode, as electrons are injected into the LUMO of the organic layer at the cathode and withdrawn from the HOMO at the anode. This latter process may also be described as the injection of electron holes into the HOMO. Electrostatic forces bring the electrons and the holes towards each other and they recombine forming an exciton, a bound state of the electron and hole. This happens closer to the electron-transport layer part of the emissive layer, because in organic semiconductors holes are generally more mobile than electrons. The decay of this excited state results in a relaxation of the energy levels of the electron, accompanied by emission of radiation whose frequency is in the visible region. The frequency of this radiation depends on the band gap of the material, in this case the difference in energy between the HOMO and LUMO.
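The relation between the HOMO-LUMO gap and the emission frequency described above can be made concrete with the Planck relation E = hc/λ. This is a simplified sketch: the 2.34 eV example gap is a hypothetical value, and real emission spectra are broadened and shifted by vibronic and solid-state effects.

```python
# Convert an effective HOMO-LUMO energy gap (eV) to the corresponding
# photon wavelength via E = h*c / lambda. A larger gap means a shorter
# (bluer) emission wavelength; a smaller gap means a redder one.

H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def emission_wavelength_nm(gap_ev: float) -> float:
    """Peak photon wavelength (nm) for a given energy gap (eV)."""
    return H_C_EV_NM / gap_ev

# A hypothetical emitter with a 2.34 eV effective gap:
print(round(emission_wavelength_nm(2.34)))  # ~530 nm, i.e. green light
```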
As electrons and holes are fermions with half integer spin, an exciton may either be in a singlet state or a triplet state depending on how the spins of the electron and hole have been combined. Statistically three triplet excitons will be formed for each singlet exciton. Decay from triplet states (phosphorescence) is spin forbidden, increasing the timescale of the transition and limiting the internal efficiency of fluorescent OLED emissive layers and devices. Phosphorescent organic light-emitting diodes (PHOLEDs) or emissive layers make use of spin–orbit interactions to facilitate intersystem crossing between singlet and triplet states, thus obtaining emission from both singlet and triplet states and improving the internal efficiency.
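The spin-statistics argument above sets a simple upper bound on internal quantum efficiency (IQE), sketched here under the idealized assumption that every harvested exciton decays radiatively:

```python
# Singlet and triplet excitons form in a statistical 1:3 ratio.
# A purely fluorescent emitter harvests only singlets (max IQE 25%),
# while a phosphorescent emitter harvesting both singlets and
# triplets can approach 100%. Idealized, loss-free accounting.

SINGLET_FRACTION = 0.25
TRIPLET_FRACTION = 0.75

def max_iqe(harvest_singlets: bool, harvest_triplets: bool) -> float:
    """Upper bound on internal quantum efficiency from spin statistics."""
    iqe = 0.0
    if harvest_singlets:
        iqe += SINGLET_FRACTION
    if harvest_triplets:
        iqe += TRIPLET_FRACTION
    return iqe

print(max_iqe(True, False))  # fluorescent OLED -> 0.25
print(max_iqe(True, True))   # phosphorescent OLED -> 1.0
```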
Indium tin oxide (ITO) is commonly used as the anode material. It is transparent to visible light and has a high work function which promotes injection of holes into the HOMO level of the organic layer. A second conductive (injection) layer is typically added, which may consist of PEDOT:PSS, as the HOMO level of this material generally lies between the work function of ITO and the HOMO of other commonly used polymers, reducing the energy barriers for hole injection. Metals such as barium and calcium are often used for the cathode as they have low work functions which promote injection of electrons into the LUMO of the organic layer. Such metals are reactive, so they require a capping layer of aluminium to avoid degradation. Two secondary benefits of the aluminum capping layer include robustness to electrical contacts and the back reflection of emitted light out to the transparent ITO layer.
Experimental research has proven that the properties of the anode, specifically the topography of the anode/hole transport layer (HTL) interface, play a major role in the efficiency, performance, and lifetime of organic light-emitting diodes. Imperfections in the surface of the anode decrease anode-organic film interface adhesion, increase electrical resistance, and allow for more frequent formation of non-emissive dark spots in the OLED material, adversely affecting lifetime. Mechanisms to decrease anode roughness for ITO/glass substrates include the use of thin films and self-assembled monolayers. Also, alternative substrates and anode materials are being considered to increase OLED performance and lifetime. Possible examples include single-crystal sapphire substrates treated with gold (Au) film anodes, yielding lower work functions, operating voltages, and electrical resistance values, and increasing the lifetime of OLEDs.
Single carrier devices are typically used to study the kinetics and charge transport mechanisms of an organic material and can be useful when trying to study energy transfer processes. As current through the device is composed of only one type of charge carrier, either electrons or holes, recombination does not occur and no light is emitted. For example, electron only devices can be obtained by replacing ITO with a lower work function metal which increases the energy barrier of hole injection. Similarly, hole only devices can be made by using a cathode made solely of aluminium, resulting in an energy barrier too large for efficient electron injection.
Carrier balance
Balanced charge injection and transport are required to achieve high internal efficiency, pure emission from the luminescent layer without contaminated emission from the charge-transporting layers, and high stability. A common way to balance charge is to optimize the thickness of the charge-transporting layers, but this is hard to control. Another way is to use an exciplex. An exciplex formed between hole-transporting (p-type) and electron-transporting (n-type) side chains localizes electron-hole pairs; energy is then transferred to the luminophore, providing high efficiency. For example, grafting oxadiazole and carbazole side units onto the main chain of a red diketopyrrolopyrrole-doped copolymer improved external quantum efficiency and color purity in an unoptimized OLED.
Material technologies
Small molecules
Organic small-molecule electroluminescent materials offer the advantages of wide variety, ease of purification, and ready chemical modification. To make a luminescent material emit light as required, chromophores or unsaturated groups such as alkene bonds and benzene rings are usually introduced into the molecular design to change the extent of the material's conjugated system, thereby changing its photophysical properties. In general, the larger the π-electron conjugation system, the longer the wavelength of light emitted by the material. For instance, as the number of fused benzene rings increases, the fluorescence emission peak of benzene, naphthalene, anthracene, and tetracene gradually red-shifts from 283 nm to 480 nm. Common organic small-molecule electroluminescent materials include aluminum complexes, anthracenes, biphenyl acetylene aryl derivatives, coumarin derivatives, and various fluorochromes. Efficient OLEDs using small molecules were first developed by Ching W. Tang et al. at Eastman Kodak. The term OLED traditionally refers specifically to this type of device, though the term SM-OLED is also in use.
Molecules commonly used in OLEDs include organometallic chelates (for example Alq3, used in the organic light-emitting device reported by Tang et al.), fluorescent and phosphorescent dyes and conjugated dendrimers. A number of materials are used for their charge transport properties, for example triphenylamine and derivatives are commonly used as materials for hole transport layers. Fluorescent dyes can be chosen to obtain light emission at different wavelengths, and compounds such as perylene, rubrene and quinacridone derivatives are often used. Alq3 has been used as a green light emitter, electron transport material and as a host for yellow light and red light emitting dyes.
Because of the structural flexibility of small-molecule electroluminescent materials, thin films can be prepared by vacuum vapor deposition, which is expensive and of limited use for large-area devices. The vacuum coating system, however, allows the entire process from film growth to OLED device preparation to take place in a controlled and complete operating environment, helping to obtain uniform and stable films and thus ensuring the fabrication of high-performance OLED devices. However, small-molecule organic dyes are prone to fluorescence quenching in the solid state, resulting in lower luminescence efficiency. Doped OLED devices are also prone to crystallization, which reduces the luminescence and efficiency of the devices. Therefore, the development of devices based on small-molecule electroluminescent materials is limited by high manufacturing costs, poor stability, short life, and other shortcomings. Coherent emission from a laser dye-doped tandem SM-OLED device, excited in the pulsed regime, has been demonstrated. The emission is nearly diffraction-limited with a spectral width similar to that of broadband dye lasers.
Researchers have reported luminescence from a single polymer molecule, representing the smallest possible organic light-emitting diode (OLED) device. This may allow scientists to optimize substances to produce more powerful light emissions, and it is a first step towards making molecule-sized components that combine electronic and optical properties. Similar components could form the basis of a molecular computer.
Polymer light-emitting diodes
Polymer light-emitting diodes (PLED, P-OLED), also light-emitting polymers (LEP), involve an electroluminescent conductive polymer that emits light when connected to an external voltage. They are used as a thin film for full-spectrum colour displays. Polymer OLEDs are quite efficient and require a relatively small amount of power for the amount of light produced.
Vacuum deposition is not a suitable method for forming thin films of polymers: if polymeric OLED films are made by vacuum vapor deposition, the polymer chains are broken and the original photophysical properties are compromised. However, polymers can be processed in solution, and spin coating is a common method of depositing thin polymer films. This method is better suited to forming large-area films than thermal evaporation. No vacuum is required, and the emissive materials can also be applied to the substrate by a technique derived from commercial inkjet printing. However, as the application of subsequent layers tends to dissolve those already present, the formation of multilayer structures is difficult with these methods. The metal cathode may still need to be deposited by thermal evaporation in vacuum. An alternative to vacuum deposition is to deposit a Langmuir-Blodgett film.
Typical polymers used in PLED displays include derivatives of poly(p-phenylene vinylene) and polyfluorene. Substitution of side chains onto the polymer backbone may determine the colour of emitted light or the stability and solubility of the polymer for performance and ease of processing.
While unsubstituted poly(p-phenylene vinylene) (PPV) is typically insoluble, a number of PPVs and related poly(naphthalene vinylene)s (PNVs) that are soluble in organic solvents or water have been prepared via ring-opening metathesis polymerization. These water-soluble polymers, or conjugated polyelectrolytes (CPEs), can also be used as hole injection layers, alone or in combination with nanoparticles like graphene.
Phosphorescent materials
Phosphorescent organic light-emitting diodes use the principle of electrophosphorescence to convert electrical energy in an OLED into light in a highly efficient manner, with the internal quantum efficiencies of such devices approaching 100%. PHOLEDs can be deposited using vacuum deposition through a shadow mask.
Typically, a polymer such as poly(N-vinylcarbazole) is used as a host material to which an organometallic complex is added as a dopant. Iridium complexes such as Ir(mppy)3 as of 2004 were a focus of research, although complexes based on other heavy metals such as platinum have also been used.
The heavy metal atom at the centre of these complexes exhibits strong spin-orbit coupling, facilitating intersystem crossing between singlet and triplet states. By using these phosphorescent materials, both singlet and triplet excitons will be able to decay radiatively, hence improving the internal quantum efficiency of the device compared to a standard OLED where only the singlet states will contribute to emission of light.
Applications of OLEDs in solid-state lighting require the achievement of high brightness with good CIE coordinates (for white emission). The use of macromolecular species like polyhedral oligomeric silsesquioxanes (POSS) in conjunction with phosphorescent species such as Ir complexes in printed OLEDs has yielded brightnesses as high as 10,000 cd/m2.
Device architectures
Structure
Bottom emission
The bottom-emission organic light-emitting diode (BE-OLED) is the architecture that was used in early-stage AMOLED displays. It had a transparent anode fabricated on a glass substrate and a shiny reflective cathode. Light is emitted from the transparent anode side. To reflect all the light towards the anode, a relatively thick metal cathode such as aluminum is used. For the anode, high-transparency indium tin oxide (ITO) was a typical choice to emit as much light as possible. Organic thin films, including the emissive layer that actually generates the light, are then sandwiched between the ITO anode and the reflective metal cathode. The downside of the bottom-emission structure is that light has to travel through the pixel drive circuitry on the thin-film transistor (TFT) substrate, which limits the area from which light can be extracted and reduces the light-emission efficiency.
Top emission
An alternative configuration is to switch the mode of emission: a reflective anode and a transparent (or, more often, semi-transparent) cathode are used so that light emits from the cathode side; this configuration is called a top-emission OLED (TE-OLED). Unlike BEOLEDs, where the anode is made of transparent conductive ITO, here the cathode needs to be transparent, and ITO is not an ideal choice for the cathode because the sputtering process used to deposit it damages the underlying layers. Thus, a thin metal film such as pure Ag or a Mg:Ag alloy is used for the semi-transparent cathode, owing to its high transmittance and high conductivity. In contrast to bottom emission, light in a top-emission device is extracted from the opposite side without needing to pass through multiple drive-circuit layers, so the light generated can be extracted more efficiently.
Improvements
Deuterium
Using deuterium instead of hydrogen (in other words, deuterated compounds) in the red, green, blue, and white light-emitting material layers of OLED displays, and in other nearby layers, can improve their brightness by up to 30%. This is achieved by improving the current-handling capacity and lifespan of these materials.
Micro Lens Array (MLA)
Making lens-shaped indentations on a transparent layer through which light passes from the OLED light-emitting material reduces the amount of light scattered within the display and directs it forward, improving brightness.
Micro-cavity theory
When light waves meet while traveling along the same medium, wave interference occurs. This interference can be constructive or destructive. It is sometimes desirable for several waves of the same frequency to sum up into a wave with higher amplitudes.
Since both electrodes are reflective in a TEOLED, light reflections can happen within the diode, causing more complex interference than in BEOLEDs. In addition to two-beam interference, there exists a multi-resonance interference between the two electrodes. Because the structure of a TEOLED is similar to that of a Fabry-Perot resonator or laser resonator (which contains two parallel mirrors, comparable to the two reflective electrodes), this effect is especially strong in TEOLEDs. This two-beam interference and the Fabry-Perot interference are the main factors determining the output spectral intensity of the OLED. This optical effect is called the "micro-cavity effect."
For OLEDs, this means the cavity in a TEOLED can be specially designed to enhance the light-output intensity and color purity within a narrow band of wavelengths, without consuming more power. The microcavity effect commonly occurs in TEOLEDs, and deciding when and how to restrain or exploit this effect is indispensable for device design. To match the conditions of constructive interference, different layer thicknesses are applied according to the resonance wavelength of each color. The thickness conditions are carefully designed and engineered according to the peak resonance emitting wavelengths of the blue (460 nm), green (530 nm), and red (610 nm) sub-pixels. This technology greatly improves the light-emission efficiency of OLEDs and enables a wider color gamut due to high color purity.
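The per-color thickness tuning described above can be sketched with the simplified Fabry-Perot resonance condition 2nL = mλ. This is an idealization: mirror phase shifts are neglected, and the refractive index n = 1.8 is an assumed, typical value for organic stacks rather than a figure from the source.

```python
# First-order (m = 1) resonant cavity thickness for each target
# emission peak, from the simplified condition 2*n*L = m*lambda.
# n = 1.8 is an assumed typical refractive index for organic layers.

def cavity_thickness_nm(wavelength_nm: float, n: float = 1.8, m: int = 1) -> float:
    """Cavity thickness L (nm) resonant at the given wavelength."""
    return m * wavelength_nm / (2 * n)

for color, wl in [("blue", 460), ("green", 530), ("red", 610)]:
    print(f"{color}: {cavity_thickness_nm(wl):.1f} nm")
```

Longer resonance wavelengths require thicker cavities, which is why the red, green, and blue sub-pixel stacks are engineered to different thicknesses.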
Color filters
In the "white + color filter" method, also known as WOLED, red, green, and blue emissions are obtained from the same white-light OLEDs using different color filters. With this method, the OLED materials produce white light, which is then filtered to obtain the desired RGB colors. This method eliminates the need to deposit three different organic emissive materials side by side, so only one kind of OLED material per layer is used to produce white light. It also eliminates the uneven degradation rate of blue pixels vs. red and green pixels. Disadvantages of this method are low color purity and contrast. Also, the filters absorb most of the emitted light, requiring the background white light to be relatively strong to compensate for the drop in brightness, so the power consumption of such displays can be higher.
Color filters can also be implemented into bottom- and top-emission OLEDs. By adding the corresponding RGB color filters after the semi-transparent cathode, even purer wavelengths of light can be obtained. The use of a microcavity in top-emission OLEDs with color filters also contributes to an increase in the contrast ratio by reducing the reflection of incident ambient light. In a conventional panel, a circular polarizer was installed on the panel surface. While this was provided to prevent the reflection of ambient light, it also reduced the light output. By replacing this polarizing layer with color filters, the light intensity is not affected, and essentially all ambient reflected light can be cut, allowing a better contrast on the display panel. This potentially reduced the need for brighter pixels and can lower the power consumption.
Other architectures
Transparent OLEDs
Transparent OLEDs use transparent or semi-transparent contacts on both sides of the device to create displays that can be made to be both top and bottom emitting (transparent). TOLEDs can greatly improve contrast, making it much easier to view displays in bright sunlight. This technology can be used in Head-up displays, smart windows or augmented reality applications.
Graded heterojunction
Graded heterojunction OLEDs gradually decrease the ratio of electron holes to electron transporting chemicals. This results in almost double the quantum efficiency of existing OLEDs.
Stacked OLEDs
Stacked OLEDs use a pixel architecture that stacks the red, green, and blue subpixels on top of one another instead of next to one another, leading to substantial increase in gamut and color depth, and greatly reducing pixel gap. Other display technologies with RGB (and RGBW) pixels mapped next to each other, tend to decrease potential resolution.
Tandem OLEDs are similar but have two layers of the same color stacked together, which improves the brightness of OLED displays.
Inverted OLED
In contrast to a conventional OLED, in which the anode is placed on the substrate, an inverted OLED uses a bottom cathode that can be connected to the drain end of an n-channel TFT, especially for the low-cost amorphous silicon TFT backplane useful in the manufacturing of AMOLED displays.
All OLED displays (passive and active matrix) use a driver IC, often mounted using the chip-on-glass (COG) technology with an anisotropic conductive film.
Color patterning technologies
Shadow mask patterning method
The most commonly used patterning method for organic light-emitting displays is shadow masking during film deposition, also called the "RGB side-by-side" or "RGB pixelation" method. Metal sheets with multiple apertures, made of a low-thermal-expansion material such as nickel alloy, are placed between the heated evaporation source and the substrate, so that material from the evaporation source is blocked by the sheet from reaching the substrate in most locations; the materials are deposited only on the desired locations on the substrate, and the rest remains on the sheet. Almost all small OLED displays for smartphones have been manufactured using this method.
Fine metal masks (FMMs), made by photochemical machining and reminiscent of old CRT shadow masks, are used in this process. The dot density of the mask determines the pixel density of the finished display. Fine hybrid masks (FHMs) are lighter than FMMs, reducing bending caused by the mask's own weight, and are made using an electroforming process.
This method requires heating the electroluminescent materials to 300 °C using a thermal method in a high vacuum of 10 Pa. An oxygen meter ensures that no oxygen enters the chamber, as it could damage (through oxidation) the electroluminescent material, which is in powder form. The mask is aligned with the mother substrate before every use and is placed just below the substrate. The substrate and mask assembly are placed at the top of the deposition chamber. Afterwards, the electrode layer is deposited by subjecting silver and aluminum powder to 1000 °C, using an electron beam. Shadow masks allow for high pixel densities of up to . High pixel densities are necessary for virtual reality headsets.
White + color filter method (WOLED)
Although the shadow-mask patterning method is a mature technology used since the first OLED manufacturing, it causes many issues, such as dark-spot formation due to mask-substrate contact or misalignment of the pattern due to deformation of the shadow mask. Such defect formation can be regarded as trivial when the display size is small, but it causes serious issues when a large display is manufactured, bringing significant production-yield loss. To circumvent such issues, white-emission devices with four sub-pixel color filters (white, red, green, and blue) have been used for large televisions. In spite of the light absorption by the color filter, state-of-the-art OLED televisions can reproduce color very well, such as 100% NTSC, while consuming little power. This is done by using an emission spectrum with high human-eye sensitivity, special color filters with low spectral overlap, and performance tuning that takes color statistics into consideration.
This approach is also called the "Color-by-white" method.
Other color patterning approaches
There are other types of emerging patterning technologies to increase the manufacturability of OLEDs.
Patternable organic light-emitting devices use a light or heat activated electroactive layer. A latent material (PEDOT-TMA) is included in this layer that, upon activation, becomes highly efficient as a hole injection layer. Using this process, light-emitting devices with arbitrary patterns can be prepared.
Colour patterning can be accomplished by means of a laser, such as a radiation-induced sublimation transfer (RIST).
Organic vapour jet printing (OVJP) uses an inert carrier gas, such as argon or nitrogen, to transport evaporated organic molecules (as in organic vapour phase deposition). The gas is expelled through a micrometre-sized nozzle or nozzle array close to the substrate as it is being translated. This allows printing arbitrary multilayer patterns without the use of solvents.
Like ink jet material deposition, inkjet etching (IJE) deposits precise amounts of solvent onto a substrate designed to selectively dissolve the substrate material and induce a structure or pattern. Inkjet etching of polymer layers in OLEDs can be used to increase the overall out-coupling efficiency. In OLEDs, light produced from the emissive layers of the OLED is partially transmitted out of the device and partially trapped inside the device by total internal reflection (TIR). This trapped light is wave-guided along the interior of the device until it reaches an edge where it is dissipated by either absorption or emission. Inkjet etching can be used to selectively alter the polymeric layers of OLED structures to decrease overall TIR and increase out-coupling efficiency of the OLED. Compared to a non-etched polymer layer, the structured polymer layer in the OLED structure from the IJE process helps to decrease the TIR of the OLED device. IJE solvents are commonly organic instead of water-based due to their non-acidic nature and ability to effectively dissolve materials at temperatures under the boiling point of water.
Transfer-printing is an emerging technology for efficiently assembling large numbers of parallel OLED and AMOLED devices. It takes advantage of standard metal deposition, photolithography, and etching to create alignment marks, commonly on glass or other device substrates. Thin polymer adhesive layers are applied to enhance resistance to particles and surface defects. Microscale ICs are transfer-printed onto the adhesive surface and then baked to fully cure the adhesive layers. An additional photosensitive polymer layer is applied to the substrate to account for the topography caused by the printed ICs, restoring a flat surface. Photolithography and etching remove some polymer layers to uncover conductive pads on the ICs. Afterwards, the anode layer is applied to the device backplane to form the bottom electrode. OLED layers are applied to the anode layer with conventional vapor deposition and covered with a conductive metal electrode layer. Transfer-printing has been demonstrated on target substrates up to 500 mm × 400 mm. This size limit needs to expand for transfer-printing to become a common process for the fabrication of large OLED/AMOLED displays.
Experimental OLED displays using conventional photolithography techniques instead of FMMs have been demonstrated, allowing for large substrate sizes (as it eliminates the need for a mask that needs to be as large as the substrate) and good yield control. Visionox has announced the use of photolithography for depositing OLED emissive materials.
Thin-film transistor backplanes
For a high-resolution display like a TV, a thin-film transistor (TFT) backplane is necessary to drive the pixels correctly. As of 2019, low-temperature polycrystalline silicon (LTPS) TFTs are widely used for commercial AMOLED displays due to their superior current-handling capacity over amorphous silicon (a-Si) TFTs. LTPS TFTs show performance variation across a display, so various compensation circuits have been reported. Due to the size limitation of the excimer laser used for LTPS, AMOLED panel size was limited. To cope with this hurdle, amorphous-silicon/microcrystalline-silicon backplanes have been reported, with large display prototype demonstrations. An indium gallium zinc oxide (IGZO) backplane can also be used. Large OLED displays usually use amorphous oxide semiconductor (AOS) TFTs instead, also called oxide TFTs; these are usually based on IGZO.
Many AMOLED displays use LTPO TFT transistors. These transistors are stable at low and variable refresh rates, allowing for power-saving displays that do not show visual artifacts.
Advantages
OLEDs' different manufacturing process offers several advantages over flat-panel displays made with LCD technology.
Lower cost in the future: OLEDs can be printed onto any suitable substrate by an inkjet printer or even by screen printing, theoretically making them cheaper to produce than LCD or plasma displays. However, fabrication of the OLED substrate as of 2018 is costlier than that for TFT LCDs. Roll-to-roll vapor-deposition methods for organic devices do allow mass production of thousands of devices per minute for minimal cost; however, this technique also induces problems: devices with multiple layers can be challenging to make because of registration, that is, lining up the different printed layers to the required degree of accuracy.
Lightweight and flexible plastic substrates: OLED displays can be fabricated on flexible plastic substrates, enabling flexible organic light-emitting diodes for new applications such as roll-up displays embedded in fabrics or clothing. If a substrate like polyethylene terephthalate (PET) can be used, the displays may be produced inexpensively. Furthermore, plastic substrates are shatter-resistant, unlike the glass displays used in LCD devices. Flexible OLED displays are made on polyimide plastic films which are bonded to glass panels during production. Once the OLED display is encapsulated, a laser is used to separate the plastic from the glass in a laser lift-off (LLO) process.
Power efficiency: LCDs filter the light emitted from a backlight, allowing only a small fraction of light through, so they cannot show true black. An inactive OLED element, by contrast, produces no light and consumes no power, allowing true blacks. Removing the backlight also makes OLEDs lighter because some substrates are not needed.
Response time: OLEDs also have a much faster response time than LCDs. Using response-time compensation technologies, the fastest modern LCDs can reach response times as low as 1 ms for their fastest color transition and refresh frequencies as high as 240 Hz. According to LG, OLED response times are up to 1,000 times faster than LCD, putting conservative estimates at under 10 μs (0.01 ms), which could theoretically accommodate refresh frequencies approaching 100 kHz (100,000 Hz). Due to their extremely fast response time, OLED displays can also easily be strobed, creating an effect similar to CRT flicker in order to avoid the sample-and-hold behavior seen on both LCDs and some OLED displays, which creates the perception of motion blur.
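The response-time and refresh-frequency figures above are related by a simple reciprocal: a panel cannot usefully refresh faster than its pixels can transition. A rough back-of-envelope sketch using the values quoted in the text (illustrative only, not a panel specification):

```python
def max_refresh_hz(response_time_s: float) -> float:
    """Upper bound on usable refresh frequency for a given pixel response time."""
    return 1.0 / response_time_s

# Figures from the text: a 1 ms LCD transition vs. a ~10 us OLED estimate.
lcd_ceiling = max_refresh_hz(1e-3)    # ~1,000 Hz
oled_ceiling = max_refresh_hz(10e-6)  # ~100,000 Hz (100 kHz)
print(f"LCD: ~{lcd_ceiling:,.0f} Hz, OLED: ~{oled_ceiling:,.0f} Hz")
```

Real panels sit well below these ceilings (driver electronics, pixel addressing and bandwidth dominate), which is why the text calls the 100 kHz figure theoretical.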
High dynamic range support: Because OLEDs can turn off individual pixels, showing true black, the contrast ratio of an OLED display can be very large, which allows for representation of high dynamic range (HDR) images and video at high quality. Data must be encoded in an HDR format to display in HDR, and HDR format support varies by OLED display. Maximum (peak) brightness also varies by OLED display, which affects the dynamic range that can be represented.
Disadvantages
Lifespan
The biggest technical problem for OLEDs is the limited lifetime of the organic materials. One 2008 technical report on an OLED TV panel found that after 1,000 hours, the blue luminance degraded by 12%, the red by 7% and the green by 8%. In particular, blue OLEDs at that time had a lifetime of around 14,000 hours to half original brightness (five years at eight hours per day) when used for flat-panel displays. This is lower than the typical lifetime of LCD, LED or PDP technology, each rated for about 25,000–40,000 hours to half brightness, depending on manufacturer and model. One major challenge for OLED displays is the formation of dark spots due to the ingress of oxygen and moisture, which degrades the organic material over time whether or not the display is powered. In 2016, LG Electronics reported an expected lifetime of 100,000 hours, up from 36,000 hours in 2013. A US Department of Energy paper shows that the expected lifespans of OLED lighting products go down with increasing brightness, with an expected lifespan of 40,000 hours at 25% brightness, or 10,000 hours at 100% brightness. Compared to LCDs, OLEDs may be more susceptible to screen burn-in and/or brightness degradation.
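The hours-to-years conversions quoted above follow from simple arithmetic once a daily usage figure is assumed. A minimal sketch, using the eight-hours-per-day assumption stated in the text:

```python
def lifetime_years(rated_hours: float, hours_per_day: float = 8.0) -> float:
    """Convert a rated panel lifetime in hours to calendar years of use."""
    return rated_hours / (hours_per_day * 365)

print(round(lifetime_years(14_000), 1))   # blue OLED circa 2008: ~4.8 years
print(round(lifetime_years(100_000), 1))  # LG's 2016 figure: ~34.2 years
```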
Degradation
Degradation occurs because of the accumulation of nonradiative recombination centers and luminescence quenchers in the emissive zone. It is said that the chemical breakdown in the semiconductors occurs in four steps:
recombination of charge carriers through the absorption of UV light
homolytic dissociation
subsequent radical addition reactions that form radicals
disproportionation between two radicals resulting in hydrogen-atom transfer reactions
In 2007, experimental OLEDs were created which could sustain 400 cd/m2 of luminance for over 198,000 hours for green OLEDs and 62,000 hours for blue OLEDs. In 2012, OLED lifetime to half of the initial brightness was improved to 900,000 hours for red, 1,450,000 hours for yellow and 400,000 hours for green, at an initial luminance of 1,000 cd/m2. Proper encapsulation is critical for prolonging an OLED display's lifetime, as the OLED's light-emitting electroluminescent materials are sensitive to oxygen and moisture. When exposed to moisture or oxygen, the electroluminescent materials in OLEDs degrade as they oxidize, generating black spots and shrinking the area that emits light, reducing light output. This reduction can occur on a pixel-by-pixel basis. This can also lead to delamination of the electrode layer, eventually leading to complete panel failure.
Degradation occurs three orders of magnitude faster when exposed to moisture than when exposed to oxygen. Encapsulation can be performed by applying an epoxy adhesive with desiccant, by laminating a glass sheet with epoxy glue and desiccant followed by vacuum degassing, or by using thin-film encapsulation (TFE), a multi-layer coating of alternating organic and inorganic layers. The organic layers are applied using inkjet printing, and the inorganic layers are applied using atomic layer deposition (ALD). The encapsulation process is carried out in a nitrogen environment using UV-curable LOCA glue, and the electroluminescent and electrode material deposition processes are carried out under high vacuum. The encapsulation and material deposition processes are carried out by a single machine, after the thin-film transistors have been applied. The transistors are applied in a process that is the same as for LCDs. The electroluminescent materials can also be applied using inkjet printing.
Color balance
The OLED material used to produce blue light degrades much more rapidly than the materials used to produce other colors; in other words, blue light output will decrease relative to the other colors of light. This variation in differential color output will change the color balance of the display and is much more noticeable than a uniform decrease in overall luminance. It can be avoided partially by adjusting the color balance, but this may require advanced control circuits and input from a knowledgeable user. More commonly, though, manufacturers optimize the size of the R, G and B subpixels to reduce the current density through each subpixel in order to equalize lifetime at full luminance. For example, a blue subpixel may be 75% larger than the green subpixel, and the red subpixel 10% larger than the green.
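The subpixel sizing described above works because, for a given drive current I, the current density J = I/A falls as the emitting area A grows, and lower current density slows degradation. A sketch with the relative sizes quoted in the text (units are arbitrary and illustrative):

```python
# Relative subpixel areas from the text (green normalized to 1.0):
# blue is 75% larger than green, red is 10% larger.
areas = {"green": 1.00, "blue": 1.75, "red": 1.10}

drive_current = 1.0  # same drive current through each subpixel, arbitrary units
for color, area in areas.items():
    j = drive_current / area  # current density relative to green
    print(f"{color}: relative current density {j:.2f}")
```

The enlarged blue subpixel thus runs at roughly 57% of the green subpixel's current density for the same light output budget, partially offsetting the faster degradation of the blue material.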
Efficiency of blue OLEDs
Improvements to the efficiency and lifetime of blue OLEDs are vital to the success of OLEDs as replacements for LCD technology. Considerable research has been invested in developing blue OLEDs with high external quantum efficiency, as well as a deeper blue color.
Since 2012, research has focused on organic materials exhibiting thermally activated delayed fluorescence (TADF), discovered at Kyushu University OPERA and UC Santa Barbara CPOS. TADF would allow stable, high-efficiency, solution-processable (meaning that the organic materials are deposited from solution, producing thinner layers) blue emitters, with internal quantum efficiencies reaching 100%. Early in 2017, TADF materials based on oxygen-bridged boron-type electron acceptors achieved a major breakthrough in their properties. The external quantum efficiency of TADF-OLEDs for blue and green light reached 38%, with a narrow full width at half maximum and high color purity. In 2022, Han et al. synthesized a new D-A type luminescent material, TDBA-Cz, and used the m-AC-DBNA synthesized by Meng et al. as a control to investigate how the substitution site of the carbazole unit, acting as an electron donor, on the oxygen-bridged triphenylboron electron acceptor unit affects the photophysical properties of the overall molecule. It was found that introducing two carbazole units into the same benzene ring of the oxygen-bridged triphenylboron electron acceptor unit could effectively suppress the conformational relaxation of the molecule during the radiative transition, resulting in narrow-bandwidth blue light emission. In addition, TDBA-Cz is the first reported blue material to achieve both a FWHM down to 45 nm and a maximum EQE of 21.4% in a non-doped TADF-OLED.
Blue TADF emitters were expected to reach the market by 2020, for use in WOLED displays with phosphorescent color filters as well as blue OLED displays with ink-printed QD color filters.
Water damage
Water can instantly damage the organic materials of the displays. Therefore, improved sealing processes are important for practical manufacturing. Water damage especially may limit the longevity of more flexible displays.
Outdoor performance
As an emissive display technology, OLEDs rely completely upon converting electricity to light, unlike most LCDs, which are to some extent reflective. E-paper leads the way in efficiency with ~33% ambient light reflectivity, enabling the display to be used without any internal light source. The metallic cathode in an OLED acts as a mirror, with reflectance approaching 80%, leading to poor readability in bright ambient light such as outdoors. However, with the proper application of a circular polarizer and antireflective coatings, the diffuse reflectance can be reduced to less than 0.1%. With 10,000 fc incident illumination (a typical test condition for simulating outdoor illumination), that yields an approximate photopic contrast of 5:1. Advances in OLED technologies, however, have enabled OLEDs to perform better than LCDs in bright sunlight. The AMOLED display in the Galaxy S5, for example, was found to outperform all LCD displays on the market in terms of power usage, brightness and reflectance.
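The 5:1 photopic contrast figure above can be reproduced with a back-of-envelope calculation. This sketch uses the foot-lambert convention (reflected luminance in fL = illuminance in fc × diffuse reflectance) and assumes a panel output of 40 fL; that emitted-luminance value is a hypothetical fill-in chosen to match the quoted result, not a figure from the source:

```python
def ambient_contrast(emitted_fl: float, incident_fc: float, reflectance: float) -> float:
    """Contrast of an emissive display under ambient light, in foot-lamberts."""
    reflected_fl = incident_fc * reflectance  # ambient light bounced back to viewer
    return (emitted_fl + reflected_fl) / reflected_fl

# 10,000 fc outdoor illumination, 0.1% diffuse reflectance, assumed 40 fL output:
print(round(ambient_contrast(40, 10_000, 0.001), 2))  # roughly 5:1
```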
Power consumption
While an OLED will consume around 40% of the power of an LCD displaying an image that is primarily black, for the majority of images it will consume 60–80% of the power of an LCD. However, an OLED can use more than 300% of the power of an LCD to display an image with a white background, such as a document or website. This can lead to reduced battery life in mobile devices when white backgrounds are used.
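Since display-limited battery life scales roughly inversely with power draw, the percentages above translate directly into relative battery impact. An illustrative sketch with LCD power normalized to 1.0 (the 0.70 midpoint for "typical image" is an assumption within the quoted 60–80% range):

```python
# OLED power draw relative to an LCD showing the same content (from the text).
scenarios = {
    "mostly black image": 0.40,  # ~40% of LCD power
    "typical image": 0.70,       # assumed midpoint of the 60-80% range
    "white background": 3.00,    # more than 300% of LCD power
}

for name, rel_power in scenarios.items():
    rel_battery = 1.0 / rel_power  # relative display-limited battery life
    print(f"{name}: power x{rel_power:.2f}, battery life x{rel_battery:.2f}")
```

So a white document page can cut display-limited battery life to roughly a third of the LCD baseline, which is why dark themes are often recommended on OLED devices.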
Screen flicker
Many OLEDs use pulse-width modulation (PWM) to display colour/brightness gradations. For example, a pixel instructed to display gray will flicker on and off rapidly, creating a subtle strobe effect. The alternative way to decrease brightness is to decrease power to the display, which eliminates screen flicker to the detriment of colour balance, which deteriorates as brightness decreases. However, PWM gradations may be more harmful to eye health.
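The PWM dimming described above can be sketched simply: the pixel only ever switches between fully on and fully off, and the perceived brightness is the peak luminance multiplied by the duty cycle. A minimal illustration (the 400 cd/m2 peak is an assumed example value):

```python
def pwm_luminance(peak_cd_m2: float, duty_cycle: float) -> float:
    """Average (perceived) luminance of a pixel strobed at the given duty cycle."""
    assert 0.0 <= duty_cycle <= 1.0
    return peak_cd_m2 * duty_cycle

# A "gray" pixel at 25% duty cycle on an assumed 400 cd/m2 panel averages
# 100 cd/m2, even though at any instant it is either at full brightness or dark.
print(pwm_luminance(400, 0.25))  # -> 100.0
```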
Manufacturers and commercial uses
Almost all OLED manufacturers rely on material deposition equipment that is only made by a handful of companies, the most notable being Canon Tokki, a unit of Canon Inc., although Ulvac and Sunic System are also notable. Canon Tokki is reported to have a near-monopoly on the giant OLED-manufacturing vacuum machines, notable for their size. Apple relied solely on Canon Tokki in its bid to introduce its own OLED displays for the iPhones released in 2017. The electroluminescent materials needed for OLEDs are also made by a handful of companies, among them Merck, Universal Display Corporation and LG Chem. The machines that apply these materials can operate continuously for 5–6 days and can process a mother substrate in 5 minutes.
OLED displays are mainly made by Samsung Display and LG Display. OLED technology is used in commercial applications such as displays for mobile phones and portable digital media players, car radios and digital cameras among others, as well as lighting. Such portable display applications favor the high light output of OLEDs for readability in sunlight and their low power drain. Portable displays are also used intermittently, so the lower lifespan of organic displays is less of an issue. Prototypes have been made of flexible and rollable displays which use OLEDs' unique characteristics. Applications in flexible signs and lighting are also being developed. OLED lighting offers several advantages over LED lighting, such as higher quality illumination, a more diffuse light source, and panel shapes. Philips Lighting has made OLED lighting samples under the brand name "Lumiblade" available online, and Novaled AG, based in Dresden, Germany, introduced a line of OLED desk lamps called "Victory" in September 2011.
Nokia introduced OLED mobile phones including the N85 and the N86 8MP, both of which feature an AMOLED display. OLEDs have also been used in most Motorola and Samsung color cell phones, as well as some HTC, LG and Sony Ericsson models. OLED technology can also be found in digital media players such as the Creative ZEN V, the iriver clix, the Zune HD and the Sony Walkman X Series.
The Google and HTC Nexus One smartphone includes an AMOLED screen, as does HTC's own Desire and Legend phones. However, due to supply shortages of the Samsung-produced displays, certain HTC models will use Sony's SLCD displays in the future, while the Google and Samsung Nexus S smartphone will use "Super Clear LCD" instead in some countries.
OLED displays were used in watches made by Fossil (JR-9465) and Diesel (DZ-7086). Other manufacturers of OLED panels include Anwell Technologies Limited (Hong Kong), AU Optronics (Taiwan), Chimei Innolux Corporation (Taiwan), LG (Korea), and others.
DuPont stated in a press release in May 2010, that they can produce a 50-inch OLED TV in two minutes with a new printing technology. If this can be scaled up in terms of manufacturing, then the total cost of OLED TVs would be greatly reduced. DuPont also states that OLED TVs made with this less expensive technology can last up to 15 years if left on for a normal eight-hour day.
The use of OLEDs may be subject to patents held by Universal Display Corporation, Eastman Kodak, DuPont, General Electric, Royal Philips Electronics, numerous universities and others. By 2008, thousands of patents associated with OLEDs came from larger corporations and smaller technology companies.
Flexible OLED displays have been used by manufacturers to create curved displays such as the Galaxy S7 Edge, but these devices could not be flexed by users. Samsung demonstrated a roll-out display in 2016.
On 31 October 2018, Royole, a Chinese electronics company, unveiled the world's first foldable screen phone featuring a flexible OLED display. On 20 February 2019, Samsung announced the Samsung Galaxy Fold with a foldable OLED display from Samsung Display, its majority-owned subsidiary. At MWC 2019 on 25 February 2019, Huawei announced the Huawei Mate X featuring a foldable OLED display from BOE.
The 2010s also saw the wide adoption of tracking gate-line in pixel (TGP), which moves the driving circuitry from the borders of the display to in between the display's pixels, allowing for narrow bezels.
In 2023, the German startup Inuru announced plans to manufacture low-cost printed OLEDs for packaging and fashion applications.
Fashion
Textiles incorporating OLEDs are an innovation in the fashion world, offering a way to integrate lighting and bring inert objects to a whole new level of fashion. The hope is to combine the comfort and low-cost properties of textiles with the illumination and low energy consumption of OLEDs. Although this scenario of illuminated clothing is highly plausible, challenges remain, including the lifetime of the OLED, the rigidness of flexible foil substrates, and the lack of research into making more fabric-like photonic textiles.
Automotive
Automakers using OLEDs are still rare and limited to the high end of the market. For example, the 2010 Lexus RX features an OLED display instead of a thin-film transistor (TFT-LCD) display.
The Japanese manufacturer Pioneer Electronic Corporation produced the first car stereos with a monochrome OLED display, which was also the world's first OLED product. The Aston Martin DB9 incorporated the world's first automotive OLED display, manufactured by Yazaki, followed by the 2004 Jeep Grand Cherokee and the Chevrolet Corvette C6. The 2015 Hyundai Sonata and Kia Soul EV use a 3.5-inch white PMOLED display.
Company-specific applications
Samsung
By 2004, Samsung Display, a subsidiary of South Korea's largest conglomerate and a former Samsung-NEC joint venture, was the world's largest OLED manufacturer, producing 40% of the OLED displays made in the world, and as of 2010 had a 98% share of the global AMOLED market. The company leads the OLED industry, having generated $100.2 million of the total $475 million in revenues in the global OLED market in 2006. As of 2006, it held more than 600 American patents and more than 2,800 international patents, making it the largest owner of AMOLED technology patents.
Samsung SDI announced in 2005 the world's largest OLED TV at the time. This OLED featured the highest resolution at the time, of 6.22 million pixels. In addition, the company adopted active-matrix technology for its low power consumption and high-resolution qualities. This was exceeded in January 2008, when Samsung showcased the world's largest and thinnest OLED TV at the time, at 31 inches (78 cm) and 4.3 mm.
In May 2008, Samsung unveiled an ultra-thin 12.1-inch (30 cm) laptop OLED display concept, with a 1,280×768 resolution and an infinite contrast ratio. According to Woo Jong Lee, Vice President of the Mobile Display Marketing Team at Samsung SDI, the company expected OLED displays to be used in notebook PCs as soon as 2010.
In October 2008, Samsung showcased the world's thinnest OLED display, also the first to be "flappable" and bendable. It measures just 0.05 mm (thinner than paper), yet a Samsung staff member said that it is "technically possible to make the panel thinner". To achieve this thickness, Samsung etched an OLED panel that uses a normal glass substrate. The drive circuit was formed by low-temperature polysilicon TFTs, and low-molecular organic EL materials were employed. The pixel count of the display is 480×272, the contrast ratio is 100,000:1, and the luminance is 200 cd/m2. The colour reproduction range is 100% of the NTSC standard.
At the Consumer Electronics Show (CES) in January 2010, Samsung demonstrated a laptop computer with a large, transparent OLED display featuring up to 40% transparency and an animated OLED display in a photo ID card.
Samsung's 2010 AMOLED smartphones used their Super AMOLED trademark, with the Samsung Wave S8500 and Samsung i9000 Galaxy S being launched in June 2010. In January 2011, Samsung announced their Super AMOLED Plus displays, which offer several advances over the older Super AMOLED displays: real stripe matrix (50% more sub pixels), thinner form factor, brighter image and an 18% reduction in energy consumption.
At CES 2012, Samsung introduced the first 55" TV screen that uses Super OLED technology.
On 8 January 2013, at CES Samsung unveiled a unique curved 4K Ultra S9 OLED television, which they state provides an "IMAX-like experience" for viewers.
On 13 August 2013, Samsung announced availability of a 55-inch curved OLED TV (model KN55S9C) in the US at a price point of $8999.99.
On 6 September 2013, Samsung launched its 55-inch curved OLED TV (model KE55S9C) in the United Kingdom with John Lewis.
Samsung introduced the Galaxy Round smartphone in the Korean market in October 2013. The device features a 1080p screen that curves on the vertical axis in a rounded case. The corporation has promoted the following advantages: a new feature called "Round Interaction" that allows users to look at information by tilting the handset on a flat surface with the screen off, and the feel of one continuous transition when the user switches between home screens.
Samsung released a new line of OLED TVs in 2022, its first using the technology since 2013. They use panels sourced from Samsung Display; previously, LG was the sole manufacturer of OLED panels for TVs.
Sony
The Sony CLIÉ PEG-VZ90 was released in 2004, being the first PDA to feature an OLED screen. Other Sony products to feature OLED screens include the MZ-RH1 portable minidisc recorder, released in 2006 and the Walkman X Series.
At the 2007 Las Vegas Consumer Electronics Show (CES), Sony showcased two OLED TV models, one with a resolution of 960×540 and another with full HD resolution. Both claimed 1,000,000:1 contrast ratios and total thicknesses (including bezels) of 5 mm. In April 2007, Sony announced it would manufacture 1,000 OLED TVs per month for market-testing purposes. On 1 October 2007, Sony announced that the XEL-1 was the first commercial OLED TV; it was released in Japan in December 2007.
In May 2007, Sony publicly unveiled a video of a flexible OLED screen only 0.3 mm thick. At the Display 2008 exhibition, Sony demonstrated a 0.2 mm-thick display with a resolution of 320×200 pixels and a 0.3 mm-thick display with a 960×540 pixel resolution, one-tenth the thickness of the XEL-1.
In July 2008, a Japanese government body, the New Energy and Industrial Technology Development Organization (NEDO), said it would fund a joint project of leading firms to develop a key technology for producing large, energy-saving organic displays. The project involves one laboratory and 10 companies, including Sony Corp. NEDO said the project was aimed at developing a core technology to mass-produce 40-inch or larger OLED displays in the late 2010s.
In October 2008, Sony published results of research it carried out with the Max Planck Institute over the possibility of mass-market bending displays, which could replace rigid LCDs and plasma screens. Eventually, bendable, see-through displays could be stacked to produce 3D images with much greater contrast ratios and viewing angles than existing products.
Sony exhibited a 24.5" (62cm) prototype OLED 3D television during the Consumer Electronics Show in January 2010.
In January 2011, Sony announced the PlayStation Vita handheld game console (the successor to the PSP) will feature a 5-inch OLED screen.
On 17 February 2011, Sony announced its 25" (63.5cm) OLED Professional Reference Monitor aimed at the Cinema and high end Drama Post Production market.
On 25 June 2012, Sony and Panasonic announced a joint venture for creating low cost mass production OLED televisions by 2013.
Sony unveiled its first OLED TV since 2008 at CES 2017 called A1E. It revealed two other models in 2018 one at CES 2018 called A8F and other a Master Series TV called A9F. At CES 2019 they unveiled another two models one the A8G and the other another Bravia Series TV called A9G. Then, at CES 2020, they revealed the A8H, which was effectively an A9G in terms of picture quality but with some compromises due to its lower cost. At the same event, they also revealed a 48-inch version of the A9G, making this its smallest OLED TV since the XEL-1.
LG
On 9 April 2009, LG acquired Kodak's OLED business and began using white OLED technology. As of 2010, LG Electronics produced one model of OLED television, the 15EL9500, and had announced an OLED 3D television for March 2011. On 26 December 2011, LG officially announced the "world's largest OLED panel" and featured it at CES 2012. In late 2012, LG announced the launch of the 55EM9600 OLED television in Australia.
In January 2015, LG Display signed a long-term agreement with Universal Display Corporation for the supply of OLED materials and the right to use their patented OLED emitters.
As of 2022, LG produces the world's largest OLED TV, at 97 inches.
Mitsubishi
Lumiotec, a joint venture of Mitsubishi Heavy Industries, ROHM, Toppan Printing, and Mitsui & Co., was the first company in the world to develop and sell, beginning in January 2011, mass-produced OLED lighting panels offering high brightness and long lifetime.
On 1 June 2011, Mitsubishi Electric installed a 6-meter OLED 'sphere' in Tokyo's Science Museum.
Recom Group
On 6 January 2011, Los Angeles-based technology company Recom Group introduced the first small-screen consumer application of the OLED at the Consumer Electronics Show in Las Vegas: a 2.8" (7 cm) OLED display used as a wearable video name tag. At the Consumer Electronics Show in 2012, Recom Group introduced the world's first video mic flag, incorporating three 2.8" (7 cm) OLED displays on a standard broadcaster's mic flag. The video mic flag allowed video content and advertising to be shown on a broadcaster's standard mic flag.
Dell
On 6 January 2016, Dell announced the UltraSharp UP3017Q OLED monitor at the Consumer Electronics Show in Las Vegas. The monitor was announced to feature a 4K UHD OLED panel with a 120 Hz refresh rate, 0.1 ms response time, and a contrast ratio of 400,000:1. The monitor was set to sell at a price of $4,999 and release in March 2016, just a few months later. As the end of March rolled around, the monitor was not released to the market, and Dell did not speak on reasons for the delay. Reports suggested that Dell canceled the monitor because the company was unhappy with the image quality of the OLED panel, especially the amount of color drift displayed when the monitor was viewed from the sides. On 13 April 2017, Dell finally released the UP3017Q OLED monitor to the market at a price of $3,499 ($1,500 less than the $4,999 originally announced at CES 2016). In addition to the price drop, the monitor featured a 60 Hz refresh rate and a contrast ratio of 1,000,000:1. As of June 2017, the monitor is no longer available to purchase from Dell's website.
Apple
Apple began using OLED panels in its watches in 2015 and in its laptops in 2016 with the introduction of an OLED Touch Bar to the MacBook Pro. In 2017, Apple announced the introduction of its tenth-anniversary iPhone X with its own optimized OLED display licensed from Universal Display Corporation. With the exception of the iPhone SE line, iPhone XR and iPhone 11, all iPhones released since then have also featured OLED displays. In 2024, Apple announced the 7th-generation iPad Pro, which featured a "tandem OLED" panel in an attempt to increase the panel's brightness.
Nintendo
A third model of Nintendo's Switch, a hybrid gaming system, features an OLED panel in place of the original model's LCD panel. Announced in the summer of 2021, it was released on 8 October 2021.
Research
In 2014, Mitsubishi Chemical Corporation (MCC), a subsidiary of Mitsubishi Chemical Holdings, developed an OLED panel with a 30,000-hour life, twice that of conventional OLED panels.
The search for efficient OLED materials has been extensively supported by simulation methods; it is possible to calculate important properties computationally, independent of experimental input, making materials development cheaper.
On 18 October 2018, Samsung presented their research roadmap at their 2018 Samsung OLED Forum. This included Fingerprint on Display (FoD), Under Panel Sensor (UPS), Haptic on Display (HoD) and Sound on Display (SoD).
Various vendors are also researching cameras placed under OLED displays (under-display cameras). According to IHS Markit, Huawei has partnered with BOE, Oppo with China Star Optoelectronics Technology (CSOT), and Xiaomi with Visionox.
In 2020, researchers at the Queensland University of Technology (QUT) proposed using human hair, a source of carbon and nitrogen, to create OLED displays.
X-ray binary

X-ray binaries are a class of binary stars that are luminous in X-rays.
The X-rays are produced by matter falling from one component, called the donor (usually a relatively common main sequence star), to the other component, called the accretor, which is either a neutron star or black hole.
The infalling matter releases gravitational potential energy, up to 30 percent of its rest mass, as X-rays. (Hydrogen fusion releases only about 0.7 percent of rest mass.) The lifetime and the mass-transfer rate in an X-ray binary depends on the evolutionary status of the donor star, the mass ratio between the stellar components, and their orbital separation.
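The efficiency comparison above can be checked against E = η·m·c². A back-of-envelope sketch using the quoted efficiencies (the 1 kg of accreted matter is purely illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def energy_released(mass_kg: float, efficiency: float) -> float:
    """Energy (J) released when a fraction of rest-mass energy is liberated."""
    return efficiency * mass_kg * C**2

m = 1.0  # 1 kg of infalling hydrogen, purely illustrative
accretion = energy_released(m, 0.30)   # up to ~30% for accretion onto a compact object
fusion = energy_released(m, 0.007)     # ~0.7% for hydrogen fusion
print(f"accretion releases ~{accretion / fusion:.0f}x more energy than fusion")
```

Per unit of accreted mass, accretion onto a neutron star or black hole can thus release roughly forty times more energy than hydrogen fusion, which is why these systems are so X-ray luminous.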
An estimated 10^41 positrons escape per second from a typical low-mass X-ray binary.
Classification
X-ray binaries are further subdivided into several (sometimes overlapping) subclasses that perhaps reflect the underlying physics better. Note that the classification by mass (high, intermediate, low) refers to the optically visible donor, not to the compact X-ray-emitting accretor.
Low-mass X-ray binaries (LMXBs)
Soft X-ray transients (SXTs)
Symbiotic X-ray binaries
Super soft X-ray sources or Super soft sources (SSXs), (SSXB)
Accreting millisecond X-ray pulsars (AMXPs)
Intermediate-mass X-ray binaries (IMXBs)
Ultracompact X-ray binaries (UCXBs)
High-mass X-ray binaries (HMXBs)
Be/X-ray binaries (BeXRBs)
Supergiant X-ray binaries (SGXBs)
Supergiant Fast X-ray Transients (SFXTs)
Others
X-ray bursters
X-ray pulsars
Microquasars (radio-jet X-ray binaries that can house either a neutron star or a black hole)
Low-mass X-ray binary
A low-mass X-ray binary (LMXB) is a binary star system where one of the components is either a black hole or neutron star. The other component, a donor, usually fills its Roche lobe and therefore transfers mass to the compact star. In LMXB systems the donor is less massive than the compact object, and can be on the main sequence, a degenerate dwarf (white dwarf), or an evolved star (red giant). Approximately two hundred LMXBs have been detected in the Milky Way, and of these, thirteen LMXBs have been discovered in globular clusters. The Chandra X-ray Observatory has revealed LMXBs in many distant galaxies.
A typical low-mass X-ray binary emits almost all of its radiation in X-rays, and typically less than one percent in visible light, so these systems are among the brightest objects in the X-ray sky but relatively faint in visible light. The apparent magnitude is typically around 15 to 20. The brightest part of the system is the accretion disk around the compact object. The orbital periods of LMXBs range from ten minutes to hundreds of days.
The variability of LMXBs is most commonly observed in the form of X-ray bursts, though some systems appear as X-ray pulsars. The X-ray bursts are produced by thermonuclear explosions triggered by the accretion of hydrogen and helium.
Intermediate-mass X-ray binary
An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate-mass star. Intermediate-mass X-ray binaries are the progenitors of low-mass X-ray binary systems.
High-mass X-ray binary
A high-mass X-ray binary (HMXB) is a binary star system that is luminous in X-rays, and in which the normal stellar component is a massive star: usually an O or B star, a blue supergiant, or in some cases a red supergiant or a Wolf–Rayet star.
The compact, X-ray emitting, component is a neutron star or black hole.
A fraction of the stellar wind of the massive normal star is captured by the compact object, and produces X-rays as it falls onto the compact object.
In a high-mass X-ray binary, the massive star dominates the emission of optical light, while the compact object is the dominant source of X-rays.
The massive stars are very luminous and therefore easily detected.
One of the most famous high-mass X-ray binaries is Cygnus X-1, which was the first identified black hole candidate.
Other HMXBs include Vela X-1 (not to be confused with Vela X), and 4U 1700-37.
The variability of HMXBs is observed in the form of X-ray pulsars rather than X-ray bursters. These X-ray pulsars arise from the accretion of matter magnetically funneled onto the poles of the compact companion. The stellar wind and Roche-lobe overflow of the massive normal star supply matter in such large quantities that the transfer is unstable and short-lived.
Once an HMXB has reached its end, if the orbital period of the binary is less than about a year, it can become a single red giant with a neutron core or a single neutron star. With a longer orbital period, a year or more, the HMXB can become a double neutron star binary if uninterrupted by a supernova.
Microquasar
A microquasar (or radio emitting X-ray binary) is the smaller cousin of a quasar. Microquasars are named after quasars, as they have some common characteristics: strong and variable radio emission, often resolvable as a pair of radio jets, and an accretion disk surrounding a compact object which is either a black hole or a neutron star. In quasars, the black hole is supermassive (millions of solar masses); in microquasars, the mass of the compact object is only a few solar masses. In microquasars, the accreted mass comes from a normal star, and the accretion disk is very luminous in the optical and X-ray regions. Microquasars are sometimes called radio-jet X-ray binaries to distinguish them from other X-ray binaries. A part of the radio emission comes from relativistic jets, often showing apparent superluminal motion.
Microquasars are very important for the study of relativistic jets. The jets are formed close to the compact object, and timescales near the compact object are proportional to the mass of the compact object. Therefore, ordinary quasars take centuries to go through variations a microquasar experiences in one day.
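This mass scaling can be made concrete with a back-of-the-envelope sketch (the black-hole masses below are illustrative assumptions, not values from the text):

```python
# Variability timescales near a compact object scale roughly linearly with
# its mass, so a day of microquasar activity corresponds to a far longer
# interval for a quasar. Masses here are assumed, typical-order values.
SECONDS_PER_DAY = 86400.0
SECONDS_PER_YEAR = 3.156e7

m_microquasar = 10.0    # solar masses (stellar-mass black hole, assumed)
m_quasar = 1.0e6        # solar masses (modest supermassive black hole, assumed)

scale = m_quasar / m_microquasar        # 1e5
quasar_timescale_s = SECONDS_PER_DAY * scale
print(f"{quasar_timescale_s / SECONDS_PER_YEAR:.0f} years")
```

With these assumed masses, one day of microquasar variability maps to roughly 270 years for the quasar, consistent with the "centuries" figure above.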
Noteworthy microquasars include SS 433, in which atomic emission lines are visible from both jets; GRS 1915+105, with an especially high jet velocity; and the very bright Cygnus X-1, detected up to high-energy gamma rays (E > 60 MeV). The extremely high energies of particles emitting in the VHE band might be explained by several mechanisms of particle acceleration (see Fermi acceleration and centrifugal mechanism of acceleration).
| Physical sciences | Stellar astronomy | Astronomy |
191788 | https://en.wikipedia.org/wiki/Algebra%20over%20a%20field | Algebra over a field | In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".
The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
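The nonassociativity of the cross product, and the Jacobi identity it satisfies instead, can be checked numerically. A minimal pure-Python sketch (the helper names are illustrative):

```python
# The vector cross product on R^3: a bilinear, non-associative multiplication
# that satisfies the Jacobi identity instead of associativity.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

x, y, z = (1, 0, 0), (0, 1, 0), (1, 2, 3)

# Non-associativity: (x cross x) cross y = 0, but x cross (x cross y) = -y.
assert cross(cross(x, x), y) == (0, 0, 0)
assert cross(x, cross(x, y)) == (0, -1, 0)

# Jacobi identity: x*(y*z) + y*(z*x) + z*(x*y) = 0.
jacobi = add(add(cross(x, cross(y, z)),
                 cross(y, cross(z, x))),
             cross(z, cross(x, y)))
assert jacobi == (0, 0, 0)
print("cross product: non-associative, Jacobi identity holds")
```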
An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.
Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra.
Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
Definition and motivation
Motivating examples
Definition
Let be a field, and let be a vector space over equipped with an additional binary operation from to , denoted here by (that is, if and are any two elements of , then is an element of that is called the product of and ). Then is an algebra over if the following identities hold for all elements in , and all elements (often called scalars) and in :
Right distributivity:
Left distributivity:
Compatibility with scalars: .
These three axioms are another way of saying that the binary operation is bilinear. An algebra over is sometimes also called a -algebra, and is called the base field of . The binary operation is often referred to as multiplication in . The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra.
When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
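As a sketch of these axioms in practice, the following checks right distributivity, left distributivity, and scalar compatibility for 2×2 real matrices under matrix multiplication (the helper functions are illustrative, not from the text):

```python
# Verifying the three bilinearity axioms for the algebra of 2x2 real matrices.
def mat_mul(x, y):
    (a, b), (c, d) = x
    (e, f), (g, h) = y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def mat_add(x, y):
    return tuple(tuple(p + q for p, q in zip(rx, ry)) for rx, ry in zip(x, y))

def scale(s, x):
    return tuple(tuple(s * p for p in row) for row in x)

x = ((1, 2), (3, 4))
y = ((0, 1), (1, 0))
z = ((2, 0), (0, 5))
a, b = 3, -2

# Right distributivity: (x + y) z = x z + y z
assert mat_mul(mat_add(x, y), z) == mat_add(mat_mul(x, z), mat_mul(y, z))
# Left distributivity: z (x + y) = z x + z y
assert mat_mul(z, mat_add(x, y)) == mat_add(mat_mul(z, x), mat_mul(z, y))
# Compatibility with scalars: (a x)(b y) = (a b)(x y)
assert mat_mul(scale(a, x), scale(b, y)) == scale(a * b, mat_mul(x, y))
print("bilinearity axioms hold")
```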
Basic concepts
Algebra homomorphisms
Given -algebras and , a homomorphism of -algebras or -algebra homomorphism is a -linear map such that for all in . If and are unital, then a homomorphism satisfying is said to be a unital homomorphism. The space of all -algebra homomorphisms between and is frequently written as
A -algebra isomorphism is a bijective -algebra homomorphism.
Subalgebras and ideals
A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L.
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.
A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements.
x + y is in L (L is closed under addition),
cx is in L (L is closed under scalar multiplication),
z · x is in L (L is closed under left multiplication by arbitrary elements).
If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra.
This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).
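A concrete illustration (an assumed example, not from the text): in the algebra of 2×2 real matrices, the matrices whose second column is zero form a left ideal but not a right ideal.

```python
# In the 2x2 real matrix algebra, let L be the subspace of matrices whose
# second column is zero. L is closed under addition, scalar multiplication,
# and left multiplication by arbitrary matrices -- but not right multiplication.
def mat_mul(x, y):
    (a, b), (c, d) = x
    (e, f), (g, h) = y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def in_L(m):
    """Is the second column zero?"""
    return m[0][1] == 0 and m[1][1] == 0

x = ((1, 0), (2, 0))      # an element of L
z = ((5, 6), (7, 8))      # an arbitrary element of the algebra

assert in_L(mat_mul(z, x))       # left multiplication stays in L
assert not in_L(mat_mul(x, z))   # right multiplication may leave L
print("L is a left ideal but not a right ideal")
```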
Extension of scalars
If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product . So if A is an algebra over K, then is an algebra over F.
Kinds of algebras and examples
Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.
Unital algebra
An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.
Zero algebra
An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.
One may define a unital zero algebra by taking the direct sum of modules of a field (or more generally a ring) K and a K-vector space (or module) V, and defining the product of every pair of elements of V to be zero. That is, if a + u and b + v are elements of this algebra (with a, b in K and u, v in V), then (a + u)(b + v) = ab + (av + bu). If is a basis of V, the unital zero algebra is the quotient of the polynomial ring by the ideal generated by the EiEj for every pair .
An example of unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one dimensional real vector space.
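A minimal sketch of the dual numbers as a unital zero algebra, writing each element as a + b·ε with ε² = 0 (the `Dual` class and its method names are illustrative):

```python
# The dual numbers R[eps]/(eps^2): the unital zero R-algebra built from a
# one-dimensional real vector space. Products of "vector parts" vanish.
class Dual:
    def __init__(self, a, b):
        self.a, self.b = a, b  # represents a + b*eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*eps"

eps = Dual(0, 1)
sq = eps * eps
assert (sq.a, sq.b) == (0, 0)        # eps^2 = 0: the zero-algebra product
p = Dual(2, 3) * Dual(4, 5)
assert (p.a, p.b) == (8, 22)         # 2*4, 2*5 + 3*4
```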
These unital zero algebras can be useful more generally, as they allow any general property of algebras to be translated into a property of vector spaces or modules. For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring over a field. The construction of the unital zero algebra over a free R-module extends this theory to a Gröbner basis theory for submodules of a free module. With this extension, a Gröbner basis of a submodule can be computed, without any modification, by any algorithm or software for computing Gröbner bases of ideals.
Associative algebra
Examples of associative algebras include
the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication.
group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication.
the commutative algebra K[x] of all polynomials over K (see polynomial ring).
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative.
Incidence algebras are built on certain partially ordered sets.
algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis.
Non-associative algebra
A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map . The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative".
Examples detailed in the main article include:
Euclidean space R3 with multiplication given by the vector cross product
Octonions
Lie algebras
Jordan algebras
Alternative algebras
Flexible algebras
Power-associative algebras
Algebras and rings
The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism
where Z(A) is the center of A. Since η is a ring homomorphism, either A is the zero ring or η is injective. This definition is equivalent to that above, with scalar multiplication
given by
Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as
for all and . In other words, the following diagram commutes:
Structure coefficients
For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A.
Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws.
Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n^3 structure coefficients c_{i,j,k}, which are scalars.
These structure coefficients determine the multiplication in A via the following rule:
where e_1, ..., e_n form a basis of A.
Note however that several different sets of structure coefficients can give rise to isomorphic algebras.
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written c_{i,j}^k, and their defining rule is written using the Einstein notation as
e_i e_j = c_{i,j}^k e_k.
Applied to vectors written in index notation, this becomes
(xy)^k = c_{i,j}^k x^i y^j.
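As a worked sketch, the complex numbers viewed as a two-dimensional algebra over the reals can be reconstructed from structure coefficients alone (the coefficient table below is the standard one for the basis e1 = 1, e2 = i):

```python
# Multiplication defined purely by structure coefficients.
# c[i][j][k] is the coefficient of e_k in the product e_i e_j, for the
# complex numbers over R with basis e1 = 1, e2 = i.
c = [
    [[1, 0], [0, 1]],    # e1*e1 = e1,  e1*e2 = e2
    [[0, 1], [-1, 0]],   # e2*e1 = e2,  e2*e2 = -e1
]

def multiply(x, y):
    """Bilinear product: (x y)_k = sum over i, j of c[i][j][k] * x_i * y_j."""
    n = len(c)
    return [sum(c[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

# (1 + 2i)(3 + 4i) = 3 + 4i + 6i + 8i^2 = -5 + 10i
assert multiply([1, 2], [3, 4]) == [-5, 10]
```

Changing the single entry for e2·e2 from -e1 to e1 would instead give the split-complex numbers, illustrating how different coefficient tables yield different (and sometimes non-isomorphic) algebras.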
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
Classification of low-dimensional unital associative algebras over the complex numbers
Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.
There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element,
It remains to specify
for the first algebra,
for the second algebra.
There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify
for the first algebra,
for the second algebra,
for the third algebra,
for the fourth algebra,
for the fifth algebra.
The fourth of these algebras is non-commutative, and the others are commutative.
Generalization: algebra over a ring
In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).
Associative algebras over rings
A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to , the direct product of two quaternion algebras. The center of that ring is , and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional -algebra.
In commutative algebra, if A is a commutative ring, then any unital ring homomorphism defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural -module structure, since one can take the unique homomorphism . On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give to every ring a structure that behaves like an algebra over a field.
| Mathematics | Abstract algebra | null |
191884 | https://en.wikipedia.org/wiki/Headphones | Headphones | Headphones are a pair of small loudspeaker drivers worn on or around the head over a user's ears. They are electroacoustic transducers, which convert an electrical signal to a corresponding sound. Headphones let a single user listen to an audio source privately, in contrast to a loudspeaker, which emits sound into the open air for anyone nearby to hear. Headphones are also known as earphones or, colloquially, cans. Circumaural (around the ear) and supra-aural (over the ear) headphones use a band over the top of the head to hold the drivers in place. Another type, known as earbuds or earpieces, consists of individual units that plug into the user's ear canal; within that category have been developed cordless air buds using wireless technology. A third type are bone conduction headphones, which typically wrap around the back of the head and rest in front of the ear canal, leaving the ear canal open. In the context of telecommunication, a headset is a combination of a headphone and microphone.
Headphones connect to a signal source such as an audio amplifier, radio, CD player, portable media player, mobile phone, video game console, or electronic musical instrument, either directly using a cord, or using wireless technology such as Bluetooth, DECT or FM radio. The first headphones were developed in the late 19th century for use by switchboard operators, to keep their hands free. Initially the audio quality was mediocre; a step forward was the invention of high-fidelity headphones.
Headphones exhibit a range of different audio reproduction quality capabilities. Headsets designed for telephone use typically cannot reproduce sound with the high fidelity of expensive units designed for music listening by audiophiles. Headphones that use cables typically have either a 6.35 mm or 3.5 mm phone jack for plugging the headphones into the audio source. Some headphones are wireless, using Bluetooth connectivity to receive the audio signal by radio waves from source devices like cellphones and digital players. As a result of the Walkman effect, beginning in the 1980s, headphones started to be used in public places such as sidewalks, grocery stores, and public transit. Headphones are also used by people in various professional contexts, such as audio engineers mixing sound for live concerts or sound recordings and DJs, who use headphones to cue up the next song without the audience hearing, aircraft pilots and call center employees. The latter two types of employees use headphones with an integrated microphone.
History
Headphones grew out of the need to free up a person's hands when operating a telephone. By the 1880s, soon after the invention of the telephone, telephone switchboard operators began to use head apparatuses to mount the telephone receiver. The receiver was mounted on the head by a clamp which held it next to the ear. The head mount freed the switchboard operator's hands, so that they could easily connect the wires of the telephone callers and receivers. The head-mounted telephone receiver in the singular form was called a headphone. These head-mounted phone receivers, unlike modern headphones, only had one earpiece.
By the 1890s a listening device with two earpieces was developed by the British company Electrophone. The device created a listening system through the phone lines that allowed the customer to connect into live feeds of performances at theaters and opera houses across London. Subscribers to the service could listen to the performance through a pair of massive earphones that connected below the chin and were held by a long rod.
French engineer Ernest Mercadier in 1891 patented a set of in-ear headphones. The German company Siemens Brothers at this time was also selling headpieces for telephone operators which had two earpieces, although placed outside the ear. The Siemens Brothers headpieces looked similar to modern headphones. The majority of headgear used by telephone operators continued to have only one earpiece.
Headphones appeared in the emerging field of wireless telegraphy, which was the beginning stage of radio broadcasting. Some early wireless telegraph developers chose to use the telephone receiver's speaker as the detector for the electrical signal of the wireless receiving circuit. By 1902 wireless telegraph innovators, such as Lee de Forest, were using two jointly head-mounted telephone receivers to hear the signal of the receiving circuit. The two head-mounted telephone receivers were called in the singular form head telephones. By 1908 the headpiece began to be written simply as head phones, and a year later the compound word headphones began to be used.
One of the earliest companies to make headphones for wireless operators was the Holtzer-Cabot Company in 1909. They were also makers of head receivers for telephone operators and normal telephone receivers for the home. Another early manufacturer of headphones was Nathaniel Baldwin. He was the first major supplier of headsets to the U.S. Navy. In 1910, motivated by his inability to hear sermons during Sunday service, he invented a prototype telephone headset. He offered it for testing to the navy, which promptly ordered 100 of them. Wireless Specialty Apparatus Co., in partnership with Baldwin Radio Company, set up a manufacturing facility in Utah to fulfill orders. These early headphones used moving iron drivers, with either single-ended or balanced armatures. The common single-ended type used voice coils wound around the poles of a permanent magnet, which were positioned close to a flexible steel diaphragm. The audio current through the coils varied the magnetic field of the magnet, exerting a varying force on the diaphragm, causing it to vibrate, creating sound waves. The requirement for high sensitivity meant that no damping was used, so the frequency response of the diaphragm had large peaks due to resonance, resulting in poor sound quality. These early models lacked padding, and were often uncomfortable to wear for long periods. Their impedance varied; headphones used in telegraph and telephone work had an impedance of 75 ohms. Those used with early wireless radio had more turns of finer wire to increase sensitivity. Impedances of 1,000 to 2,000 ohms were common, which suited both crystal sets and triode receivers. Some very sensitive headphones, such as those manufactured by Brandes around 1919, were commonly used for early radio work.
In 1958, John C. Koss, an audiophile and jazz musician from Milwaukee, produced the first stereo headphones.
Smaller earbud-type earpieces, which plugged into the user's ear canal, were first developed for hearing aids. They became widely used with transistor radios, which commercially appeared in 1954 with the introduction of the Regency TR-1. The most popular audio device in history, the transistor radio changed listening habits, allowing people to listen to the radio anywhere. The earbud uses either a moving iron driver or a piezoelectric crystal to produce sound. The 3.5 mm radio and phone connector, which is the most commonly used in portable applications today, has been used at least since the Sony EFM-117J transistor radio, which was released in 1964. Its popularity was reinforced by its use on the Walkman portable tape player in 1979.
Applications
Wired headphones may be used with stationary CD and DVD players, home theater, personal computers, or portable devices (e.g., digital audio player/MP3 player, mobile phone), as long as these devices are equipped with a headphone jack. Cordless headphones are not connected to their source by a cable. Instead, they receive a radio or infrared signal encoded using a radio or infrared transmission link, such as FM, Bluetooth or Wi-Fi. These are battery-powered receiver systems, of which the headphone is only a component. Cordless headphones are used with events such as a silent disco.
In the professional audio sector, headphones are used in live situations by disc jockeys with a DJ mixer, and sound engineers for monitoring signal sources. In radio studios, DJs use a pair of headphones when talking to the microphone while the speakers are turned off to eliminate acoustic feedback while monitoring their own voice. In studio recordings, musicians and singers use headphones to play or sing along to a backing track or band. In military applications, audio signals of many varieties are monitored using headphones.
Wired headphones are attached to an audio source by a cable. The most common connectors are 6.35 mm (¼ inch) and 3.5 mm phone connectors. The larger 6.35 mm connector is more common on fixed location home or professional equipment. The 3.5 mm connector remains the most widely used connector for portable applications today. Adapters are available for converting between 6.35 mm and 3.5 mm devices.
As active components, wireless headphones tend to cost more, since they need internal hardware such as a battery, a charging controller, a speaker driver, and a wireless transceiver; wired headphones are passive components that leave driving the speakers to the audio source.
Some headphone cords are equipped with a serial potentiometer for volume control.
Wired headphones may be equipped with a non-detachable cable or a detachable auxiliary male-to-male plug. Some have two ports that allow connecting a second pair of wired headphones in a parallel circuit, which splits the audio signal to share with another participant, and can also be used to hear audio from two inputs simultaneously. An external audio splitter can retrofit this ability.
Applications for audiometric testing
Various types of specially designed headphones or earphones are also used to evaluate the status of the auditory system in the field of audiology for establishing hearing thresholds, medically diagnosing hearing loss, identifying other hearing related disease, and monitoring hearing status in occupational hearing conservation programs. Specific models of headphones have been adopted as the standard due to the ease of calibration and ability to compare results between testing facilities.
Supra-aural style headphones are historically the most commonly used in audiology as they are the easiest to calibrate and were considered the standard for many years. Commonly used models are the Telephonics Dynamic Headphone (TDH) 39, TDH-49, and TDH-50. In-the-ear or insert style earphones are used more commonly today as they provide higher levels of interaural attenuation, introduce less variability when testing 6,000 and 8,000 Hz, and avoid testing issues resulting from collapsed ear canals. A commonly used model of insert earphone is the Etymotic Research ER-3A. Circum-aural earphones are also used to establish hearing thresholds in the extended high frequency range (8,000 Hz to 20,000 Hz). Along with Etymotic Research ER-2A insert earphones, the Sennheiser HDA300 and Koss HV/1A circum-aural earphones are the only models that have reference equivalent threshold sound pressure level values for the extended high frequency range as described by ANSI standards.
Audiometers and headphones must be calibrated together. During the calibration process, the output signal from the audiometer to the headphones is measured with a sound level meter to ensure that the signal is accurate to the reading on the audiometer for sound pressure level and frequency. Calibration is done with the earphones in an acoustic coupler that is intended to mimic the transfer function of the outer ear. Because specific headphones are used in the initial audiometer calibration process, they cannot be replaced with any other set of headphones, even from the same make and model.
Electrical characteristics
Electrical characteristics of dynamic loudspeakers may be readily applied to headphones, because most headphones are small dynamic loudspeakers.
Impedance
Headphones are available with high or low impedance (typically measured at 1 kHz). Low-impedance headphones are in the range 16 to 32 ohms and high-impedance headphones are about 100–600 ohms. As the impedance of a pair of headphones increases, more voltage (at a given current) is required to drive it, and the loudness of the headphones for a given voltage decreases. In recent years, impedance of newer headphones has generally decreased to accommodate lower voltages available on battery powered CMOS-based portable electronics. This has resulted in headphones that can be more efficiently driven by battery-powered electronics. Consequently, newer amplifiers are based on designs with relatively low output impedance.
The impedance of headphones is of concern because of the output limitations of amplifiers. A modern pair of headphones is driven by an amplifier, with lower impedance headphones presenting a larger load. Amplifiers are not ideal; they also have some output impedance that limits the amount of power they can provide. To ensure an even frequency response, adequate damping factor, and undistorted sound, an amplifier should have an output impedance less than 1/8 that of the headphones it is driving (and ideally, as low as possible). If output impedance is large compared to the impedance of the headphones, significantly higher distortion is present. Therefore, lower impedance headphones tend to be louder and more efficient, but also demand a more capable amplifier. Higher impedance headphones are more tolerant of amplifier limitations, but produce less volume for a given output level.
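The 1/8 guideline and the loading effect can be sketched numerically. The amplifier's output impedance and the headphone form a voltage divider, so a high source impedance costs level (and, because a real headphone's impedance varies with frequency, costs it unevenly, colouring the response). A minimal Python sketch with hypothetical impedance values:

```python
import math

def level_drop_db(z_out: float, z_load: float) -> float:
    """Level lost across the amplifier's output impedance, in dB.

    Source and load form a voltage divider: only the fraction
    z_load / (z_out + z_load) of the open-circuit voltage appears
    across the headphone.
    """
    return -20 * math.log10(z_load / (z_out + z_load))

# 1/8 rule: 32-ohm headphones suggest an output impedance of 4 ohms or less.
print(round(level_drop_db(4, 32), 2))    # about 1 dB lost
print(round(level_drop_db(120, 32), 2))  # high source impedance: heavy loss
```

With a frequency-dependent load (say 32 ohms at 1 kHz but 60 ohms at resonance), the two cases above would lose different amounts at different frequencies, which is why a low output impedance keeps the frequency response even.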
Historically, many headphones had relatively high impedance, often over 500 ohms so they could operate well with high-impedance tube amplifiers. In contrast, modern transistor amplifiers can have very low output impedance, enabling lower-impedance headphones. This means that older audio amplifiers or stereos often produce poor-quality output on some modern, low-impedance headphones. In this case, an external headphone amplifier may be beneficial.
Sensitivity
Sensitivity is a measure of how effectively an earpiece converts an incoming electrical signal into an audible sound. It thus indicates how loud the headphones are for a given electrical drive level. It can be measured in decibels of sound pressure level per milliwatt (dB (SPL)/mW) or decibels of sound pressure level per volt (dB (SPL)/V). Both definitions are widely used, often interchangeably. As the output voltage (but not power) of a headphone amplifier is essentially constant for most common headphones, a dB/mW rating is often more useful when converted into dB/V using Ohm's law: dB(SPL)/V = dB(SPL)/mW + 10 log10(1000/R), where R is the headphone impedance in ohms (one volt across R ohms dissipates 1000/R milliwatts).
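Assuming the conversion dB(SPL)/V = dB(SPL)/mW + 10·log10(1000/R), which follows from the fact that one volt across R ohms dissipates 1000/R milliwatts, a short sketch with hypothetical ratings shows how impedance shifts loudness per volt:

```python
import math

def dbmw_to_dbv(sens_db_mw: float, impedance_ohm: float) -> float:
    """Convert sensitivity from dB(SPL)/mW to dB(SPL)/V.

    One volt RMS across R ohms dissipates 1000/R milliwatts,
    so the offset is 10 * log10(1000 / R).
    """
    return sens_db_mw + 10 * math.log10(1000 / impedance_ohm)

# Two hypothetical headphones, both rated 100 dB(SPL)/mW:
print(round(dbmw_to_dbv(100, 32), 1))   # low impedance: louder per volt
print(round(dbmw_to_dbv(100, 300), 1))  # high impedance: quieter per volt
```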
Once the sensitivity per volt is known, the maximum volume for a pair of headphones can be easily calculated from the maximum amplifier output voltage. For example, for a headphone with a sensitivity of 100 dB (SPL)/V, an amplifier with an output of 1 volt root mean square (RMS) produces a maximum volume of 100 dB.
Pairing high-sensitivity headphones with power amplifiers can produce dangerously high volumes and damage headphones. The maximum sound pressure level is a matter of preference, with some sources recommending no higher than 110 to 120 dB. In contrast, the American Occupational Safety and Health Administration recommends an average SPL of no more than 85 dB(A) to avoid long-term hearing loss, while the European Union standard EN 50332-1:2013 recommends that volumes above 85 dB(A) include a warning, with an absolute maximum volume (defined using 40–4,000 Hz noise) of no more than 100 dB to avoid accidental hearing damage. Using this standard, headphones with sensitivities of 90, 100 and 110 dB (SPL)/V should be driven by an amplifier capable of no more than 3.162, 1.0 and 0.3162 volts RMS at maximum volume setting, respectively, to reduce the risk of hearing damage.
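The voltage figures above follow from solving sensitivity + 20·log10(V) = limit for V; a minimal sketch (the EN 50332-style 100 dB ceiling is taken as the default):

```python
def max_safe_voltage(sens_db_v: float, limit_db_spl: float = 100.0) -> float:
    """RMS voltage at which headphones of the given dB(SPL)/V
    sensitivity reach the limit SPL (100 dB ceiling assumed)."""
    return 10 ** ((limit_db_spl - sens_db_v) / 20)

for s in (90, 100, 110):
    print(s, round(max_safe_voltage(s), 4))  # 3.1623, 1.0, 0.3162
```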
The sensitivity of headphones is usually between about 80 and 125 dB/mW and usually measured at 1 kHz.
Specifications
Headphone size can affect the balance between fidelity and portability. Generally, headphone form factors can be divided into four separate categories: circumaural (over-ear), supra-aural (on-ear), earbud and in-ear.
Connectivity
Wired
Wired headphones make a direct electrical connection to the source device using a cable, typically connected with a headphone jack.
Wireless
Modern wireless or cordless earphones have no cord connecting the two earphones to the source device or to each other; they receive audio by means of a wireless technology such as Bluetooth. In historical usage, 'wireless' referred to a connection to a radio receiver, which was known as a wireless.
On some models, both audio streams are transmitted to one earphone, which forwards one stream to the other earphone. On other models, each earphone receives its audio stream directly from the source device. The former arrangement has the advantage of being compatible with legacy systems, while the latter avoids the extra power drain in an earphone that would otherwise have to forward an audio stream.
When the connection between the two earphones is also wireless, the arrangement may be referred to as true wireless stereo (TWS). Such designs can offer longer battery life and transmit the left and right channels in full, so that no part of the source signal is lost if only one earphone is worn.
Ear adaption
Circumaural
Circumaural headphones (sometimes called full size headphones or over-ear headphones) have circular or ellipsoid earpads that encompass the ears. Because these headphones completely surround the ear, circumaural headphones can be designed to fully seal against the head to attenuate external noise. Because of their size, circumaural headphones can be heavy, and ergonomic headband and earpad design is required to reduce discomfort resulting from the weight. These are commonly used by drummers in recording.
Supra-aural
Supra-aural headphones or on-ear headphones have pads that press against the ears, rather than around them. They were commonly bundled with personal stereos during the 1980s. This type of headphone generally tends to be smaller and lighter than circumaural headphones, resulting in less attenuation of outside noise. Supra-aural headphones can also lead to discomfort due to the pressure on the ear as compared to circumaural headphones that sit around the ear. Comfort may vary due to the earcup material.
Ear-fitting headphones
Earphones
Earphones are very small headphones that are fitted directly in the outer ear, facing but not inserted in the ear canal. Earphones are portable and convenient, but many people consider them uncomfortable. They provide hardly any acoustic isolation and leave room for ambient noise to seep in; users may turn up the volume dangerously high to compensate, at the risk of causing hearing loss. On the other hand, they let the user be better aware of their surroundings. Since the early days of the transistor radio, earphones have commonly been bundled with personal music devices. They are sometimes sold with foam or rubber pads for comfort. (The term earbuds, in use since at least 1984, did not peak in popularity until after 2001, with the success of Apple's MP3 player.)
In-ear headphones
In-ear headphones, also known as in-ear monitors (IEMs) or canalphones, are small headphones with similar portability to earbuds that are inserted in the ear canal itself. IEMs are higher-quality in-ear headphones and are used by audio engineers and musicians as well as audiophiles.
The outer shells of in-ear headphones are made up of a variety of materials, such as plastic, aluminum, ceramic and other metal alloys. Because in-ear headphones engage the ear canal, they can be prone to sliding out, and they block out much environmental noise. Lack of sound from the environment can be a problem when sound is a necessary cue for safety or other reasons, as when walking, driving, or riding near or in vehicular traffic. Some in-ear headphones utilize built-in microphones to allow some outside sound to be heard when desired.
Generic or custom-fitting ear canal plugs are made from silicone rubber, elastomer, or foam. Such plugs in lower-end devices may be interchangeable, which increases the risk of them falling off and getting lodged in the ear canal. Custom in-ear headphones use castings of the ear canal to create custom-molded plugs that provide added comfort and noise isolation.
Some wireless earphones include a charging case.
Open- or closed-back
Both circumaural and supra-aural headphones can be further differentiated by the type of earcups:
Open-back headphones have the back of the earcups open. This leaks more sound out of the headphone and also lets more ambient sound in, but gives a more natural or speaker-like presentation, due to including sounds from the environment.
Semi-open headphones have a design that can be considered a compromise between open-back and closed-back headphones. Some believe the term "semi-open" exists purely for marketing purposes; there is no exact definition of a semi-open headphone. Where the open-back approach has hardly any measure to block sound at the outer side of the diaphragm and the closed-back approach has a genuinely closed chamber at the outer side of the diaphragm, a semi-open headphone can have a chamber that partially blocks sound while letting some through via openings or vents.
Closed-back (or sealed) styles have the back of the earcups closed. They usually block some of the ambient noise. Closed-back headphones usually can produce stronger low frequencies than open-back headphones.
Headset
A headset is a headphone combined with a microphone. Headsets provide the equivalent functionality of a telephone handset with hands-free operation. Among applications for headsets, besides telephone use, are aviation, theatre or television studio intercom systems, and console or PC gaming. Headsets are made with either a single-earpiece (mono) or a double-earpiece (mono to both ears or stereo). The microphone arm of headsets is either an external microphone type where the microphone is held in front of the user's mouth, or a voicetube type where the microphone is housed in the earpiece and speech reaches it by means of a hollow tube.
Telephone headsets
Telephone headsets connect to a fixed-line telephone system. A telephone headset functions by replacing the handset of a telephone. Headsets for standard corded telephones are fitted with a standard 4P4C connector, commonly called an RJ-9 connector. Headsets are also available with 2.5 mm jack sockets for many DECT phones and other applications. Cordless Bluetooth headsets are available, and often used with mobile telephones. Headsets are widely used for telephone-intensive jobs, in particular by call centre workers. They are also used by anyone wishing to hold telephone conversations with both hands free.
For older models of telephones, the headset microphone impedance is different from that of the original handset, requiring a telephone amplifier for the telephone headset. A telephone amplifier provides basic pin-alignment similar to a telephone headset adaptor, but it also offers sound amplification for the microphone as well as the loudspeakers. Most models of telephone amplifiers offer volume control for loudspeaker as well as microphone, mute function and switching between headset and handset. Telephone amplifiers are powered by batteries or AC adaptors.
Communication headsets
Communication headsets are used for two-way communication and typically consist of a headphone and attached microphone. Such headsets are used in a variety of professions such as aviation, the military, sports, music, and many service-oriented sectors. They come in all shapes and sizes, depending on the use, the required noise attenuation, and the fidelity of communication needed.
Ambient noise reduction
Unwanted sound from the environment can be reduced by excluding sound from the ear by passive noise isolation, or, often in conjunction with isolation, by active noise cancellation.
Passive noise isolation is essentially using the body of the earphone, either over or in the ear, as a passive earplug that simply blocks out sound. The headphone types that provide most attenuation are in-ear canal headphones and closed-back headphones, both circumaural and supra-aural. Open-back and earbud headphones provide some passive noise isolation, but much less than the others. Typical closed-back headphones block 8 to 12 dB, and in-ears anywhere from 10 to 15 dB. Some models have been designed specifically for drummers, letting the drummer monitor the recorded sound while blocking as much direct sound from the drums as possible. Such headphones claim to reduce ambient noise by around 25 dB.
Active noise-cancelling headphones use a microphone, amplifier, and speaker to pick up, amplify, and play ambient noise in phase-reversed form; this to some extent cancels out unwanted noise from the environment without affecting the desired sound source, which is not picked up and reversed by the microphone. They require a power source, usually a battery, to drive their circuitry. Active noise cancelling headphones can attenuate ambient noise by 20 dB or more, but the active circuitry is mainly effective on constant sounds and at lower frequencies, rather than sharp sounds and voices. Some noise cancelling headphones are designed mainly to reduce low-frequency engine and travel noise in aircraft, trains, and automobiles, and are less effective in environments with other types of noise.
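A toy model of the phase-reversal idea (sample rate and amplitudes chosen arbitrarily for illustration; a real system contends with microphone placement, processing latency, and feedback): a perfect inversion cancels exactly, while even a one-sample delay in the anti-noise leaves far more residual at a high frequency than at a low one, mirroring why the circuitry works best on constant, low-frequency sounds.

```python
import math

RATE = 8000  # samples per second (illustrative)

def residual_peak(freq_hz: float, delay_samples: int = 0) -> float:
    """Peak amplitude of noise + anti-noise when the anti-noise
    (the sign-flipped waveform) arrives delay_samples late."""
    noise = [0.5 * math.sin(2 * math.pi * freq_hz * n / RATE)
             for n in range(RATE)]
    anti = [0.0] * delay_samples + [-s for s in noise]
    return max(abs(n_ + a_) for n_, a_ in zip(noise, anti))

print(residual_peak(100))                              # perfect inversion: 0.0
print(residual_peak(100, 1) < residual_peak(2000, 1))  # low freq cancels better
```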
Transducer technology
Headphones use various types of transducer to convert electrical signals to sound.
Moving-coil
The moving coil driver, more commonly referred to as a "dynamic" driver, is the most common type used in headphones. It consists of a stationary magnet element affixed to the frame of the headphone, which sets up a static magnetic field. The magnet in headphones is typically composed of ferrite or neodymium. A voice coil, a light coil of wire, is suspended in the magnetic field of the magnet, attached to a diaphragm, typically fabricated from lightweight, high-stiffness-to-mass-ratio cellulose, polymer, carbon material, paper or the like. When the varying current of an audio signal is passed through the coil, it creates a varying magnetic field that reacts against the static magnetic field, exerting a varying force on the coil causing it and the attached diaphragm to vibrate. The vibrating diaphragm pushes on the air to produce sound waves.
Electrostatic
Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated. Electrostatic headphones are usually more expensive than moving-coil ones, and are comparatively uncommon. In addition, a special amplifier is required to amplify the signal to deflect the membrane, which often requires electrical potentials in the range of 100 to 1,000 volts.
Due to the extremely thin and light diaphragm membrane, often only a few micrometers thick, and the complete absence of moving metalwork, the frequency response of electrostatic headphones usually extends well above the audible limit of approximately 20 kHz. The high-frequency response means that the low-midband distortion level is maintained to the top of the audible frequency band, which is generally not the case with moving coil drivers. Also, the frequency response peakiness regularly seen in the high-frequency region with moving coil drivers is absent. Well-designed electrostatic headphones can produce significantly better sound quality than other types.
Electrostatic headphones require a voltage source generating 100 V to over 1 kV, and sit on the user's head. Because the conductors are well insulated and the source does not need to deliver significant electric current, the electrical hazard to the wearer is minimal even in case of a fault.
Electret
An electret driver functions by the same electromechanical means as an electrostatic driver. However, the electret driver has a permanent charge built into it, whereas electrostatics have the charge applied to the driver by an external generator. Electret and electrostatic headphones are relatively uncommon. Original electrets were also typically cheaper and lower in technical capability and fidelity than electrostatics. Patents granted between 2009 and 2013 (US patents 8,559,660 B2; 7,732,547 B2; 7,879,446 B2; 7,498,699 B2) describe the use of different materials, such as a fluorinated cyclic olefin electret film, with which frequency response measurements can reach 50 kHz at 100 dB. When these improved electrets are combined with a traditional dome headphone driver, headphones can be produced that are recognised by the Japan Audio Society as qualifying for its Hi-Res Audio program.
Planar magnetic
Planar magnetic (also known as orthodynamic) headphones use similar technology to electrostatic headphones, with some fundamental differences. They operate similarly to planar magnetic loudspeakers.
A planar magnetic driver consists of a relatively large membrane that contains an embedded wire pattern. This membrane is suspended between two sets of permanent, oppositely aligned, magnets. A current passed through the wires embedded in the membrane produces a magnetic field that reacts with the field of the permanent magnets to induce movement in the membrane, which produces sound.
Balanced armature
A balanced armature is a sound transducer design primarily intended to increase the electrical efficiency of the element by eliminating the stress on the diaphragm characteristic of many other magnetic transducer systems. As shown schematically in the left diagram, it consists of a moving magnetic armature that is pivoted so it can move in the field of the permanent magnet. When precisely centered in the magnetic field there is no net force on the armature, hence the term 'balanced'. As illustrated in the right diagram, when there is electric current through the coil, it magnetizes the armature one way or the other, causing it to rotate slightly one way or the other about the pivot thus moving the diaphragm to make sound.
The design is not mechanically stable; a slight imbalance makes the armature stick to one pole of the magnet. A fairly stiff restoring force is required to hold the armature in the 'balance' position. Although this reduces its efficiency, this design can still produce more sound from less power than any other. Popularized in the 1920s as Baldwin Mica Diaphragm radio headphones, balanced armature transducers were refined during World War II for use in military sound powered telephones. Some of these achieved astonishing electro-acoustic conversion efficiencies, in the range of 20% to 40%, for narrow bandwidth voice signals.
Today they are typically used only in in-ear headphones and hearing aids, where their high efficiency and diminutive size is a major advantage. They generally are limited at the extremes of the hearing spectrum (e.g. below 20 Hz and above 16 kHz) and require a better seal than other types of drivers to deliver their full potential. Higher-end models may employ multiple armature drivers, dividing the frequency ranges between them using a passive crossover network. A few combine an armature driver with a small moving-coil driver for increased bass output.
The earliest loudspeakers for radio receivers used balanced armature drivers for their cones.
Thermoacoustic technology
The thermoacoustic effect generates sound through the audio-frequency Joule heating of a conductor; the mechanism is not magnetic and involves no vibrating diaphragm.
In 2013 a carbon nanotube thin-yarn earphone based on the thermoacoustic mechanism was demonstrated by a research group at Tsinghua University. The earphone's working element is a CNT thin-yarn thermoacoustic chip: a layer of CNT thin-yarn array supported by a silicon wafer, into which periodic grooves of a certain depth are cut by micro-fabrication methods to suppress heat leakage from the CNT yarn to the substrate.
Other transducer technologies
Transducer technologies employed much less commonly for headphones include the Heil Air Motion Transformer (AMT), piezoelectric film, ribbon planar magnetic, magnetostriction, and plasma or ionic drivers. The first Heil AMT headphone was marketed by ESS Laboratories and was essentially an ESS AMT tweeter from one of the company's speakers driven at full range. Since the turn of the century, only Precide of Switzerland has manufactured an AMT headphone. Piezoelectric film headphones were first developed by Pioneer; their two models used a flat sheet of film, which limited the maximum volume of air movement. Currently, TakeT produces a piezoelectric film headphone shaped similarly to an AMT transducer which, like the Precide driver, varies the size of the transducer folds over the diaphragm. It additionally incorporates a two-way design through a dedicated tweeter/supertweeter panel. The folded shape of a diaphragm allows a transducer with a larger surface area to fit within smaller space constraints, increasing the total volume of air that can be moved on each excursion for a given radiating area.
Magnetostriction headphones, sometimes sold under the label Bonephones, work by vibrating against the side of the head, transmitting sound via bone conduction. This is particularly helpful in situations where the ears must be unobstructed, or for people who are deaf for reasons that do not affect the nervous apparatus of hearing. Magnetostriction headphones, though, are limited in their fidelity compared to conventional headphones that rely on the normal workings of the ear. Additionally, in the mid-1980s, a French company called Audio Reference tried to market the Plasmasonic plasma headphone invented by Henri Bondar. There are no known functioning examples left. Due to the small volume of air in a headphone, a plasma or ionic transducer can act as a full-range driver, although the high temperatures and voltages needed make them very rare.
Benefits and limitations
Headphones can prevent other people from hearing the sound, either for privacy or to prevent disturbing others, as in listening in a public library. They can also provide a level of sound fidelity greater than loudspeakers of similar cost. Part of their ability to do so comes from the lack of any need to perform room correction treatments with headphones. High-quality headphones can have an extremely flat low-frequency response down to 20 Hz within 3 dB. While a loudspeaker must use a relatively large (often 15" or 18") speaker driver to reproduce low frequencies, headphones can accurately reproduce bass and sub-bass frequencies with speaker drivers only 40-50 millimeters wide (or much smaller, as is the case with in-ear monitor headphones). Headphones' impressive low-frequency performance is possible because they are so much closer to the ear that they only need to move relatively small volumes of air.
Marketed claims such as 'frequency response 4 Hz to 20 kHz' are usually overstatements; the product's response at frequencies lower than 20 Hz is typically very small.
Headphones are also useful for video games that use 3D positional audio processing algorithms, as they allow players to better judge the position of an off-screen sound source (such as the footsteps of an opponent or their gunfire).
Although modern headphones have been particularly widely sold and used for listening to stereo recordings since the release of the Walkman, there is subjective debate regarding the nature of their reproduction of stereo sound. Stereo recordings represent the position of horizontal depth cues (stereo separation) via volume and phase differences of the sound in question between the two channels. When the sounds from two speakers mix, they create the phase difference the brain uses to locate direction. Through most headphones, because the right and left channels do not combine in this manner, the illusion of the phantom center can be perceived as lost. Hard panned sounds are also heard only in one ear rather than from one side.
Binaural recordings use a different microphone technique to encode direction directly as phase, with very little amplitude difference below 2 kHz, often using a dummy head. They can produce a surprisingly lifelike spatial impression through headphones. Commercial recordings almost always use stereo recording, rather than binaural, because loudspeaker listening is more common than headphone listening.
It is possible to change the spatial effects of stereo sound on headphones, to better approximate the presentation of speaker reproduction, by using frequency-dependent cross-feed between the channels.
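A deliberately simplified sketch of the idea (real crossfeed is frequency-dependent, filtering and slightly delaying the cross path; the flat 0.3 gain here is an arbitrary illustration):

```python
# Toy frequency-independent crossfeed: each output channel receives an
# attenuated copy of the opposite channel, softening hard-panned sounds.
def crossfeed(left, right, gain=0.3):
    out_l = [l + gain * r for l, r in zip(left, right)]
    out_r = [r + gain * l for l, r in zip(left, right)]
    return out_l, out_r

# A sample hard-panned to the left now also reaches the right ear:
l, r = crossfeed([1.0, 0.0], [0.0, 0.0])
print(l, r)  # [1.0, 0.0] [0.3, 0.0]
```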
Headsets can have ergonomic benefits over traditional telephone handsets. They allow call center agents to maintain better posture without needing to hand-hold a handset or tilt their head sideways to cradle it.
Health and safety
Dangers and risks
Using headphones at a sufficiently high volume level may cause temporary or permanent hearing impairment or deafness. The headphone volume often has to compete with background noise, especially in loud places such as subway stations, aircraft, and large crowds. Extended exposure to the high sound pressure levels created by headphones at high volume settings may damage hearing; nearly 50% of teenagers and young adults (12 to 35 years old) in middle- and high-income countries listen to unsafe levels of sound on their personal audio devices and smartphones. However, one hearing expert found in 2012 (before the worldwide adoption of smartphones as the main personal listening devices) that "fewer than 5% of users select volume levels and listen frequently enough to risk hearing loss." The International Telecommunication Union's "Guidelines for safe listening devices/systems" recommend that sound exposure not exceed 80 A-weighted decibels (dB(A)) for a maximum of 40 hours per week. The European Union has set a similar limit for users of personal listening devices (80 dB(A) for no more than 40 hours per week); for each additional 3 dB of sound exposure, the duration should be cut in half (83 dB(A) for no more than 20 hours per week, 86 dB(A) for 10 hours, 89 dB(A) for 5 hours, and so on). Most major manufacturers of smartphones now include safety or volume-limiting features and warning messages in their devices, though such practices have received a mixed response from some segments of the buying public who favor setting their own volume levels.
The usual way of limiting sound volume on devices driving headphones is by limiting output power. This has the additional undesirable effect of depending on the efficiency of the headphones; a device producing the maximum allowed power may not produce adequate volume when paired with low-efficiency, high-impedance equipment, while the same amount of power can reach dangerous levels with very efficient earphones.
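The dependence on efficiency is easy to quantify: with sensitivity in dB(SPL)/mW, the level is sensitivity + 10·log10(P in mW), so the same power cap lands at very different loudness on different headphones (the ratings below are hypothetical but typical):

```python
import math

def spl_at_power(sens_db_mw: float, power_mw: float) -> float:
    """SPL produced when the amplifier delivers power_mw milliwatts
    into headphones rated sens_db_mw dB(SPL)/mW."""
    return sens_db_mw + 10 * math.log10(power_mw)

# The same 5 mW limit on two different headphones:
print(round(spl_at_power(85, 5), 1))   # insensitive, high-impedance model
print(round(spl_at_power(120, 5), 1))  # very efficient in-ear model
```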
Some studies have found that people are more likely to raise volumes to unsafe levels while performing strenuous exercise. A Finnish study recommended that exercisers should set their headphone volumes to half of their normal loudness and only use them for half an hour.
Other than hearing risk, there is a general danger that listening to loud music in headphones can distract the listener and lead to injury and accidents. Noise-cancelling headphones add extra risk. Several countries and states have made it illegal to wear headphones while driving or cycling.
There have also been numerous reports of contact dermatitis attributed to in-ear headphones such as Apple AirPods. The dermatitis is thought to be caused by materials in the earphones, such as gold, rubber, dyes, acrylates, or methacrylates. However, no studies have proven that exposure to in-ear headphones causes contact dermatitis; only a correlation between in-ear headphone use and contact dermatitis cases has been observed.
Occupational health and safety
Hearing risk from headphone use also applies to workers who must wear electronic or communication headsets as part of their daily job (e.g., pilots, call center and dispatch operators, sound engineers, firefighters), and hearing damage depends on the exposure time. The National Institute for Occupational Safety and Health (NIOSH) recommends that sound exposure not exceed 85 dB(A) over an 8-hour workday as a time-weighted average. NIOSH uses the 3-dB exchange rate, often referred to as the "time-intensity tradeoff", which means that if the sound exposure level is increased by 3 decibels, the duration of exposure should be cut in half. NIOSH has published several documents aimed at protecting the hearing of workers who must wear communication headsets, such as call center operators, firefighters, and musicians and sound engineers.
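The 3-dB exchange rate translates into a simple permissible-duration rule, sketched below; the 85 dB(A)/8-hour criterion is NIOSH's, while the function itself is only an illustration:

```python
def allowed_hours(level_dba: float, criterion_db: float = 85.0,
                  criterion_hours: float = 8.0, exchange_db: float = 3.0) -> float:
    """Permissible daily exposure: halves for every exchange_db increase."""
    return criterion_hours / 2 ** ((level_dba - criterion_db) / exchange_db)

for level in (85, 88, 91, 100):
    print(level, allowed_hours(level))  # 8.0, 4.0, 2.0, 0.25 hours
```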
Electric-field screening
In physics, screening is the damping of electric fields caused by the presence of mobile charge carriers. It is an important part of the behavior of charge-carrying media, such as ionized gases (classical plasmas), electrolytes, and electronic conductors (semiconductors, metals).
In a fluid with a given permittivity ε, composed of electrically charged constituent particles, each pair of particles (with charges q1 and q2) interacts through the Coulomb force as
F = q1 q2 / (4πε |r|²) r̂,
where the vector r is the relative position between the charges. This interaction complicates the theoretical treatment of the fluid. For example, a naive quantum mechanical calculation of the ground-state energy density yields infinity, which is unreasonable. The difficulty lies in the fact that even though the Coulomb force diminishes with distance as 1/r², the average number of particles at each distance r is proportional to r², assuming the fluid is fairly isotropic. As a result, a charge fluctuation at any one point has non-negligible effects at large distances.
In reality, these long-range effects are suppressed by the flow of particles in response to electric fields. This flow reduces the effective interaction between particles to a short-range "screened" Coulomb interaction. This system corresponds to the simplest example of a renormalized interaction.
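The standard short-range form alluded to here is the Yukawa-type screened potential; for a test charge q in a medium of permittivity ε with screening length λ (identified with the Debye length in the classical theory developed below),

```latex
\phi(r) \;=\; \frac{q}{4\pi\varepsilon\, r}\, e^{-r/\lambda},
```

which reduces to the bare Coulomb potential as λ → ∞ and falls off exponentially beyond r ≈ λ.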
In solid-state physics, especially for metals and semiconductors, the screening effect describes the electrostatic field and Coulomb potential of an ion inside the solid. Like the electric field of the nucleus is reduced inside an atom or ion due to the shielding effect, the electric fields of ions in conducting solids are further reduced by the cloud of conduction electrons.
Description
Consider a fluid composed of electrons moving in a uniform background of positive charge (one-component plasma). Each electron possesses a negative charge. According to Coulomb's interaction, negative charges repel each other. Consequently, this electron will repel other electrons creating a small region around itself in which there are fewer electrons. This region can be treated as a positively charged "screening hole". Viewed from a large distance, this screening hole has the effect of an overlaid positive charge which cancels the electric field produced by the electron. Only at short distances, inside the hole region, can the electron's field be detected. For a plasma, this effect can be made explicit by an N-body calculation. If the background is made up of positive ions, their attraction by the electron of interest reinforces the above screening mechanism. In atomic physics, a germane effect exists for atoms with more than one electron shell: the shielding effect. In plasma physics, electric-field screening is also called Debye screening or shielding. It manifests itself on macroscopic scales by a sheath (Debye sheath) next to a material with which the plasma is in contact.
The screened potential determines the interatomic force and the phonon dispersion relation in metals. The screened potential is used to calculate the electronic band structure of a large variety of materials, often in combination with pseudopotential models. The screening effect leads to the independent electron approximation, which explains the predictive power of introductory models of solids like the Drude model, the free electron model and the nearly free electron model.
Theory and models
The first theoretical treatment of electrostatic screening, due to Peter Debye and Erich Hückel, dealt with a stationary point charge embedded in a fluid.
Consider a fluid of electrons in a background of heavy, positively charged ions. For simplicity, we ignore the motion and spatial distribution of the ions, approximating them as a uniform background charge. This simplification is permissible since the electrons are lighter and more mobile than the ions, provided we consider distances much larger than the ionic separation. In condensed matter physics, this model is referred to as jellium.
Screened Coulomb interactions
Let ρ denote the number density of electrons, and φ the electric potential. At first, the electrons are evenly distributed so that there is zero net charge at every point. Therefore, φ is initially a constant as well.
We now introduce a fixed point charge Q at the origin. The associated charge density is Qδ(r), where δ(r) is the Dirac delta function. After the system has returned to equilibrium, let the change in the electron density and electric potential be Δρ(r) and Δφ(r) respectively. The charge density and electric potential are related by Poisson's equation, which gives
where ε0 is the vacuum permittivity.
To proceed, we must find a second independent equation relating Δρ and Δφ. We consider two possible approximations, under which the two quantities are proportional: the Debye–Hückel approximation, valid at high temperatures (e.g. classical plasmas), and the Thomas–Fermi approximation, valid at low temperatures (e.g. electrons in metals).
Debye–Hückel approximation
In the Debye–Hückel approximation, we maintain the system in thermodynamic equilibrium, at a temperature T high enough that the fluid particles obey Maxwell–Boltzmann statistics. At each point in space, the density of electrons with energy j has the form
where kB is the Boltzmann constant. Perturbing in φ and expanding the exponential to first order, we obtain
where
The associated length is called the Debye length. The Debye length is the fundamental length scale of a classical plasma.
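As a rough numerical illustration, the Debye length λ_D = sqrt(ε0 kB T / (n e²)), the inverse of the Debye–Hückel screening wavevector, can be evaluated directly. The plasma parameters below are assumed example values, not figures from the text:

```python
import math

# Physical constants (SI units, CODATA values)
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
KB = 1.380649e-23            # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19   # elementary charge, C

def debye_length(n_e, T):
    """Debye length lambda_D = sqrt(eps0 * kB * T / (n_e * e^2)) in metres,
    for electron number density n_e (m^-3) and temperature T (K)."""
    return math.sqrt(EPS0 * KB * T / (n_e * E_CHARGE**2))

# Illustrative values: a laboratory plasma with n_e = 1e18 m^-3 at T = 1e4 K
lam = debye_length(1e18, 1e4)
print(f"Debye length: {lam:.2e} m")  # on the order of a few micrometres
```

Note how the screening length shrinks as the density grows: a denser electron fluid rearranges more charge per unit volume and cancels the perturbing field over a shorter distance.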
Thomas–Fermi approximation
In the Thomas–Fermi approximation, named after Llewellyn Thomas and Enrico Fermi, the system is maintained at a constant electron chemical potential (Fermi level) and at low temperature. The former condition corresponds, in a real experiment, to keeping the metal/fluid in electrical contact with a fixed potential difference with ground. The chemical potential μ is, by definition, the energy of adding an extra electron to the fluid. This energy may be decomposed into a kinetic energy T part and the potential energy −eφ part. Since the chemical potential is kept constant,
If the temperature is extremely low, the behavior of the electrons comes close to the quantum mechanical model of a Fermi gas. We thus approximate T by the kinetic energy of an additional electron in the Fermi gas model, which is simply the Fermi energy EF. The Fermi energy for a 3D system is related to the density of electrons (including spin degeneracy) by
where kF is the Fermi wavevector. Perturbing to first order, we find that
Inserting this into the above equation for Δμ yields
where
is called the Thomas–Fermi screening wave vector.
This result follows from the equations of a Fermi gas, which is a model of non-interacting electrons, whereas the fluid, which we are studying, contains the Coulomb interaction. Therefore, the Thomas–Fermi approximation is only valid when the electron density is low, so that the particle interactions are relatively weak.
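To make the scale concrete, the Thomas–Fermi screening wavevector can be evaluated from k0² = 3ne²/(2ε0·E_F), which follows from dn/dμ = 3n/(2E_F) for a 3D free-electron gas. This is a sketch only; the copper-like values of n and E_F are illustrative assumptions, not figures from the text:

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C

def thomas_fermi_wavevector(n, fermi_energy_J):
    """Thomas-Fermi screening wavevector k0 = sqrt(3 n e^2 / (2 eps0 E_F)),
    using dn/dmu = 3n/(2 E_F) for a 3D free-electron gas."""
    return math.sqrt(3 * n * E_CHARGE**2 / (2 * EPS0 * fermi_energy_J))

# Assumed free-electron parameters roughly appropriate for copper
n_cu = 8.5e28                # conduction-electron density, m^-3
ef_cu = 7.0 * E_CHARGE       # Fermi energy ~7 eV, in joules

k0 = thomas_fermi_wavevector(n_cu, ef_cu)
print(f"screening length 1/k0 = {1/k0:.2e} m")  # sub-angstrom scale
```

The resulting screening length is a fraction of an ångström, which is why ionic potentials in a good metal are screened out within roughly one lattice spacing.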
Result: Screened potential
Our results from the Debye–Hückel or Thomas–Fermi approximation may now be inserted into Poisson's equation. The result is
which is known as the screened Poisson equation. The solution is
which is called a screened Coulomb potential. It is a Coulomb potential multiplied by an exponential damping term, with the strength of the damping factor given by the magnitude of k0, the Debye or Thomas–Fermi wave vector. Note that this potential has the same form as the Yukawa potential. This screening yields a dielectric function .
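A minimal sketch of this damping (the screening wavevector k0 below is an assumed metallic-scale value): the screened potential is the bare Coulomb potential multiplied by exp(−k0·r), so after a few screening lengths it is essentially gone:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def coulomb(q, r):
    """Bare Coulomb potential of a point charge q at distance r (volts)."""
    return q / (4 * math.pi * EPS0 * r)

def screened_coulomb(q, r, k0):
    """Screened (Yukawa-form) potential: Coulomb times exp(-k0 * r)."""
    return coulomb(q, r) * math.exp(-k0 * r)

q = 1.602176634e-19  # one elementary charge, C
k0 = 1e10            # assumed Debye / Thomas-Fermi wavevector, 1/m

# At r = 5 screening lengths, the potential is suppressed by exp(-5) ~ 0.7%
r = 5 / k0
ratio = screened_coulomb(q, r, k0) / coulomb(q, r)
print(f"suppression at r = 5/k0: {ratio:.4f}")
```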
Many-body theory
Classical physics and linear response
A mechanical N-body approach provides a derivation of both the screening effect and Landau damping. It deals with a single realization of a one-component plasma whose electrons have a velocity dispersion (for a thermal plasma, there must be many particles in a Debye sphere, a volume whose radius is the Debye length). Using the linearized motion of the electrons in their own electric field, one obtains an equation of the type
where is a linear operator, is a source term due to the particles, and is the Fourier-Laplace transform of the electrostatic potential. When substituting an integral over a smooth distribution function for the discrete sum over the particles in , one gets
where is the plasma permittivity, or dielectric function, classically obtained by a linearized Vlasov-Poisson equation, is the wave vector, is the frequency, and is the sum of source terms due to the particles.
By inverse Fourier-Laplace transform, the potential due to each particle is the sum of two parts. One corresponds to the excitation of Langmuir waves by the particle, and the other is its screened potential, as classically obtained by a linearized Vlasovian calculation involving a test particle. The screened potential is the above screened Coulomb potential for a thermal plasma and a thermal particle. For a faster particle, the potential is modified. Substituting an integral over a smooth distribution function for the discrete sum over the particles in , yields the Vlasovian expression enabling the calculation of Landau damping.
Quantum-mechanical approach
In real metals, the screening effect is more complex than described above in the Thomas–Fermi theory. The assumption that the charge carriers (electrons) can respond at any wavevector is just an approximation. However, it is not energetically possible for an electron within or on a Fermi surface to respond at length scales shorter than the Fermi wavelength. This constraint is related to the Gibbs phenomenon, where Fourier series for functions that vary rapidly in space are not good approximations unless a very large number of terms in the series are retained. In physics, this phenomenon is known as Friedel oscillations, and applies both to surface and bulk screening. In each case the net electric field does not fall off exponentially in space, but rather as an inverse power law multiplied by an oscillatory term. Theoretical calculations can be obtained from quantum hydrodynamics and density functional theory (DFT).
| Physical sciences | Electrostatics | Physics |
191933 | https://en.wikipedia.org/wiki/Exponential%20growth | Exponential growth | Exponential growth occurs when a quantity grows as an exponential function of time. The quantity grows at a rate directly proportional to its present size. For example, when it is 3 times as big as it is now, it will be growing 3 times as fast as it is now.
In more technical language, the instantaneous rate of change (that is, the derivative) of a quantity with respect to an independent variable is proportional to the quantity itself. Often the independent variable is time. Described as a function, a quantity undergoing exponential growth is an exponential function of time, that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). Exponential growth is the inverse of logarithmic growth.
Not all cases of growth at an always increasing rate are instances of exponential growth. For example, the function grows at an ever-increasing rate, but much more slowly than an exponential. For example, when it grows at 3 times its size, but when it grows at 30% of its size. If an exponentially growing function grows at a rate that is 3 times its present size, then it always grows at a rate that is 3 times its present size. When it is 10 times as big as it is now, it will grow 10 times as fast.
If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay since the function values form a geometric progression.
The formula for exponential growth of a variable at the growth rate , as time goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is
where is the value of at time 0. The growth of a bacterial colony is often used to illustrate it. One bacterium splits itself into two, each of which splits itself resulting in four, then eight, 16, 32, and so on. The amount of increase keeps increasing because it is proportional to the ever-increasing number of bacteria. Growth like this is observed in real-life activity or phenomena, such as the spread of virus infection, the growth of debt due to compound interest, and the spread of viral videos. In real cases, initial exponential growth often does not last forever, instead slowing down eventually due to upper limits caused by external factors and turning into logistic growth.
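The discrete-time formula and the bacterial-doubling example above can be sketched as follows (the function and variable names here are my own, not from the text):

```python
def grow(x0, r, t):
    """Value after t discrete time steps of growth at per-step rate r:
    x(t) = x0 * (1 + r)**t."""
    return x0 * (1 + r) ** t

# Bacterial doubling: each step, every cell splits in two (r = 1, i.e. +100%)
populations = [grow(1, 1, t) for t in range(6)]
print(populations)  # [1, 2, 4, 8, 16, 32]
```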
Terms like "exponential growth" are sometimes incorrectly interpreted as "rapid growth". Indeed, something that grows exponentially can in fact be growing slowly at first.
Examples
Biology
The number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted, so there is no more of that nutrient for more organisms to grow. Typically the first organism splits into two daughter organisms, who then each split to form four, who split to form eight, and so on. Because exponential growth indicates constant growth rate, it is frequently assumed that exponentially growing cells are at a steady-state. However, cells can grow exponentially at a constant rate while remodeling their metabolism and gene expression.
A virus (for example COVID-19, or smallpox) typically will spread exponentially at first, if no artificial immunization is available. Each infected person can infect multiple new people.
Physics
Avalanche breakdown within a dielectric material. A free electron becomes sufficiently accelerated by an externally applied electrical field that it frees up additional electrons as it collides with atoms or molecules of the dielectric media. These secondary electrons also are accelerated, creating larger numbers of free electrons. The resulting exponential growth of electrons and ions may rapidly lead to complete dielectric breakdown of the material.
Nuclear chain reaction (the concept behind nuclear reactors and nuclear weapons). Each uranium nucleus that undergoes fission produces multiple neutrons, each of which can be absorbed by adjacent uranium atoms, causing them to fission in turn. If the probability of neutron absorption exceeds the probability of neutron escape (a function of the shape and mass of the uranium), the production rate of neutrons and induced uranium fissions increases exponentially, in an uncontrolled reaction. "Due to the exponential rate of increase, at any point in the chain reaction 99% of the energy will have been released in the last 4.6 generations. It is a reasonable approximation to think of the first 53 generations as a latency period leading up to the actual explosion, which only takes 3–4 generations."
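The quoted 99%/4.6-generation figure is easy to check: if cumulative energy release grows like e^t (with t measured in generations), the fraction released before the last g generations is e^(−g). A small sketch of this arithmetic (my own illustration, not from the quoted source):

```python
import math

def fraction_in_last(g):
    """Fraction of total energy released in the final g generations,
    assuming cumulative release grows like e**t with t in generations."""
    return 1 - math.exp(-g)

print(f"{fraction_in_last(4.6):.3f}")  # 0.990 -> ~99% in the last 4.6 generations
```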
Positive feedback within the linear range of electrical or electroacoustic amplification can result in the exponential growth of the amplified signal, although resonance effects may favor some component frequencies of the signal over others.
Economics
Economic growth is expressed in percentage terms, implying exponential growth.
Finance
Compound interest at a constant interest rate provides exponential growth of the capital. | Mathematics | Specific functions | null |
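Compound interest is exponential growth in discrete steps; a sketch (the rates and periods below are illustrative assumptions):

```python
def compound(principal, annual_rate, years, periods_per_year=1):
    """Capital after compounding at a constant rate:
    principal * (1 + rate/m) ** (m * years), with m compounding periods per year."""
    m = periods_per_year
    return principal * (1 + annual_rate / m) ** (m * years)

# At 5% per year, capital roughly doubles in ~14.2 years (cf. the rule of 72: 72/5)
print(round(compound(1000, 0.05, 14.2), 2))
```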
191936 | https://en.wikipedia.org/wiki/Oil%20lamp | Oil lamp | An oil lamp is a lamp used to produce light continuously for a period of time using an oil-based fuel source. The use of oil lamps began thousands of years ago and continues to this day, although their use is less common in modern times. They work in the same way as a candle but with fuel that is liquid at room temperature, so that a container for the oil is required. A textile wick drops down into the oil, and is lit at the end, burning the oil as it is drawn up the wick.
Oil lamps are a form of lighting, and were used as an alternative to candles before the use of electric lights. Starting in 1780, the Argand lamp quickly replaced other oil lamps still in their basic ancient form. These in turn were replaced by the kerosene lamp in about 1850. In small towns and rural areas the latter continued in use well into the 20th century, until such areas were finally electrified and light bulbs could be used.
Sources of fuel for oil lamps include a wide variety of plants such as nuts (walnuts, almonds and kukui) and seeds (sesame, olive, castor, or flax). Also widely used were animal fats (butter, ghee, fish oil, shark liver, whale blubber, or seal). Camphine, made of purified spirits of turpentine, and burning fluid, a mixture of turpentine and alcohol, were sold as lamp fuels starting in the 1830s as the whale oil industry declined. Burning fluid became more expensive during the Civil War when a federal tax on alcohol was reenacted. Sales of both camphine and burning fluid decreased in the late 1800s as other sources of lighting, such as kerosene made from petroleum, gas lighting and electric lighting, began to predominate.
Most modern lamps (such as fueled lanterns) have been replaced by gas-based or petroleum-based fuels to operate when emergency non-electric light is required. Oil lamps are currently used primarily for their ambience.
Components
The following are the main external parts of a terra-cotta lamp:
Shoulder
Pouring hole
The hole through which fuel is put inside the fuel chamber. The width generally ranges from . There may be one hole or multiple holes.
Wick hole and the nozzle
May be either an opening in the body of the lamp or an elongated nozzle. In some specific types of lamps, there is a groove on the top of the nozzle that runs along to the pouring hole to re-collect the oozing oil from the wick.
Handle
Lamps can come with a handle. The handle can come in different shapes. The most common is a ring-shaped handle for the forefinger, surmounted by a palmette on which the thumb is pressed to stabilize the lamp. Other handles can be crescent-shaped, triangular, or oval-shaped. The handleless lamps usually have an elongated nozzle, and sometimes have a lug rising diagonally from the periphery. The lug may act as a small handle where the thumb rests. Some lugs are pierced. It was speculated that pierced lugs were used to place a pen or straw, called the or , with which the wick was trimmed. Others think that the pierced lugs were used to hang the lamp on a metal hook when not in use.
Discus
Fuel chamber
The fuel reservoir. The mean volume in a typical terra-cotta lamp is .
Types
Lamps can be categorized based on different criteria, including material (clay, silver, bronze, gold, stone, slip), shape, structure, design, and imagery (e.g. symbolic, religious, mythological, erotic, battles, hunting).
Typologically, lamps of the Ancient Mediterranean can be divided into seven major categories:
Wheel-made This category includes Greek and Egyptian lamps that date before the 3rd century BC. They are characterized by simplicity, with little or no decoration, a wide pour-hole, a lack of handles, and a pierced or unpierced lug. Pierced lugs occurred briefly between the 4th and 3rd century BC. Unpierced lugs continued until the 1st century BC.
Volute, Early Imperial With spiral, scroll-like ornaments called volutes extending from their nozzles, these lamps were predominantly produced in Italy during the Early Roman period. They have a wide discus, a narrow shoulder, no handle, elaborate imagery and artistic finishing, and a wide range of patterns of decoration.
High Imperial These lamps are late Roman. The shoulder is wider and the discus is smaller with fewer decorations. These lamps have handles, short, plain nozzles, and less artistic finishing.
Frog This is a regional style lamp exclusively produced in Egypt and found in the regions around it, between and 300 AD. The frog (Heqet) is an Egyptian fertility symbol.
African Red Slip Lamps made in North Africa, but widely exported, decorated in a red slip. They date from the 2nd to the 7th century AD and comprise a wide variety of shapes including a flat, heavily decorated shoulder with a small and relatively shallow discus. Their decoration is either non-religious, Christian or Jewish. Grooves run from the nozzle back to the pouring hole. It is hypothesized that this is to take back spilled oil. These lamps often have more than one pour-hole.
Slipper These lamps are oval-shaped and found mainly in the Levant. They were produced from the 3rd to the 9th centuries AD. Decorations include vine scrolls, palm wreaths, and Greek letters.
Factory lamps Also called , these are universal in distribution and simple in appearance. They have a channeled nozzle, plain discus, and two or three bumps on the shoulder. Initially made in factories in Northern Italy and Southern Gaul between the 1st and 3rd centuries AD, they were exported to all Roman provinces. The vast majority were stamped on the bottom to identify the manufacturer.
In religious contexts
Judaism
Lamps appear in the Torah and other Jewish sources as a symbol of "lighting" the way for the righteous, the wise, and for love and other positive values. While fire was often described as being destructive, light was given a positive spiritual meaning. The oil lamp and its light were important household items, and this may explain their symbolism. Oil lamps were used for many spiritual rituals.
The oil lamp and its light also became important ritualistic articles with the further development of Jewish culture and religion. The Temple Menorah, a ritual seven-branched oil lamp used in the Second Temple, forms the centre of the Chanukah story.
Christianity
There are several references to oil lamps in the New Testament. In the Eastern Orthodox Church, Roman Catholic Church, and Eastern Catholic Churches oil lamps (, ) are still used both on the Holy Table (altar) and to illuminate icons on the iconostasis and around the temple (church building). Orthodox Christians will also use oil lamps in their homes to illuminate their icon corner. Traditionally, the sanctuary lamp in an Orthodox church is an oil lamp. It is lit by the bishop when the church is consecrated, and ideally it should burn perpetually thereafter. The oil burned in all of these lamps is traditionally olive oil. Oil lamps are also referenced as a symbol throughout the New Testament, including in the Parable of the Ten Virgins.
Hinduism
Oil lamps are commonly used in Hindu temples as well as in home shrines. Generally the lamps used in temples are circular with places for five wicks. They are made of metal and either suspended on a chain or screwed onto a pedestal. There will usually be at least one lamp in each shrine, and the main shrine may contain several. Usually only one wick is lit, with all five burning only on festive occasions. The oil lamp is used in the Hindu ritual of Aarti.
In the home shrine, the style of lamp is usually different, containing only one wick. There is usually a piece of metal that forms the back of the lamp, which has a picture of a Hindu deity embossed on it. In many houses, the lamp burns all day, but in other homes, it is lit at sundown. The lamp in the home shrine is supposed to be lit before any other lights are turned on at night.
A hand-held oil lamp or incense sticks (lit from the lamp) are also used during the Hindu puja ceremony. In the North of India, a five-wick lamp is used, usually fueled with ghee. On special occasions, various other lamps may be used for puja, the most elaborate having several tiers of wicks.
In South India, there are a few types of oil lamps that are common in temples and traditional rituals. Some of the smaller ones are used for offerings as well.
A brass lamp with a depiction of goddess Sri Lakshmi over the back piece. They are usually small and have only one wick.
Nilavilakku A tall brass or bronze lamp on a stand where the wicks are placed at a certain height.
A brass or bronze lamp in the form of a lady holding a vessel with her hands. This type of lamp comes in different sizes, from very small to almost life-size. There are also large stone versions of this lamp in Hindu temples and shrines of Karnataka, Tamil Nadu and Kerala, especially at the base of columns and flanking the entrance of temples. They have only one wick.
A brass or bronze lamp hanging from a chain, often with multiple wicks.
Nachiarkoil lamp An ornamental brass lamp made of series of diyas, a handicraft product which is exclusively made by the Pather (Kammalar) community in Nachiyar Koil, Tamil Nadu, India.
Chinese
Oil lamps are lit at traditional Chinese shrines before either an image of a deity or a plaque with Classical Chinese characters giving the name of the deity. Such lamps are usually made from clear glass (giving them a similar appearance to normal drinking glasses) and are filled with oil, sometimes with water underneath. A cork or plastic floater containing a wick is placed on top of the oil with the bottom of the wick submerged in the oil.
Such lamps are kept burning in shrines, whether private or public, and incense sticks or joss sticks are lit from the lamp.
History
Curved stone lamps were found in places dated to the 10th millennium BC (Mesolithic, Middle Stone Age Period, c. 10,300–8000 BC). The oldest stone-oil lamp was found in Lascaux in 1940 in a cave that was inhabited 10,000 to 15,000 years ago.
Some archaeologists claim that the first shell-lamps existed more than 6,000 years ago (Neolithic, Later Stone Age, c. 8500–4500 BC). They believe that the alabaster shell-shaped lamps dug up in Sumerian sites dating to 2600 BC were imitations of real shell-lamps that had been used for a long time (Early Bronze Age, Canaanite/Bronze I–IV, c. 3300–2000 BC).
It is generally agreed that the evolution of handmade lamps moved from bowl-shaped to saucer-shaped, then from saucer with a nozzle, to a closed bowl with a spout.
Chalcolithic Age (4500–3300 BC)
The first manufactured red pottery oil lamps appeared in the Chalcolithic. These were of the round bowl type.
Bronze Age (3200–1200 BC)
Bronze Age lamps were simple wheel-made bowls with a slight pinch on four sides for the wick. Later lamps had only one pinch. These lamps vary in the shape of the rim, the general shape of the bowl and the shape of the base.
Intermediate Bronze Age (EBIV/MBI)
A design with four spouts for wicks appeared in the Intermediate Bronze Age (2300–2000 BC). Lamps were made from large bowls with flattened bases for stability, and four equally spaced shallow pinches in the rim for wicks, although some lamps with only a single pinch have also been found. The four-spout design evolved to provide sufficient light when fueled with fish or animal oils, which burn less efficiently than olive oil.
Middle Bronze Age lamps (MB)
The four-wick oil lamps persist into this period. However, most lamps now have only one wick. Early in this period the pinch is shallow, while later on it becomes more prominent and the mouth protrudes from the lamp's body. The bases are simple and flat. The crude potter's wheel is introduced, transforming the handmade bowls to a more uniform container. The saucer style evolves into a single spout shape.
Late Bronze Age lamps (LB)
A more pronounced, deeper single spout is developed, and it is almost closed on the sides. The shape is evolving to be more triangular, deeper and larger. All lamps are now wheel-made, with simple and usually flat bases.
Iron Age (1200–560 BC)
During the Iron Age, lamp rims become wider and flatter, with a deeper and higher spout. The tip of the spout is more upright in contrast to the rest of the rim. The lamps are becoming variable in shape and distribution, although some remain similar to lamps from the Late Bronze period. In addition, other forms evolve, such as small lamps with a flat base and larger lamps with a round base. The later form continues into the Iron Age II.
In the later Iron Age, variant forms appear. One common type is small, with a wide rim and a wide base. Another type is a small, shallow bowl with a thick and high discus base.
Arctic
The qulliq (seal-oil lamp) provided warmth and light in the harsh Arctic environment where there was no wood and where the sparse population relied almost entirely on seal oil. This lamp was the most important article of furniture for the Inuit, Yupik and other Inuit peoples.
The lamps were made of stone, and their sizes and shapes varied, though most were elliptical or half-moon shaped. The wicks were mostly made of dried moss or cottongrass and were lit along the edge of the lamp. A slab of seal blubber could be left to melt over the lamp, feeding it with more fat.
Persian
Persian lamps were large, with thin sides and a deep pinch that flattens the mouth and makes it protrude outward.
Greek
Greek lamps are more closed to avoid spilling. They are smaller and more refined. Most are handle-less. Some have a lug, which may or may not be pierced. The nozzle is elongated. The rim is folded over so it overlaps in order to make the nozzle, and is then pinched to make the wick hole.
They are round in shape and wheel-made.
Chinese
The earliest Chinese oil lamps are dated from the Warring States period (481–221 BC). The ancient Chinese created oil lamps with a refillable reservoir and a fibrous wick, giving the lamp a controlled flame. Lamps were constructed from jade, bronze, ceramic, wood, stone, and other materials. The largest oil lamp excavated so far is one discovered in a 4th-century tomb located in modern Pingshan, Hebei.
Early Roman
Production of oil lamps shifted to Italy as the main source of supply in the Early Roman era. Molds began to be used, and lamps were produced in large scale in factories. All lamps are closed in type. The lamp is produced in two parts, the upper part with the spout and the lower part with the fuel chamber. Most are of the characteristic "Imperial Type"—round, with nozzles of different forms (volute, semi-volute, U-shaped), a closed body, a central disk decorated with reliefs and a filling hole.
Late Roman
Late Roman lamps were of the "High Imperial" type. They included more decorations, and were produced locally or imported in large scale. The multiple-nozzled lamps appeared during this period. Many different varieties were created.
Frog type lamps also appeared during this period. These are kidney-shaped, heart-shaped or oval, and feature the motif of a frog or its abstraction, and sometimes geometrical motifs. They were produced around 100 AD. They are so varied that two identical lamps are seldom found.
Late Middle Age
Early Christian and late antique oil lamps were diverse. Among the most notable were Mediterranean sigillata ("African") lamps. The motifs were largely geometric, vegetative and graphic (monograms), with figural depictions of animals and human figures, often Christ. Those depicting Christ or the Chi Rho are often categorized as Hayes Type II.
Byzantine
Oil lamps of the Byzantine period were slipper-shaped and highly decorative. The multiple-nozzle design continued and most lamps bore handles. Some have complex exteriors.
Early Islamic
There is a transition period from Byzantine to Islamic lamps. The decoration on lamps of this transition period changed from crosses, animals, human likenesses, birds, or fish to plain linear, geometric, and raised-dot patterns.
The early Islamic lamps continued the traditions of Byzantine lamps. Decorations were initially a stylized form of a bird, grain, tree, plant, or flower. Later, they became entirely geometric or linear with raised dots.
An early description of the kerosene lamp comes from 9th-century Baghdad by al-Razi (Rhazes). He referred to it as the in his ('Book of Secrets').
In the transition period, some lamps had Arabic writing. Writing later disappears until the Mamluk period (13th to 15th century AD).
Industrial age
Oil burning carriage lamps provided a model for the first bicycle lamps in the 1860s.
Regional variations
Land of Israel/Palestine region
Jerusalem oil lamp: The clay has a characteristic black color because it was burned without oxygen. Usually of high quality.
Daroma oil lamp
Jerash oil lamp
Nabatean oil lamp
Herodian oil lamp: Considered to be used mainly by Jews. Wheel-made, rounded, and have a nozzle with concave sides. The lamps are usually not decorated; if there is decoration, it tends to be simple. Very common throughout all of Judea, and some lamps have also been found in Jordan. Date from the 1st century BC to the end of the 1st century AD.
Menorah oil lamp, seven nozzles: Rare and are associated with Judaism because of the numerical connection with the seven branches or arms of the Menorah.
Samaritan oil lamp: Characterized by a sealed filling hole, which was to be broken by the buyer. This was probably done to ensure ritual purity. They have a wider spout, and the concavities flanking the nozzle are almost always emphasized with a ladder pattern band. In general, the lamps are uncoated. The decorations are linear or geometric.
Type I: A distinct channel runs from the pouring hole to the nozzle. They have a small knob handle, a ladder pattern around the nozzle and no ornamentation on the bottom of the base.
Type II: Pear-shaped and elongated, with a lined channel that extends from the filling hole to the nozzle. Continued to be used up to the early Muslim period.
Candle Stick oil lamp: Menorah design on the nozzle and bunch of grapes on the shoulders.
Byzantine oil lamp: The upper parts and their handles are covered with braided patterns. All are made of a dark orange-red clay. A rounded bottom with a distinct X or cross appears inside the circled base.
Early Islamic oil lamp: Large knob handle and the channel above the nozzle are the dominant elements of these. The handle is tongue-shaped, and decoration is rich and elegant. The lower parts are extremely broad and the nozzles are pointed.
India
In Vedic times, fire was kept alive in every household in some form and carried with oneself while migrating to new locations. Later, the presence of fire in the household or a religious building was ensured by an oil lamp. Over the years various rituals and customs were woven around an oil lamp.
For , the gift of a lamp was and still is believed to be the best ('donation'). During marriages, spinsters of the household stand behind the bride and groom, holding an oil lamp to ward off evil. The presence of an oil lamp is an important aspect of ritual worship (the ) offered to a deity. Moreover, a day is kept aside for the worship of the lamp in the busy festival calendar, on one (moonless) day in the month of Shravan. This reverence for the deep is based on the symbolism of the journey from darkness and ignorance to light and the knowledge of the ultimate reality – "".
Earlier lamps were made out of stone or seashells. The shape was like a circular bowl with a protruding beak. Later, they were replaced by earthen and metal lamps. In the epics Ramayana and Mahabharata, there are references to gold and silver lamps as well. The simple shape evolved and the lamps were created in the shapes of the ('fish'), ('tortoise') and other incarnations of god Vishnu. Lamps were also created in the shape of the many emblems of gods, like conch shells or lotuses. Birds such as swans, peacocks, or parrots, and animals like snakes, lions, elephants and horses were also favorites when decorating a lamp. For lighting multiple lamps, wooden and stone ('towers of light') were created.
Erecting such a lamp tower in front of a temple is still a general practice in western and southern India. In some South Indian temples, raised brass lamp towers can be seen. To adapt the design to households and smaller spaces, the 'tree of light' was created: as the name suggests, it is a metal lamp container with curvilinear branches spreading out from the base, each holding a lamp. Another common design has the goddess Lakshmi holding the lamp in her hands, and other typical lamps are traditionally used for household purposes in South India.
Oil lamps also appear in proverbs. For example, a Braj (pre-Hindi) proverb says that 'the [utmost] darkness is under the oil lamp', meaning that what you seek could be close but unnoticed (right under your nose or feet), since a lamp's container casts a strong shadow.
Tax
When the Big Temple in Thanjavur, Tamil Nadu, was built in 1010 AD, elaborate measures were taken to provide lighting for the temple. Lands were donated to, or conquered for, the temple for this sole purpose; the income from these lands went towards providing the oil for the lights.
| Technology | Lighting | null |
191985 | https://en.wikipedia.org/wiki/Structural%20formula | Structural formula | The structural formula of a chemical compound is a graphic representation of the molecular structure (determined by structural chemistry methods), showing how the atoms are connected to one another. The chemical bonding within the molecule is also shown, either explicitly or implicitly. Unlike other chemical formula types, which use a limited number of symbols and have only limited descriptive power, structural formulas provide a more complete geometric representation of the molecular structure. For example, many chemical compounds exist in different isomeric forms, which have different structures but the same molecular formula. There are multiple ways to draw structural formulas, such as Lewis structures, condensed formulas, skeletal formulas, Newman projections, cyclohexane conformations, Haworth projections, and Fischer projections.
Several systematic chemical naming formats, as used in chemical databases, are equivalent to, and as powerful as, geometric structures. These chemical nomenclature systems include SMILES, InChI and CML. These systematic chemical names can be converted to structural formulas and vice versa, but chemists nearly always describe a chemical reaction or synthesis using structural formulas rather than chemical names, because the structural formulas allow the chemist to visualize the molecules and the structural changes that occur in them during chemical reactions. ChemSketch and ChemDraw are popular programs that allow users to draw reactions and structural formulas, typically in the Lewis structure style.
Structures in structural formulas
Bonds
Bonds are often shown as a line that connects one atom to another. One line indicates a single bond, two lines indicate a double bond, and three lines indicate a triple bond. In some structures the atoms at each end of a bond are written out explicitly. In others, the carbon atoms are not written out; instead, each carbon is indicated by a corner formed where two lines meet. Additionally, hydrogen atoms are usually implied rather than drawn; their number can be inferred from how many other atoms the carbon is attached to. For example, if carbon A is attached to only one other carbon, B, then carbon A carries three hydrogens to complete its four bonds.
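The implicit-hydrogen rule described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name is my own, not part of any cheminformatics library:

```python
def implicit_hydrogens(bond_orders):
    """Number of implied hydrogens on a skeletal-formula carbon.

    Carbon forms four bonds, so the hydrogens account for whatever
    the drawn bonds leave unused.  bond_orders lists the order of
    each drawn bond, e.g. [1, 2] for one single and one double bond.
    """
    return 4 - sum(bond_orders)

print(implicit_hydrogens([1]))     # terminal carbon, one single bond: 3 (a CH3 group)
print(implicit_hydrogens([1, 2]))  # one single and one double bond: 1
```

The same subtraction is what a reader does mentally at every unlabeled corner of a skeletal formula.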
Electrons
Electrons are usually shown as filled-in circles (dots). One circle indicates one electron; two circles indicate a pair of electrons. An extra pair of electrons often accompanies a negative charge. Drawing these dots indicates the number of electrons in the valence shell of each atom, providing further information about the reactive capacity of that atom in the molecule.
Charges
Oftentimes, atoms carry a positive or negative charge because their bonding leaves them with more or fewer electrons than protons. An atom that has lost electrons (or has an extra proton) carries a positive charge; an atom that has gained electrons carries a negative charge. In structural formulas, the positive charge is indicated by ⊕ and the negative charge by ⊖.
Stereochemistry (Skeletal formula)
Chirality in skeletal formulas is indicated by the Natta projection method. Stereochemistry shows the relative spatial arrangement of atoms in a molecule. Wedges are used for this purpose, and there are two types: filled and dashed. A filled wedge indicates that the atom points above the plane of the paper, toward the viewer. A dashed wedge indicates that the atom points below the plane of the paper, away from the viewer. An ordinary straight line indicates an atom in the plane of the paper. This convention conveys the molecule's three-dimensional shape and the geometric constraints on how its atoms can be arranged in space.
Unspecified stereochemistry
Wavy single bonds represent unknown or unspecified stereochemistry or a mixture of isomers. For example, the adjacent diagram shows the fructose molecule with a wavy bond to the HOCH2- group at the left. In this case the two possible ring structures are in chemical equilibrium with each other and also with the open-chain structure. The ring automatically opens and closes, sometimes closing with one stereochemistry and sometimes with the other.
Skeletal formulas can depict cis and trans isomers of alkenes. Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes, but is no longer considered an acceptable style for general use.
Lewis structures
Lewis structures (or "Lewis dot structures") are flat graphical formulas that show atom connectivity and lone-pair or unpaired electrons, but not three-dimensional structure. This notation is mostly used for small molecules. Each line represents the two electrons of a single bond. Two or three parallel lines between pairs of atoms represent double or triple bonds, respectively. Alternatively, pairs of dots may be used to represent bonding pairs. In addition, all non-bonded electrons (paired or unpaired) and any formal charges on atoms are indicated. Because Lewis structures show the placement of every electron, whether in a bond or in a lone pair, they allow the formal charge of each atom to be identified, which helps in judging stability and in determining the most likely product of a reaction. Lewis structures also give some sense of geometry, since bonds are often drawn at angles approximating those in the real molecule. They are best used to calculate formal charges or to show how atoms bond to each other, since both electrons and bonds are drawn; from the pattern of bonds and lone pairs one can also infer the molecular and electronic geometry, bond angles, and hybridization.
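The formal-charge bookkeeping that Lewis structures make possible follows the standard formula: formal charge = (valence electrons) − (nonbonding electrons) − ½(bonding electrons). A minimal sketch in Python, with the function name and worked examples as my own illustrations:

```python
def formal_charge(valence, nonbonding, bonding):
    """Formal charge of an atom in a Lewis structure:
    valence electrons minus nonbonding electrons minus
    half of the bonding (shared) electrons."""
    return valence - nonbonding - bonding // 2

# Nitrogen in ammonium, NH4+: 5 valence e-, no lone pairs, four bonds (8 shared e-)
print(formal_charge(5, 0, 8))   # 1
# Oxygen in hydroxide, OH-: 6 valence e-, three lone pairs (6 e-), one bond (2 e-)
print(formal_charge(6, 6, 2))   # -1
```

Summing the formal charges over all atoms recovers the net charge of the molecule or ion, which is one quick consistency check on a drawn Lewis structure.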
Condensed formulas
In early organic-chemistry publications, where use of graphics was strongly limited, a typographic system arose to describe organic structures in a line of text. Although this system tends to be problematic in application to cyclic compounds, it remains a convenient way to represent simple structures:
CH3CH2OH (ethanol)
Parentheses are used to indicate multiple identical groups, indicating attachment to the nearest non-hydrogen atom on the left when appearing within a formula, or to the atom on the right when appearing at the start of a formula:
(CH3)2CHOH or CH(CH3)2OH (2-propanol)
In all cases, all atoms are shown, including hydrogen atoms. Carbonyl groups can be shown by placing the O in parentheses, with the C=O bond implied. For example:
CH3C(O)CH3 (acetone)
Therefore, it is important to look to the left of an atom in parentheses to see which atom it is attached to. This is helpful when converting a condensed formula to another form of structural formula, such as a skeletal formula or Lewis structure. Functional groups are written in standard condensed forms, such as aldehydes as CHO, carboxylic acids as CO2H or COOH, and esters as CO2R or COOR. However, a condensed formula does not give an immediate picture of the molecular geometry or of the number of bonds between the carbons; these must be inferred from the number of atoms attached to each carbon and from any charges on the carbons.
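As a rough illustration of how a condensed formula encodes atom counts, the following sketch expands one level of parenthesised groups and tallies element symbols. The function and its regular expressions are my own simplification, not a full chemical-formula parser (it ignores charges and nested parentheses):

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a simple condensed formula such as CH3CH2OH
    or (CH3)2CHOH."""
    # Expand parenthesised groups, e.g. (CH3)2 -> CH3CH3, (O) -> O
    def expand(match):
        group, mult = match.group(1), int(match.group(2) or 1)
        return group * mult
    flat = re.sub(r'\(([^()]*)\)(\d*)', expand, formula)
    # Tally element symbols (one capital, optional lowercase) and counts
    counts = Counter()
    for symbol, num in re.findall(r'([A-Z][a-z]?)(\d*)', flat):
        counts[symbol] += int(num or 1)
    return counts

print(atom_counts("CH3CH2OH"))    # ethanol, C2H6O
print(atom_counts("(CH3)2CHOH"))  # 2-propanol, C3H8O
print(atom_counts("CH3C(O)CH3"))  # acetone, C3H6O
```

Collapsing a condensed formula to a molecular formula like this shows exactly what information is lost: the connectivity that the condensed form still carries.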
Skeletal formulas
Skeletal formulas are the standard notation for more complex organic molecules. In this type of diagram, first used by the organic chemist Friedrich August Kekulé von Stradonitz, the carbon atoms are implied to be located at the vertices (corners) and ends of line segments rather than being indicated with the atomic symbol C. Hydrogen atoms attached to carbon atoms are not indicated: each carbon atom is understood to be associated with enough hydrogen atoms to give the carbon atom four bonds. The presence of a positive or negative charge at a carbon atom takes the place of one of the implied hydrogen atoms. Hydrogen atoms attached to atoms other than carbon must be written explicitly. An additional feature of skeletal formulas is that by adding certain structures the stereochemistry, that is the three-dimensional structure, of the compound can be determined. Oftentimes, the skeletal formula can indicate stereochemistry through the use of wedges instead of lines. Solid wedges represent bonds pointing above the plane of the paper, whereas dashed wedges represent bonds pointing below the plane.
Perspective drawings
Newman projection and sawhorse projection
The Newman projection and the sawhorse projection are used to depict specific conformers or to distinguish vicinal stereochemistry. In both cases, two specific carbon atoms and their connecting bond are the center of attention. The only difference is a slightly different perspective: the Newman projection looks straight down the bond of interest, while the sawhorse projection looks at the same bond from a somewhat oblique vantage point. In the Newman projection, a circle is used to represent a plane perpendicular to the bond, distinguishing the substituents on the front carbon from the substituents on the back carbon. In the sawhorse projection, the front carbon is usually on the left and is always slightly lower. Sometimes, an arrow is used to indicate the front carbon. The sawhorse projection is very similar to a skeletal formula, and it can even use wedges instead of lines to indicate stereochemistry; unlike a skeletal formula, however, it is not a good indicator of overall molecular geometry. Both a Newman and a sawhorse projection can be used to construct a Fischer projection.
Cyclohexane conformations
Certain conformations of cyclohexane and other small-ring compounds can be shown using a standard convention. For example, the standard chair conformation of cyclohexane involves a perspective view from slightly above the average plane of the carbon atoms and indicates clearly which groups are axial (pointing vertically up or down) and which are equatorial (almost horizontal, slightly slanted up or down). Bonds in front may or may not be highlighted with stronger lines or wedges. The conformations progress as follows: chair to half-chair to twist-boat to boat to twist-boat to half-chair to chair. The cyclohexane conformations may also be used to show the potential energy present at each stage as shown in the diagram. The chair conformations (A) have the lowest energy, whereas the half-chair conformations (D) have the highest energy. There is a peak/local maximum at the boat conformation (C), and there are valleys/local minima at the twist-boat conformations (B). In addition, cyclohexane conformations can be used to indicate whether the molecule has any 1,3-diaxial interactions, which are steric interactions between axial substituents on the 1, 3, and 5 carbons.
Haworth projection
The Haworth projection is used for cyclic sugars. Axial and equatorial positions are not distinguished; instead, substituents are positioned directly above or below the ring atom to which they are connected. Hydrogen substituents are typically omitted.
However, an important thing to keep in mind while reading a Haworth projection is that the ring structures are not actually flat, so the projection does not convey true 3-D shape. Sir Norman Haworth was a British chemist who won a Nobel Prize for his work on carbohydrates and for discovering the structure of vitamin C. During this work he also devised the structural formulas now referred to as Haworth projections. In a Haworth projection a pyranose sugar is depicted as a hexagon and a furanose sugar as a pentagon. Usually an oxygen is placed at the upper right corner of a pyranose and at the upper center of a furanose. The thinner bonds at the top of the ring are farther away from the viewer, and the thicker bonds at the bottom are at the end of the ring that is closer to the viewer.
Fischer projection
The Fischer projection is mostly used for linear monosaccharides. At any given carbon center, vertical bond lines are equivalent to stereochemical hashed markings, directed away from the observer, while horizontal lines are equivalent to wedges, pointing toward the observer. The projection is unrealistic, as a saccharide would never adopt this multiply eclipsed conformation. Nonetheless, the Fischer projection is a simple way of depicting multiple sequential stereocenters that does not require or imply any knowledge of actual conformation. A Fischer projection restricts a 3-D molecule to 2-D, and therefore there are limitations to changing the configuration of the chiral centers. Fischer projections are used to determine the R and S configuration at a chiral carbon, using the Cahn-Ingold-Prelog priority rules. They are a convenient way to represent and distinguish between enantiomers and diastereomers.
Limitations
A structural formula is a simplified model that cannot represent certain aspects of chemical structures. For example, formalized bonding may not be applicable to dynamic systems such as delocalized bonds. Aromaticity is such a case and relies on convention to represent the bonding. Different styles of structural formulas may represent aromaticity in different ways, leading to different depictions of the same chemical compound. Another example is formal double bonds where the electron density is spread outside the formal bond, leading to partial double bond character and slow inter-conversion at room temperature. For all dynamic effects, temperature will affect the inter-conversion rates and may change how the structure should be represented. There is no explicit temperature associated with a structural formula, although many assume that it would be standard temperature.
| Physical sciences | Substance | Chemistry |
192013 | https://en.wikipedia.org/wiki/Saw | Saw | A saw is a tool consisting of a tough blade, wire, or chain with a hard toothed edge used to cut through material. Various terms are used to describe toothed and abrasive saws.
Saws began as serrated materials, and when mankind learned how to use iron, it became the preferred material for saw blades of all kinds. There are numerous types of hand saws and mechanical saws, and different types of blades and cuts.
Description
A saw is a tool consisting of a tough blade, wire, or chain with a hard toothed edge. It is used to cut through material, very often wood, though sometimes metal or stone.
Terminology
A number of terms are used to describe saws.
Kerf
The narrow channel left behind by the saw and (relatedly) the measure of its width is known as the kerf. As such, it also refers to the wasted material that is turned into sawdust, and becomes a factor in measurements when making cuts. For example, cutting an 8 foot (2.4 meter) piece of wood into 1 foot (30 cm) sections with a 1/8 inch (3 mm) kerf will produce only seven full sections, plus one piece that is 7/8 inch (22 mm) too short, when factoring in the kerf from all the cuts. The kerf depends on several factors: the width of the saw blade; the set of the blade's teeth; the amount of wobble created during cutting; and the amount of material pulled out of the sides of the cut. Although the term "kerf" is often used informally to refer simply to the thickness of the saw blade, or to the width of the set, this can be misleading, because blades with the same thickness and set may create different kerfs. For example, a too-thin blade can cause excessive wobble, creating a wider-than-expected kerf. The kerf created by a given blade can be changed by adjusting the set of its teeth with a tool called a saw tooth setter. The kerf left behind by a laser beam can be changed based on the laser's power and the type of material being cut.
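The arithmetic in the worked example above can be checked with a short sketch. The helper function is hypothetical, using exact fractions (all lengths in inches) to avoid rounding:

```python
from fractions import Fraction as F

def crosscut(stock, section, kerf):
    """Return (number of full-length sections, leftover length).
    Each separating cut consumes one kerf width of material;
    the final piece needs no trailing cut."""
    full, remaining = 0, F(stock)
    section, kerf = F(section), F(kerf)
    while remaining >= section:
        full += 1
        remaining -= section
        if remaining > 0:   # a cut is only needed if material remains
            remaining -= kerf
    return full, remaining

# 8 ft (96 in) board, 1 ft (12 in) sections, 1/8 in kerf:
full, leftover = crosscut(96, 12, F(1, 8))
print(full, leftover)   # 7 sections; leftover 89/8 in, i.e. 7/8 in short of a foot
```

Seven cuts at 1/8 inch each remove 7/8 inch in total, which is exactly what the last piece is missing.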
Toothed saws
A toothed saw or tooth saw has a hard toothed edge. The cut is made by placing the toothed edge against the material and moving it back and forth, or continuously forward. This force may be applied by hand, or powered by steam, water, electricity or other power source.
Frequency of teeth
The most common measurement of the frequency of teeth on a saw blade is points per inch (25 mm). It is taken by setting the tip (or point) of one tooth at the zero point on a ruler and counting the number of points between the zero mark and the one-inch mark, inclusive (that is, including both the point at the zero mark and any point that lines up precisely with the one-inch mark). There is always one more point per inch than there are teeth per inch (e.g., a saw with 14 points per inch has 13 teeth per inch, and a saw with 10 points per inch has 9 teeth per inch). Some saws do not have the same number of teeth per inch throughout their entire length, though the vast majority do. Those with more teeth per inch at the toe, to make starting the cut easier, are described as having incremental teeth.
An alternative measurement of the frequency of teeth on a saw blade is teeth per inch, usually abbreviated TPI, as in "an 18 TPI blade" (cf. points per inch).
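The fencepost relationship between the two measurements can be stated directly; a trivial, hypothetical helper:

```python
def teeth_per_inch(points_per_inch):
    """Teeth per inch is always one fewer than points per inch,
    because both endpoints of the measured inch count as points
    while only the gaps between them are whole teeth."""
    return points_per_inch - 1

print(teeth_per_inch(14))  # 13
print(teeth_per_inch(10))  # 9
```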
Set
Set is the degree to which the teeth are bent out sideways away from the blade, usually in both directions. In most modern serrated saws, the teeth are set, so that the kerf (the width of the cut) will be wider than the blade itself. This allows the blade to move through the cut easily without binding (getting stuck). The set may be different depending on the kind of cut the saw is intended to make. For example, a ripsaw has a tooth set that is similar to the angle used on a chisel, so that it rips or tears the material apart. A "flush-cutting saw" has no set on one side, so that the saw can be laid flat on a surface and cut along that surface without scratching it. The set of the blade's teeth can be adjusted with a tool called a saw set.
Other toothed saw terms
Back: The edge opposite the toothed edge.
Fleam: The angle of the faces of the teeth relative to a line perpendicular to the face of the saw.
Gullet: The valley between the points of the teeth.
Heel: The end closest to the handle.
Rake: The angle of the front face of the tooth relative to a line perpendicular to the length of the saw. Teeth designed to cut with the grain (ripping) are generally steeper than teeth designed to cut across the grain (crosscutting)
Teeth: Sharp protrusions along the cutting side of the saw.
Toe: The end farthest from the handle.
Toothed edge: the edge with the teeth (on some saws both edges are toothed).
Web: a narrow saw blade held in a frame, worked either by hand or in a machine, sometimes with teeth on both edges
Abrasive saws
An abrasive saw has a powered circular blade designed to cut through metal or ceramic.
History
Saws were at first serrated materials such as flint, obsidian, sea shells and shark teeth.
Serrated tools with indications that they were used to cut wood were found at the Pech-de-l'Azé IV cave in France. These tools date to 90,000–30,000 BCE.
In ancient Egypt, open (unframed) pull saws made of copper are documented as early as the Early Dynastic Period, c. 3100–2686 BC. Many copper saws were found in tomb No. 3471 dating to the reign of Djer in the 31st century BC. Saws were used for cutting a variety of materials, including humans (death by sawing), and models of saws were used in many contexts throughout Egyptian history. Particularly useful are tomb wall illustrations of carpenters at work that show the sizes and use of different types of saws. Egyptian saws were at first serrated, hardened copper which may have cut on both pull and push strokes. As the saw developed, teeth were raked to cut only on the pull stroke and set with the teeth projecting only on one side, rather than in the modern fashion with an alternating set. Saws were also made of bronze and later iron. In the Iron Age, frame saws were developed holding the thin blades in tension. The earliest known sawmill is the Roman Hierapolis sawmill from the third century AD, and was for sawing stone.
According to Chinese legend, the saw was invented by Lu Ban. In Greek mythology, as recounted by Ovid, Talos, the nephew of Daedalus, invented the saw. In archeological reality, saws date back to prehistory and most probably evolved from Neolithic stone or bone tools. "[T]he identities of the axe, adz, chisel, and saw were clearly established more than 4,000 years ago."
Manufacture of saws by hand
Once mankind had learned how to use iron, it became the preferred material for saw blades of all kinds; some cultures learned how to harden the surface ("case hardening" or "steeling"), prolonging the blade's life and sharpness.
Steel, made of iron with moderate carbon content and hardened by quenching hot steel in water, was used as early as 1200 BC. By the end of the 17th century European manufacture centred on Germany (the Bergisches Land), London, and the Midlands of England. Most blades were made of steel (iron carbonised and re-forged by different methods). In the mid 18th century a superior form of completely melted steel ("crucible cast") began to be made in Sheffield, England, and this rapidly became the preferred material, due to its hardness, ductility, springiness and ability to take a fine polish. A small saw industry survived in London and Birmingham, but by the 1820s the industry was growing rapidly and increasingly concentrated in Sheffield, which remained the largest centre of production, with over 50% of the nation's saw makers. The US industry began to overtake it in the last decades of the century, due to superior mechanisation, better marketing, a large domestic market, and the imposition of high tariffs on imports. Highly productive industries continued in Germany and France.
Early European saws were made from a heated sheet of iron or steel, flattened by several men hammering simultaneously on an anvil. After cooling, the teeth were punched out one at a time with a die, the size varying with the size of the saw. The teeth were sharpened with a triangular file of appropriate size, and set with a hammer or a wrest. By the mid 18th century rolling the metal was usual, the power for the rolls being supplied first by water, and increasingly by the early 19th century by steam engines. The industry gradually mechanized all the processes, including the important step of grinding the saw plate "thin to the back" by a fraction of an inch, which helped the saw to pass through the kerf without binding. The use of steel added the need to harden and temper the saw plate, to grind it flat, to smith it by hand hammering to ensure springiness and resistance to bending deformity, and finally to polish it.
Most hand saws today are made entirely without human intervention, with the steel plate supplied ready rolled to thickness and tensioned before being cut to shape by laser. The teeth are shaped and sharpened by grinding and are flame hardened, which obviates (and indeed prevents) resharpening once they have become blunt. A large measure of hand finishing remains to this day for quality saws made by the very few specialist makers reproducing 19th century designs.
Pit saws
A pit saw was a two-man ripsaw. In parts of early colonial North America, it was one of the principal tools used in shipyards and other industries where water-powered sawmills were not available. It was so named because it was typically operated over a saw pit, either at ground level or on trestles, across which were laid the logs to be cut into boards. The pit saw was "a strong steel cutting-plate, of great breadth, with large teeth, highly polished and thoroughly wrought, some eight or ten feet in length", with either a handle at each end or the blade set in a frame. A pit saw was also sometimes known as a whipsaw. It took two to four people to operate: a "pit-man" stood in the pit, a "top-man" stood outside the pit, and they worked together to make cuts, guide the saw, and raise it. Pit-saw workers were among the most highly paid laborers in early colonial North America.
Types of saws
Hand saws
Hand saws typically have a relatively thick blade to make them stiff enough to cut through material. (The pull stroke also reduces the amount of stiffness required.) Thin-bladed handsaws are made stiff enough either by holding them in tension in a frame, or by backing them with a folded strip of steel (formerly iron) or brass (on account of which the latter are called "back saws.") Some examples of hand saws are:
Artillery saw, Chain saw, Portable link saw: a flexible chain saw up to 122 cm (four feet) long, supplied to the military for clearing tree branches for gun sighting;
Butcher's saw: for cutting bone; many different designs were common, including a large one for two men, known in the USA as a beef-splitter; most were frame saws, some backsaws;
Crosscut saw: for cutting wood perpendicular to the grain;
Docking saw: a large, heavy saw with an unbreakable metal handle of unique pattern, used for rough work
Farmer's/Miner's saw: a strong saw with coarse teeth;
Felloe saw: the narrowest-bladed variety of pit saw, up to 213 cm (7 feet) long and able to work the sharp curves of cart wheel felloes; a slightly wider blade, equally long, was called a stave saw, for cutting the staves for wooden casks;
Floorboard/flooring saw: a small saw, rarely with a back, and usually with the teeth continued onto the back at the toe for a short distance; used by house carpenters for cutting across a floor board without damaging its neighbour;
Grafting/grafter/table saw: a hand saw with a tapering narrow blade from 15 to 76 cm (6 to 30 inches) long; the origins of the terms are obscure;
Ice saw: either of pit saw design without a bottom tiller, or a large handsaw, always with very coarse teeth, for harvesting ice to be used away from source, or stored for use in warmer weather;
Japanese saw or pull saw: a thin-bladed saw that cuts on the pull stroke, and with teeth of different design to European or American traditional forms;
Keyhole saw or compass saw: a narrow-bladed saw, sharply tapered thin to the back to cut round curves, with one end fixed in a handle;
Musical saw: a hand saw, possibly with the teeth filed off, used as a musical instrument;
Nest of saws: three or four interchangeable blades fitted to a handle with screws or quick-release nuts;
One-man cross cut saw: a coarse-toothed saw of 76 to 152 cm (30-60 inches) length for rough or green timber; a second, turned, handle could be added at the heel or the toe for a second operator;
Pad saw: a short narrow blade held in a wooden or metal handle (the pad);
Panel saw: a lighter variety of handsaw, usually less than 61 cm (24 inches) long and having finer teeth;
Plywood saw: a fine-toothed saw (to reduce tearing), for cutting plywood
Polesaw: a saw blade attached to a long handle
Pruning saw: the commonest variety has a 30-71 cm (12-28 inch) blade, toothed on both edges, one tooth pattern being considerably coarser than the other;
Ripsaw: for cutting wood along the grain;
Rule saw or combination saw: a handsaw with a measuring scale along the back and a handle making a 90° square with the scaled edge;
Salt saw: a short hand saw with a non-corroding zinc or copper blade, used for cutting a block of salt at a time when it was supplied to large kitchens in that form;
Turkish saw or monkey saw: a small saw with a parallel-sided blade, designed to cut on the pull stroke;
Two-man saw: a general term for a large crosscut saw or ripsaw for cutting large logs or trees;
Veneer saw: a two-edged saw with fine teeth for cutting veneer;
Wire saw: a toothed or coarse cable or wire wrapped around the material and pulled back and forth.
Back saws
"Back saws" which have a thin blade backed with steel or brass to maintain rigidity, are a subset of hand saws. Back saws have different names depending on the length of the blade; "tenon saw" (from use in making mortise and tenon joints) is often used as a generic name for all the sizes of woodworking backsaw. Some examples are:
Bead saw/gent's saw/jeweller's saw: a small backsaw with a turned wooden handle;
Blitz saw: a small backsaw, for cutting wood or metal, with a hook at the toe for the thumb of the non-dominant hand;
Carcase saw: a term used until the 20th century for backsaws with long blades;
Dovetail saw: a small backsaw for cutting intricate joints in cabinet making work;
Electrician's saw: a very small backsaw used in the early 20th century on the wooden capping and casing in which electric wiring was run;
Flush-cutting saw/offset saw: a backsaw with a flat side and a handle offset toward the opposite side, usually reversible, for cutting flush to a surface such as a floor;
Mitre-box saw: a saw held in an adjustable frame (the mitre box) for making accurate crosscuts and mitres in a workpiece;
Sash saw: a medium-sized backsaw.
Frame saws
A class of saws for cutting all types of material; they may be small or large and the frame may be wood or metal.
Bow saw, turning saw, or buck saw: a saw with a narrow blade held in tension in a frame; the blade can usually be rotated and may be toothed on both edges; it may be a rip or a crosscut, and was the preferred form of hand saw for continental European woodworkers until superseded by machines;
Coping saw: a saw with a very narrow blade held in a metal frame in which it can usually be rotated, for cutting wood patterns;
Felloe saw; a pit saw with a narrow tapering blade for sawing out the felloes of wooden cart wheels
Fretsaw: a saw with a very narrow blade which can be rotated, held in a deep metal frame, for cutting intricate wood patterns such as jigsaw puzzles;
Girder saw: a large hack saw with a deep frame;
Hacksaw/bow saw for iron: a fine-toothed blade held in a frame, for cutting metal and other hard materials;
Pit saw/sash saw/whip saw: large wooden-framed saws for converting timber to lumber, with blades of various widths and lengths up to 305 cm (10 feet); the timber is supported over a pit or raised on trestles; other designs are open-bladed;
Stave saw: a narrow tapering-bladed pit saw for sawing out staves for wooden casks;
Surgeon's/surgical saw/Bone cutter: for cutting bone during surgical procedures; some designs are framed, others have an open blade with a characteristic shape of the toe.
Mechanically powered saws
Circular-blade saws
Circular saw: a saw with a circular blade which spins. Circular saws can be large for use in a mill or hand held up to 24" blades and different designs cut almost any kind of material including wood, stone, brick, plastic, etc.
Table saw: a saw with a circular blade rising through a slot in a table. If it has a direct-drive blade small enough to set on a workbench, it is called a "workbench" or "jobsite" saw. If set on steel legs, it is called a "contractor's saw." A heavier, more precise and powerful version, driven by several belts, with an enclosed base stand, is called a "cabinet saw." A newer version, combining the lighter-weight mechanism of a contractor's saw with the enclosed base stand of a cabinet saw, is called a "hybrid saw."
Radial arm saw: a versatile machine, mainly for cross-cutting. The blade is pulled on a guide arm through a piece of wood that is held stationary on the saw's table.
Rotary saw or "spiral-cut saw" or "RotoZip": for making accurate cuts, without using a pilot hole, in wallboard, plywood, and other thin materials.
Electric miter saw or "chop saw," or "cut-off saw" or "power miter box": for making accurate cross cuts and miter cuts. The basic version has a circular blade fixed at a 90° angle to the vertical. A "compound miter saw" has a blade that can be adjusted to other angles. A "sliding compound miter saw" has a blade that can be pulled through the work, in an action similar to that of a radial-arm saw, which provides more capacity for cutting wider workpieces.
Concrete saw: (usually powered by an internal combustion engine and fitted with a Diamond Blade) for cutting concrete or asphalt pavement.
Pendulum saw or "swing saw": a saw hung on a swinging arm, for the rough cross cutting of wood in a sawmill and for cutting ice out of a frozen river.
Abrasive saw: a circular or reciprocating saw-like tool with an abrasive disc rather than a toothed blade, commonly used for cutting very hard materials. As it does not have regularly shaped edges the abrasive saw is not a saw in technical terms.
Hole saw: ring-shaped saw to attach to a power drill, used for cutting a circular hole in material.
Reciprocating blade saws
Dragsaw: for bucking logs (used before the invention of the chainsaw).
Frame saw or sash saw: A thin bladed rip-saw held in tension by a frame used both manually and in sawmills. Some whipsaws are frame saws and some have a heavy blade which does not need a frame called a mulay or muley saw.
Ice saw: for ice cutting. Looks like a mulay saw but sharpened as a cross-cut saw.
Jigsaw or "saber saw" (US): narrow-bladed saw, for cutting irregular shapes. (Also an old term for what is now more commonly called a "scroll saw.")
Power hacksaw or electric hacksaw: a saw for cutting metal, with a frame like a normal hacksaw.
Reciprocating saw or "sabre saw" (UK and Australia): a saw with an "in-and-out" or "up-and-down" action similar to a jigsaw, but larger and more powerful, and using a longer stroke with the blade parallel to the barrel. Hand-held versions, sometimes powered by compressed air, are for demolition work or for cutting pipe.
Scroll saw: for making intricate curved cuts ("scrolls").
Sternal saw: for cutting through a patient's sternum during surgery.
Continuous band
Band saw: a ripsaw on a motor-driven continuous band. Portable sawmills are typically band saw mills.
Chainsaws
Chainsaw: an engine-driven saw with teeth on a chain normally used as a cross-cut saw.
Chainsaw mill: a chainsaw with a special saw chain and guide system for use as a rip-saw.
Types of blades and blade cuts
Most blade teeth are made either of tool steel or carbide. Carbide is harder and holds a sharp edge much longer.
Band saw blade A long band welded into a circle, with teeth on one side. Compared to a circular-saw blade, it produces less waste because it is thinner, dissipates heat better because it is longer (so there is more blade to do the cutting, and is usually run at a slower speed.
Crosscut In woodworking, a cut made at (or close to) a right angle to the direction of the wood grain of the workpiece. A crosscut saw is used to make this type of cut.
Rip cut In woodworking, a cut made parallel to the direction of the grain of the workpiece. A ripsaw is used to make this type of cut.
Plytooth blade A circular saw blade with many small teeth, designed for cutting plywood with minimal splintering.
Dado blade A special type of circular saw blade used for making wide-grooved cuts in wood so that the edge of another piece of wood will fit into the groove to make a joint. Some dado blades can be adjusted to make different-width grooves. A "stacked" dado blade, consisting of chipper blades between two dado blades, can make different-width grooves by adding or removing chipper blades. An "adjustable" dado blade has a movable locking cam mechanism to adjust the degree to which the blade wobbles sideways, allowing continuously variable groove widths from the lower to upper design limits of the dado.
Strobe saw blade A circular saw blade with special rakers/cutters to easily saw through green or uncured wood that tends to jam other kinds of saw blades.
Materials used for saws
There are several materials used in saws, with each of its own specifications.
Brass Used only for the reinforcing folded strip along the back of backsaws, and to make the screws that in earlier times held the blade to the handle.
Iron Used for blades and for the reinforcing strip on cheaper backsaws until superseded by steel.
Zinc Used only for saws made to cut blocks of salt, as formerly used in kitchens
Copper Used as an alternative to zinc for salt-cutting saws
Steel Used in almost every existing kind of saw. Because steel is cheap, easy to shape, and very strong, it has the right properties for most kind of saws.
Diamond Fixed onto the saw blade's base to form diamond saw blades. As diamond is a superhard material, diamond saw blades can be used to cut hard brittle or abrasive materials, for example, stone, concrete, asphalt, bricks, ceramics, glass, semiconductor and gem stone. There are many methods used to fix the diamonds onto the blades' base and there are various kinds of diamond saw blades for different purposes.
High-speed steel (HSS) The whole saw blade is made of High-Speed Steel (HSS). HSS saw blades are mainly used to cut steel, copper, aluminum and other metal materials. If high-strength steels (e.g., stainless steel) are to be cut, the blades made of cobalt HSS (e.g. M35, M42) should be used.
Tungsten carbide Normally, there are two ways to use tungsten carbide to make saw blades:
Carbide-tipped saw blades The saw blade's teeth are tipped (via welding) with small pieces of sharp tungsten carbide block. This type of blade is also called TCT (Tungsten Carbide-Tipped) saw blade. Carbide-tipped saw blades are widely used to cut wood, plywood, laminated board, plastic, glass, aluminum and some other metals.
Solid-carbide saw blades The whole saw blade is made of tungsten carbide. Comparing with HSS saw blades, solid-carbide saw blades have higher hardness under high temperatures, and are more durable, but they also have a lower toughness.
Uses
Saws are commonly used for cutting hard materials. They are used extensively in forestry, construction, demolition, medicine, and hunting.
Musical saws are used as instruments to make music.
Chainsaw carving is a flourishing modern art form. Special saws have been developed for the purpose.
The production of lumber, lengths of squared wood for use in construction, begins with the felling of trees and the transportation of the logs to a sawmill.
Plainsawing: Lumber that will be used in structures is typically plainsawn (also called flatsawn), a method of dividing the log that produces the maximum yield of useful pieces and therefore the greatest economy.
Quarter sawing: This sawing method produces edge-grain or vertical grain lumber, in which annual growth rings run more consistently perpendicular to the pieces' wider faces.
| Technology | Hand tools | null |
192184 | https://en.wikipedia.org/wiki/Pica%20%28disorder%29 | Pica (disorder) | Pica is the craving or consumption of objects that are not normally intended to be consumed. It is classified as an eating disorder but can also be the result of an existing mental disorder. The ingested or craved substance may be biological, natural or manmade. The term was drawn directly from the medieval Latin word for magpie, a bird subject to much folklore regarding its opportunistic feeding behaviors.
According to the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5), pica as a standalone eating disorder must persist for more than one month at an age when eating such objects is considered developmentally inappropriate, not part of culturally sanctioned practice, and sufficiently severe to warrant clinical attention. Pica may lead to intoxication in children, which can result in an impairment of both physical and mental development. In addition, it can cause surgical emergencies to address intestinal obstructions, as well as more subtle symptoms such as nutritional deficiencies and parasitosis. Pica has been linked to other mental disorders. Stressors such as psychological trauma, maternal deprivation, family issues, parental neglect, pregnancy, and a disorganized family structure are risk factors for pica.
Pica is most commonly seen in pregnant women, small children, and people who may have developmental disabilities such as autism. Children eating painted plaster containing lead may develop brain damage from lead poisoning. A similar risk exists from eating soil near roads that existed before the phase-out of tetraethyllead or that were sprayed with oil (to settle dust) contaminated by toxic PCBs or dioxin. In addition to poisoning, a much greater risk exists of gastrointestinal obstruction or tearing in the stomach. Another risk of eating soil is the ingestion of animal feces and accompanying parasites. Cases of severe bacterial infections occurrence (leptospirosis) in patients diagnosed with pica have also been reported. Pica can also be found in animals such as dogs and cats.
Signs and symptoms
Pica is the consumption of substances with no significant nutritional value such as soap, plaster, or paint. Subtypes are characterized by the substance eaten:
This eating pattern should last at least one month to meet the time diagnostic criteria of pica.
Complications
Complications may occur due to the substance consumed. For example, lead poisoning may result from the ingestion of paint or paint-soaked plaster, hairballs may cause intestinal obstruction, and Toxoplasma or Toxocara infections may follow ingestion of feces or soil.
Causes
Pica is currently recognized as a mental disorder by the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). According to the DSM-5, mineral deficiencies are occasionally associated with pica, but biological abnormalities are rarely found. People practicing forms of pica, such as geophagy, pagophagy, and amylophagy, are more likely to be anemic or to have low hemoglobin concentration in their blood, lower levels of red blood cells (hematocrit), or lower plasma zinc levels. Specifically, practicing geophagy is more likely to be associated with anemia or low hemoglobin. Practicing pagophagy and amylophagy is more highly associated with anemia. Additionally, children and pregnant women may be more likely to have anemia or low hemoglobin relative to the general population.
Mental health conditions such as obsessive–compulsive disorder (OCD) and schizophrenia have been proposed as causes of pica. More recently, cases of pica have been tied to the obsessive–compulsive spectrum, and a move has arisen to consider OCD in the cause of pica. Sensory, physiological, cultural, and psychosocial perspectives have also been used to explain the causation of pica.
Pica may be a cultural practice not associated with a deficiency or disorder. Ingestion of kaolin (white clay) among African American women in the US state of Georgia shows the practice there to be a DSM-4 "culture-bound syndrome" and "not selectively associated with other psychopathology". Similar kaolin ingestion is also widespread in parts of Africa. Such practices may stem from purported health benefits, such as the ability of clay to absorb plant toxins and protect against toxic alkaloids and tannic acids.
Diagnosis
No single test confirms pica, but because pica can occur in people who have lower than normal nutrient levels and poor nutrition (malnutrition), the health care provider should test blood levels of iron and zinc.
Hemoglobin can also be checked to test for anemia. Lead levels should always be checked in children who may have eaten paint or objects covered in lead-paint dust. The healthcare provider should test and monitor for infection if the patient has been eating contaminated soil or animal waste.
DSM-5
The DSM-5 posits four criteria that must be met for a person to be diagnosed with pica:
Person must have been eating non-nutritive nonfoods for at least one month.
This eating must be considered abnormal for the person's stage of development.
Eating these substances cannot be associated with a cultural practice that is considered normal in the social context of the individual.
For people who currently have a medical condition (e.g.: pregnancy) or a mental disorder (e.g.: autism spectrum), the action of eating non-nutritive nonfoods should only be considered pica if it is dangerous and requires extra medical investigation or treatment on top of what they are already receiving for their pre-existing condition.
Differential diagnosis
In individuals with autism, schizophrenia, and certain physical disorders (such as Kleine–Levin syndrome), non-nutritive substances may be eaten. In such instances, pica should only be noted as an additional diagnosis if the eating behaviour is sufficiently persistent and severe to warrant additional clinical attention.
Treatment
Treatment for pica may vary by patient and suspected cause (e.g., child, developmentally disabled, pregnant, or psychogenic) and may emphasize psychosocial, environmental and family-guidance approaches; iron deficiency may be treatable through iron supplements or through dietary changes. An initial approach often involves screening for, and if necessary, treating any mineral deficiencies or other comorbid conditions. For pica that appears to be of psychogenic cause, therapy and medication such as SSRIs have been used successfully. However, previous reports have cautioned against the use of medication until all non-psychogenic causes have been ruled out.
Looking back at the different causes of pica related to assessment, the clinician tries to develop a treatment. First, there is pica as a result of social attention. A strategy might be used of ignoring the person's behavior or giving them the least possible attention. If their pica is a result of obtaining a favorite item, a strategy may be used where the person is able to receive the item or activity without eating inedible items. The individual's communication skills should increase so that they can relate what they want to another person without engaging in this behavior. If pica is a way for a person to escape an activity or situation, the reason why the person wants to escape the activity should be examined and the person should be moved to a new situation. If pica is motivated by sensory feedback, an alternative method of feeling that sensation should be provided. Other nonmedication techniques might include other ways for oral stimulation such as gum. Foods such as popcorn have also been found helpful. These things can be placed in a "pica box" that should be easily accessible to the individual when they feel like engaging in pica.
Behavior-based treatment options for pica can be useful for individuals who have a developmental disability or mental illness. Behavioral treatments have been shown to reduce pica severity by 80% in people with intellectual disabilities. These treatments may involve using positive reinforcement normal behavior. Many use aversion therapy, where the patient learns through positive reinforcement which foods are good and which ones they should not eat. Often, treatment is similar to the treatment of obsessive–compulsive or addictive disorders (such as exposure therapy). In some cases, treatment is as simple as addressing the fact they have this disorder and why they may have it. A recent study classified nine such classes of behavioral intervention: Success with treatment is generally high and generally fades with age, but it varies depending on the cause of the disorder. Developmental causes tend to have a lower success rate.
Epidemiology
The prevalence of pica is difficult to establish because of differences in definition and the reluctance of patients to admit to abnormal cravings and ingestion, thus leading to the prevalence recordings of pica among at-risk groups being in the range of 8% to 65% depending on the study. Based on compiled self-report and interview data of pregnant and postpartum women, pica is most prevalent geographically in Africa, with an estimated prevalence of 44.8%, followed by North and South America (23.0%) and Eurasia (17.5%). Factors associated with Pica in this population were determined to be anemia and low levels of education, both of which are associated with low socioeconomic backgrounds. Two studies of adults with intellectual disabilities living in institutions found that 21.8% and 25.8% of these groups had pica.
Prevalence rates for children are unknown. Young children commonly place non-nutritious material into their mouths. This activity occurs in 75% of 12-month-old infants, and 15% of two- to three-year-old children.
In institutionalized children with mental disabilities, pica occurs in 10–33%.
History
The condition currently known as pica was first described by Hippocrates.
The term pica originates in the Latin word for magpie, pīca, a bird famed for its unusual eating behaviors and believed to eat almost anything. The Latin may have been a translation of a Greek word meaning both 'magpie, jay' and 'pregnancy craving, craving for strange food'. In 13th-century Latin work, pica was referenced by the Greeks and Romans; however, it was not addressed in medical texts until 1563.
In the southern United States in the 1800s, geophagia was a common practice among the slave population. Geophagia is a form of pica in which the person consumes earthly substances such as clay, and is particularly prevalent to augment a mineral-deficient diet. Noteworthy is the fact that kaolin was consumed by West Africans enslaved in the Southeastern United States, particularly the Georgia belt, due to the antidiarrheal qualities in the treatment of dysentery and other abdominal ailments. The practice of consuming kaolin rocks was thereafter studied scientifically, the results of which led to the subsequent pharmaceutical commercialization of kaolinite, the clay mineral contained in kaolin. Kaolinite became the active ingredient in antidiarrheal drugs such as Kaopectate, although it was replaced by attapulgite in the 1980s and by bismuth subsalicylate starting in 2004.
Research on eating disorders from the 16th to the 20th centuries suggests that during that time in history, pica was regarded more as a symptom of other disorders rather than its own specific disorder. Even today, what could be classified as pica behavior is a normative practice in some cultures as part of their beliefs, healing methods, or religious ceremonies.
Prior to the elimination of the category of "feeding disorders in infancy and early childhood", which is where pica was classified, from the DSM-5, pica was primarily diagnosed in children. However, since the removal of the category, psychiatrists have started to diagnose pica in people of all ages.
The Glore Psychiatric Museum in Saint Joseph, Missouri has a 1910 exhibit with "an imaginative starburst arrangement of 1,446 buttons, screws, bolts, and nails that were eaten by a patient who died unexpectedly. They were only discovered during her autopsy."
Animals
Unlike in humans, pica in dogs or cats may be a sign of immune-mediated hemolytic anemia, especially when it involves eating substances such as tile grout, concrete dust, and sand. Dogs exhibiting this form of pica should be tested for anemia with a complete blood count or at least hematocrit levels. Although several hypotheses have been proposed by experts to explain pica in animals, insufficient evidence exists to prove or disprove any of them.
| Biology and health sciences | Mental disorders | Health |
192198 | https://en.wikipedia.org/wiki/Polio%20vaccine | Polio vaccine | Polio vaccines are vaccines used to prevent poliomyelitis (polio). Two types are used: an inactivated poliovirus given by injection (IPV) and a weakened poliovirus given by mouth (OPV). The World Health Organization (WHO) recommends all children be fully vaccinated against polio. The two vaccines have eliminated polio from most of the world, and reduced the number of cases reported each year from an estimated 350,000 in 1988 to 33 in 2018.
The inactivated polio vaccines are very safe. Mild redness or pain may occur at the site of injection. Oral polio vaccines cause about three cases of vaccine-associated paralytic poliomyelitis per million doses given. This compares with 5,000 cases per million who are paralysed following a polio infection. Both types of vaccine are generally safe to give during pregnancy and in those who have HIV/AIDS but are otherwise well. However, the emergence of circulating vaccine-derived poliovirus (cVDPV), a form of the vaccine virus that has reverted to causing poliomyelitis, has led to the development of novel oral polio vaccine type 2 (nOPV2) which aims to make the vaccine safer and thus stop further outbreaks of cVDPV.
The first successful demonstration of a polio vaccine was by Hilary Koprowski in 1950, with a live attenuated virus which people drank. The vaccine was not approved for use in the United States, but was used successfully elsewhere. The success of an inactivated (killed) polio vaccine, developed by Jonas Salk, was announced in 1955. Another attenuated live oral polio vaccine was developed by Albert Sabin and came into commercial use in 1961.
Polio vaccine is on the World Health Organization's List of Essential Medicines.
Medical uses
Interruption of person-to-person transmission of the virus by vaccination is important in global polio eradication, since no long-term carrier state exists for poliovirus in individuals with normal immune function, polio viruses have no non-primate reservoir in nature, and survival of the virus in the environment for an extended period appears to be remote. There are two types of vaccine: inactivated polio vaccine (IPV) and oral polio vaccine (OPV).
Inactivated
When the IPV (injection) is used, 90% or more of individuals develop protective antibodies to all three serotypes of polio virus after two doses of inactivated polio vaccine (IPV), and at least 99% are immune to poliovirus following three doses. The duration of immunity induced by IPV is not known with certainty, although a complete series is thought to protect for many years. IPV replaced the oral vaccine in many developed countries in the 1990s mainly due to the (small) risk of vaccine-derived polio in the oral vaccine.
Attenuated
Oral polio vaccines were easier to administer than IPV, as they eliminated the need for sterile syringes and therefore were more suitable for mass vaccination campaigns. OPV also provided longer-lasting immunity than the Salk vaccine, as it provides both humoral immunity and cell-mediated immunity.
One dose of trivalent OPV produces immunity to all three poliovirus serotypes in roughly 50% of recipients. Three doses of live-attenuated OPV produce protective antibodies to all three poliovirus types in more than 95% of recipients. As with other live-virus vaccines, immunity initiated by OPV is probably lifelong. OPV produces excellent immunity in the intestine, the primary site of wild poliovirus entry, which helps prevent infection with wild virus in areas where the virus is endemic. The oral administration does not require special medical equipment or extensive training. Attenuated poliovirus derived from the oral polio vaccine is excreted for a few days after vaccination, potentially infecting and thus indirectly inducing immunity in unvaccinated individuals, and thus amplifying the effects of the doses delivered. Taken together, these advantages have made it the favored vaccine of many countries, and it has long been preferred by the global eradication initiative.The primary disadvantage of OPV derives from its inherent nature. As an attenuated but active virus, it can induce vaccine-associated paralytic poliomyelitis (VAPP) in approximately one individual per every 2.7million doses administered. The live virus can circulate in under-vaccinated populations (termed either variant poliovirus or circulating vaccine-derived poliovirus, cVDPV) and over time can revert to a neurovirulent form causing paralytic polio. This genetic reversal of the pathogen to a virulent form takes a considerable time and does not affect the person who was originally vaccinated. With wild polio cases at record lows, 2017 was the first year where more cases of cVDPV were recorded than the wild poliovirus.
Until recent times, a trivalent OPV containing all three virus strains was used, and had nearly eradicated polio infection worldwide. With the complete eradication of wild poliovirus type2 this was phased out in 2016 and replaced with bivalent vaccine containing just types 1 and 3, supplemented with monovalent type2 OPV in regions where cVDPV type 2 was known to circulate. The switch to the bivalent vaccine and associated missing immunity against type 2 strains, among other factors, led to outbreaks of circulating vaccine-derived poliovirus type 2 (cVDPV2), which increased from 2 cases in 2016 to 1037 cases in 2020.
A novel OPV2 vaccine (nOPV2) which has been genetically modified to reduce the likelihood of disease-causing activating mutations was granted emergency licencing in 2021, and subsequently full licensure in December 2023. This has greater genetic stability than the traditional oral vaccine and is less likely to revert to a virulent form. Genetically stabilised vaccines targeting poliovirus types 1 and 3 are in development, with the intention that these will eventually completely replace the Sabin vaccines.
Schedule
In countries with endemic polio or where the risk of imported cases is high, the WHO recommends OPV vaccine at birth followed by a primary series of three OPV doses and at least one IPV dose starting at 6 weeks of age, with a minimum of 4 weeks between OPV doses. In countries with >90% immunization coverage and low risk of importation, the WHO recommends one or two IPV doses starting at 2 months of age followed by at least two OPV doses, with the doses separated by 4–8 weeks depending on the risk of exposure. In countries with the highest levels of coverage and the lowest risks of importation and transmission, the WHO recommends a primary series of three IPV injections, with a booster dose after an interval of six months or more if the first dose was administered before 2 months of age.
Side effects
The inactivated polio vaccines are very safe. Mild redness or pain may occur at the site of injection. They are generally safe to be given to pregnant women and those who have HIV/AIDS but are otherwise well.
Allergic reaction to the vaccine
Inactivated polio vaccine can cause an allergic reaction in a few people since the vaccine contains trace amounts of antibiotics, streptomycin, polymyxin B, and neomycin. It should not be given to anyone who has an allergic reaction to these medicines. Signs and symptoms of an allergic reaction, which usually appear within minutes or a few hours after receiving the injected vaccine, include breathing difficulties, weakness, hoarseness or wheezing, heart rate fluctuations, skin rash, and dizziness.
Vaccine-associated paralytic polio
A potential adverse effect of the Sabin OPV is caused by its known potential to recombine to a form that causes neurological infection and paralysis. The Sabin OPV results in vaccine-associated paralytic poliomyelitis (VAPP) in approximately one individual per every 2.7million doses administered, with symptoms identical to wild polio. Due to its improved genetic stability, the novel OPV (nOPV) has a reduced risk of this occurring.
Contamination concerns
In 1960, the rhesus monkey kidney cells used to prepare the poliovirus vaccines were determined to be infected with the simian virus-40 (SV40), which was also discovered in 1960 and is a naturally occurring virus that infects monkeys. In 1961, SV40 was found to cause tumors in rodents. More recently, the virus was found in certain forms of cancer in humans, for instance brain and bone tumors, pleural and peritoneal mesothelioma, and some types of non-Hodgkin lymphoma. However, SV40 has not been determined to cause these cancers.
SV40 was found to be present in stocks of the injected form of the IPV in use between 1955 and 1963. It is not found in the OPV form. Over 98 million Americans received one or more doses of polio vaccine between 1955 and 1963 when a proportion of vaccine was contaminated with SV40; an estimated 10–30 million Americans may have received a dose of vaccine contaminated with SV40. Later analysis suggested that vaccines produced by the former Soviet bloc countries until 1980, and used in the USSR, China, Japan, and several African countries, may have been contaminated, meaning hundreds of millions more may have been exposed to SV40.
In 1998, the National Cancer Institute undertook a large study, using cancer case information from the institute's SEER database. The published findings from the study revealed no increased incidence of cancer in persons who may have received vaccine containing SV40. Another large study in Sweden examined cancer rates of 700,000 individuals who had received potentially contaminated polio vaccine as late as 1957; the study again revealed no increased cancer incidence between persons who received polio vaccines containing SV40 and those who did not. The question of whether SV40 causes cancer in humans remains controversial, however, and the development of improved assays for detection of SV40 in human tissues will be needed to resolve the controversy.
During the race to develop an oral polio vaccine, several large-scale human trials were undertaken. By 1958, the National Institutes of Health had determined that OPV produced using the Sabin strains was the safest. Between 1957 and 1960, however, Hilary Koprowski continued to administer his vaccine around the world. In Africa, the vaccines were administered to roughly one million people in the Belgian territories (now the Democratic Republic of the Congo, Rwanda, and Burundi). The results of these human trials have been controversial, and unfounded accusations in the 1990s arose that the vaccine had created the conditions necessary for transmission of simian immunodeficiency virus from chimpanzees to humans, causing HIV/AIDS. These hypotheses, however, have been conclusively refuted. By 2004, cases of poliomyelitis in Africa had been reduced to just a small number of isolated regions in the western portion of the continent, with sporadic cases elsewhere. Recent local opposition to vaccination campaigns have evolved due to lack of adequate information, often relating to fears that the vaccine might induce sterility. The disease has since resurged in Nigeria and in several other African nations without necessary information, which epidemiologists believe is due to refusals by certain local populations to allow their children to receive the polio vaccine.
Manufacture
Inactivated
The Salk vaccine, IPV, is based on three wild, virulent reference strains, Mahoney (type 1 poliovirus), MEF-1 (type 2 poliovirus), and Saukett (type 3 poliovirus), grown in a type of monkey kidney tissue culture (Vero cell line), which are then inactivated with formalin. The injected Salk vaccine confers IgG-mediated immunity in the bloodstream, which prevents polio infection from progressing to viremia and protects the motor neurons, thus eliminating the risk of bulbar polio and post-polio syndrome.
In the United States, the vaccine is administered along with the tetanus, diphtheria, and acellular pertussis vaccines (DTaP) and a pediatric dose of hepatitis B vaccine. In the UK, IPV is combined with tetanus, diphtheria, pertussis, and Haemophilus influenzae type b vaccines.
Attenuated
OPV is an attenuated vaccine, produced by the passage of the virus through nonhuman cells at a subphysiological temperature, which produces spontaneous mutations in the viral genome. Oral polio vaccines were developed by several groups, one of which was led by Albert Sabin. Other groups, led by Hilary Koprowski and H.R. Cox, developed their attenuated vaccine strains. In 1958, the National Institutes of Health created a special committee on live polio vaccines. The various vaccines were carefully evaluated for their ability to induce immunity to polio while retaining a low incidence of neuropathogenicity in monkeys. Large-scale clinical trials performed in the Soviet Union in the late 1950s to early 1960s by Mikhail Chumakov and his colleagues demonstrated the safety and high efficacy of the vaccine. Based on these results, the Sabin strains were chosen for worldwide distribution. Fifty-seven nucleotide substitutions distinguish the attenuated Sabin 1 strain from its virulent parent (the Mahoney serotype), two nucleotide substitutions attenuate the Sabin 2 strain, and 10 substitutions are involved in attenuating the Sabin 3 strain. The primary attenuating factor common to all three Sabin vaccines is a mutation located in the virus's internal ribosome entry site, which alters stem-loop structures and reduces the ability of poliovirus to translate its RNA template within the host cell. The attenuated poliovirus in the Sabin vaccine replicates very efficiently in the gut, the primary site of infection and replication, but is unable to replicate efficiently within nervous system tissue. In 1961, type 1 and 2 monovalent oral poliovirus vaccine (MOPV) was licensed, and in 1962, type 3 MOPV was licensed. In 1963, trivalent OPV (TOPV) was licensed, and became the vaccine of choice in the United States and most other countries of the world, largely replacing the inactivated polio vaccine. 
A second wave of mass immunizations led to a further dramatic decline in the number of polio cases. Between 1962 and 1965, about 100 million Americans (roughly 56% of the population at that time) received the Sabin vaccine. The result was a substantial reduction in the number of poliomyelitis cases, even from the much-reduced levels following the introduction of the Salk vaccine.
OPV is usually provided in vials containing 10–20 doses of vaccine. A single dose of oral polio vaccine (usually two drops) contains 1,000,000 infectious units of Sabin 1 (effective against PV1), 100,000 infectious units of the Sabin 2 strain, and 600,000 infectious units of Sabin 3. The vaccine contains small traces of antibiotics—neomycin and streptomycin—but does not contain preservatives.
History
In a generic sense, vaccination works by priming the immune system with an 'immunogen'. Stimulating immune response, by use of an infectious agent, is known as immunization. The development of immunity to polio efficiently blocks person-to-person transmission of wild poliovirus, thereby protecting both individual vaccine recipients and the wider community.
The development of two polio vaccines led to the first modern mass inoculations. The last cases of paralytic poliomyelitis caused by endemic transmission of wild virus in the United States occurred in 1979, with an outbreak among the Amish in several Midwest states.
1930s
In the 1930s, poliovirus was perceived as especially terrifying, as little was known of how the disease was transmitted or how it could be prevented. The virus was also notable for primarily affecting affluent children, making it a prime target for vaccine development despite its relatively low mortality and morbidity. Even so, the community of researchers in the field had largely observed an informal moratorium on vaccine development, as it was perceived to present too high a risk for too little likelihood of success.
This shifted in the early 1930s when American groups took up the challenge: Maurice Brodie led a team from the public health laboratory of the city of New York and John A. Kolmer collaborated with the Research Institute of Cutaneous Medicine in Philadelphia. The rivalry between these two researchers lent itself to a race-like mentality which, combined with a lack of oversight of medical studies, was reflected in the methodology and outcomes of each of these early vaccine development ventures.
Kolmer's live vaccine
Kolmer began his vaccine development project in 1932 and ultimately focused on producing an attenuated or live virus vaccine. Inspired by the success of vaccines for rabies and yellow fever, he hoped to use a similar process to denature the polio virus. In order to go about attenuating his polio vaccine, he repeatedly passed the virus through monkeys. Using methods of production that were later described as "hair-raisingly amateurish, the therapeutic equivalent of bath-tub gin," Kolmer ground the spinal cords of his infected monkeys and soaked them in a salt solution. He then filtered the solution through mesh, treated it with ricinolate, and refrigerated the product for 14 days to ultimately create what would later be prominently critiqued as a "veritable witches brew".
In keeping with the norms of the time, Kolmer completed a relatively small animal trial with 42 monkeys before proceeding to self-experimentation in 1934. He tested his vaccine upon himself, his two children, and his assistant. He gave his vaccine to just 23 more children before declaring it safe and sending it out to doctors and health departments for a larger test of efficacy. By April 1935, he was able to report having tested the vaccine on 100 children without ill effect. Kolmer's first formal presentation of results did not come until November 1935, when he presented the results of 446 children and adults he had vaccinated with his attenuated vaccine. He also reported that together the Research Institute of Cutaneous Medicine and the Merrell Company of Cincinnati (the manufacturer, which held the patent for his ricinoleating process) had distributed 12,000 doses of vaccine to some 700 physicians across the United States and Canada. Kolmer did not describe any monitoring of this experimental vaccination program, nor did he provide these physicians with instructions on how to administer the vaccine or how to report side effects. Kolmer dedicated the bulk of his publications thereafter to explaining what he believed to be the cause of the more than ten reported cases of paralytic polio following vaccination, in many cases in towns where no polio outbreak had occurred. Six of these cases had been fatal. Kolmer had no control group but asserted that many more children would have become sick without the vaccine.
Brodie's inactivated vaccine
At nearly the same time as Kolmer's project, Maurice Brodie had joined immunologist William H. Park at the New York City Health Department where they worked together on poliovirus. With the aid of grant funding from the President's Birthday Ball Commission (a predecessor to what would become the March of Dimes), Brodie was able to pursue the development of an inactivated or "killed virus" vaccine. Brodie's process also began by grinding the spinal cords of infectious monkeys and then treating the cords with various germicides, ultimately finding a solution of formaldehyde to be the most effective. By 1 June 1934, Brodie was able to publish his first scholarly article describing his successful induction of immunity in three monkeys with inactivated poliovirus. Through continued study on an additional 26 monkeys, Brodie ultimately concluded that administration of live virus vaccine tended to result in humoral immunity while administration of killed virus vaccine tended to result in tissue immunity.
Soon after, following a similar protocol to Kolmer's, Brodie proceeded with self-experimentation upon himself and his co-workers at the NYC Health Department laboratory. Brodie's progress was eagerly covered by the popular press as the public hoped for a successful vaccine to become available. Such reporting did not make mention of the 12 children in a New York City asylum who were subjected to early safety trials. As none of the subjects experienced ill effects, Park, described by contemporaries as "never one to let grass grow under his feet," declared the vaccine safe. When a severe polio outbreak overwhelmed Kern County, California, it became the first trial site for the new vaccine on very short notice. Between November 1934 and May 1935, over 1,500 doses of the vaccine were administered in Kern County. While initial results were very promising, insufficient staffing and poor protocol design left Brodie open to criticism when he published the California results in August 1935. Through private physicians, Brodie also conducted a broader field study, including 9,000 children who received the vaccine and 4,500 age- and location-matched controls who did not. Again, the results were promising: of those who received the vaccine, only a few went on to develop polio, most of whom had been exposed before vaccination, and none of whom had received the full series of vaccine doses being studied. Additionally, a polio epidemic in Raleigh, North Carolina, provided an opportunity for the U.S. Public Health Service to conduct a highly structured trial of the Brodie vaccine using funding from the Birthday Ball Commission.
Academic reception
While their work was ongoing, the larger community of bacteriologists began to raise concerns regarding the safety and efficacy of the new poliovirus vaccines. At this time there was very little oversight of medical studies, and the ethical treatment of study participants relied largely upon moral pressure from peer academic scientists. Brodie's inactivated vaccine faced scrutiny from many who felt killed-virus vaccines could not be efficacious. While researchers were able to replicate the tissue immunity he had produced in his animal trials, the prevailing wisdom was that humoral immunity was essential for an efficacious vaccine. Kolmer directly questioned the killed-virus approach in scholarly journals. Kolmer's own studies, however, had raised even more concern, with increasing reports of children becoming paralyzed following vaccination with his live virus vaccine, and notably, with paralysis beginning at the arm rather than the foot in many cases. Both Kolmer and Brodie were called to present their research at the Annual Meeting of the American Public Health Association in Milwaukee, Wisconsin, in October 1935. Additionally, Thomas M. Rivers, a prominent critic of the vaccine development effort, was asked to discuss each of the presented papers. This resulted in the APHA arranging a Symposium on Poliomyelitis to be delivered at the Annual Meeting of its Southern Branch the following month. It was during the discussion at this meeting that James Leake of the U.S. Public Health Service stood to present clinical evidence that the Kolmer vaccine had caused several deaths, and then allegedly accused Kolmer of being a murderer. As Rivers recalled in his oral history, "All hell broke loose, and it seemed as if everybody was trying to talk at the same time...Jimmy Leake used the strongest language that I have ever heard used at a scientific meeting."
In response to the attacks from all sides, Brodie was reported to have stood up and stated, "It looks as though, according to Dr. Rivers, my vaccine is no good, and, according to Dr. Leake, Dr. Kolmer's is dangerous." Kolmer simply responded by stating, "Gentlemen, this is one time I wish the floor would open up and swallow me." Kolmer's live vaccine was ultimately shown beyond doubt to be dangerous and had already been withdrawn in September 1935, before the Milwaukee meeting. While the consensus of the symposium was largely skeptical of the efficacy of Brodie's vaccine, its safety was not in question, and the recommendation was for a much larger, well-controlled trial. However, when three children became ill with paralytic polio following a dose of the vaccine, the directors of the Warm Springs Foundation in Georgia (the primary funders of the project) requested that it be withdrawn in December 1935. Following its withdrawal, the previously observed moratorium on human poliomyelitis vaccine development resumed, and there would not be another attempt for nearly 20 years.
While Brodie had arguably made the most progress in the pursuit of a poliovirus vaccine, he suffered the most significant career repercussions, owing to his status as a less widely known researcher. Modern researchers recognize that Brodie may well have developed an effective polio vaccine; however, the basic science and technology of the time were insufficient to understand and exploit this breakthrough. Brodie's work using formalin-inactivated virus would later become the basis for the Salk vaccine, but he did not live to see this success. Brodie was fired from his position within three months of the symposium's publication. While he was able to find another laboratory position, he died of a heart attack only three years later, at age 36. By contrast, Park, whom many in the community believed to be reaching senility by this point, was able to retire from his position with honors before he died in 1939. Kolmer, already an established and well-respected researcher, returned to Temple University as a professor of medicine. He had a very productive career, receiving multiple awards and publishing numerous papers, articles, and textbooks up until his retirement in 1957.
1948
A breakthrough came in 1948 when a research group headed by John Enders at the Children's Hospital Boston successfully cultivated the poliovirus in human tissue in the laboratory. This group had recently successfully grown mumps in cell culture. In March 1948, Thomas H. Weller was attempting to grow varicella virus in embryonic lung tissue. He had inoculated the planned number of tubes when he noticed that there were a few unused tubes. He retrieved a sample of mouse brain infected with poliovirus and added it to the remaining test tubes, on the off chance that the virus might grow. The varicella cultures failed to grow, but the polio cultures were successful. This development greatly facilitated vaccine research and ultimately allowed for the development of vaccines against polio. Enders and his colleagues, Thomas H. Weller and Frederick C. Robbins, were recognized in 1954 for their efforts with a Nobel Prize in Physiology or Medicine. Other important advances that led to the development of polio vaccines were: the identification of three poliovirus serotypes (Poliovirus type 1 – PV1, or Mahoney; PV2, Lansing; and PV3, Leon); the finding that before paralysis, the virus must be present in the blood; and the demonstration that administration of antibodies in the form of gamma globulin protects against paralytic polio.
1950–1955
During the early 1950s, polio rates in the U.S. were above 25,000 cases annually; in 1952 and 1953, outbreaks reached 58,000 and 35,000 cases, respectively, up from a typical level of some 20,000 a year, with deaths in those years numbering 3,200 and 1,400. Amid this U.S. polio epidemic, millions of dollars were invested in finding and marketing a polio vaccine by commercial interests, including Lederle Laboratories in New York under the direction of H. R. Cox. Also working at Lederle was the Polish-born virologist and immunologist Hilary Koprowski of the Wistar Institute in Philadelphia, who in 1950 tested the first successful polio vaccine. His vaccine, however, being a live attenuated virus taken orally, was still in the research stage and would not be ready for use until five years after Jonas Salk's polio vaccine (a dead-virus injectable vaccine) had reached the market. Koprowski's attenuated vaccine was prepared by successive passages through the brains of Swiss albino mice. By the seventh passage, the vaccine strains could no longer infect nervous tissue or cause paralysis. After one to three further passages in rats, the vaccine was deemed safe for human use. On 27 February 1950, Koprowski's live, attenuated vaccine was tested for the first time on an 8-year-old boy living at Letchworth Village, an institution for physically and mentally disabled people in New York. After the child had no side effects, Koprowski enlarged his experiment to include 19 other children.
Jonas Salk
The first effective polio vaccine was developed in 1952 by Jonas Salk and a team at the University of Pittsburgh that included Julius Youngner, Byron Bennett, L. James Lewis, and Lorraine Friedman; the vaccine required years of subsequent testing. On 26 March 1953, Salk went on CBS radio to report a successful test on a small group of adults and children; two days later, the results were published in JAMA. Leone N. Farrell invented a key laboratory technique that enabled mass production of the vaccine by a team she led in Toronto. Beginning 23 February 1954, the vaccine was tested at Arsenal Elementary School and the Watson Home for Children in Pittsburgh, Pennsylvania.
Salk's vaccine was then used in a test called the Francis Field Trial, led by Thomas Francis, the largest medical experiment in history up to that time. The test began with about 4,000 children at Franklin Sherman Elementary School in McLean, Virginia, and eventually involved 1.8 million children in 44 states from Maine to California. By the conclusion of the study, roughly 440,000 children had received one or more injections of the vaccine, about 210,000 had received a placebo consisting of harmless culture media, and 1.2 million had received no vaccination and served as a control group, observed to see whether any contracted polio.
The results of the field trial were announced on 12 April 1955 (the tenth anniversary of the death of President Franklin D. Roosevelt, whose paralytic illness was generally believed to have been caused by polio). The Salk vaccine had been 60–70% effective against PV1 (poliovirus type 1), over 90% effective against PV2 and PV3, and 94% effective against the development of bulbar polio. Soon after Salk's vaccine was licensed in 1955, children's vaccination campaigns were launched. In the U.S., following a mass immunization campaign promoted by the March of Dimes, the annual number of polio cases fell from 35,000 in 1953 to 5,600 by 1957. By 1961 only 161 cases were recorded in the United States.
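The efficacy percentages quoted above are relative risk reductions: one minus the ratio of the attack rate in the vaccinated group to the attack rate in the placebo group. A minimal sketch of that calculation, using hypothetical case counts for illustration (not the Francis Field Trial's actual data):

```python
def vaccine_efficacy(cases_vaccinated: int, n_vaccinated: int,
                     cases_placebo: int, n_placebo: int) -> float:
    """Vaccine efficacy = 1 - (attack rate in vaccinated / attack rate in placebo)."""
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# Hypothetical counts chosen only to illustrate the formula:
# 33 cases among 200,000 vaccinated vs. 110 cases among 200,000 placebo recipients
ve = vaccine_efficacy(33, 200_000, 110, 200_000)
print(f"{ve:.0%}")  # → 70%
```

An efficacy of 60–70% against PV1 therefore means that vaccinated children developed type 1 polio at roughly a third of the rate seen in the placebo group, not that 30–40% of vaccinated children fell ill.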
A week before the announcement of the Francis Field Trial results in April 1955, Pierre Lépine at the Pasteur Institute in Paris had also announced an effective polio vaccine.
Safety incidents
In April 1955, soon after mass polio vaccination began in the US, the Surgeon General began to receive reports of patients who contracted paralytic polio about a week after being vaccinated with the Salk polio vaccine from the Cutter pharmaceutical company, with the paralysis starting in the limb the vaccine was injected into. The Cutter vaccine had been used in vaccinating 409,000 children in the western and midwestern United States.
Later investigations showed that the Cutter vaccine had caused 260 cases of polio, killing 11.
In response, the Surgeon General pulled all polio vaccines made by Cutter Laboratories from the market. Eli Lilly, Parke-Davis, Pitman-Moore, and Wyeth polio vaccines were also reported to have paralyzed numerous children. It was soon discovered that some lots of Salk polio vaccine made by Cutter, Wyeth, and the other laboratories had not been properly inactivated, allowing live poliovirus into more than 100,000 doses of vaccine. In May 1955, the National Institutes of Health and the Public Health Service established a Technical Committee on Poliomyelitis Vaccine to test and review all polio vaccine lots and to advise the Public Health Service as to which lots should be released for public use. These incidents reduced public confidence in the polio vaccine, leading to a drop in vaccination rates.
1961
At the same time that Salk was testing his vaccine, both Albert Sabin and Hilary Koprowski continued working on developing a vaccine using live virus. During a meeting in Stockholm to discuss polio vaccines in November 1955, Sabin presented results obtained on a group of 80 volunteers, while Koprowski read a paper detailing the findings of a trial enrolling 150 people. Sabin and Koprowski both eventually succeeded in developing vaccines. Because of the commitment to the Salk vaccine in America, Sabin and Koprowski both did their testing outside the United States, Sabin in Mexico and the Soviet Union, Koprowski in the Congo and Poland. In 1957, Sabin developed a trivalent vaccine containing attenuated strains of all three types of poliovirus. In 1959, ten million children in the Soviet Union received the Sabin oral vaccine. For this work, Sabin was given the medal of the Order of Friendship of Peoples, described as the Soviet Union's highest civilian honor. Sabin's oral vaccine using live virus came into commercial use in 1961.
Once Sabin's oral vaccine became widely available, it supplanted Salk's injected vaccine, which had been tarnished in the public's opinion by the Cutter incident of 1955, in which Salk vaccines improperly prepared by one company resulted in several children dying or becoming paralyzed.
1987
An enhanced-potency IPV was licensed in the United States in November 1987, and is currently the vaccine of choice there. The first dose of the polio vaccine is given shortly after birth, usually between 1 and 2 months of age, and a second dose is given at 4 months of age. The timing of the third dose depends on the vaccine formulation but should be given between 6 and 18 months of age. A booster vaccination is given at 4 to 6 years of age, for a total of four doses at or before school entry. In some countries, a fifth vaccination is given during adolescence. Routine vaccination of adults (18 years of age and older) in developed countries is neither necessary nor recommended because most adults are already immune and have a very small risk of exposure to wild poliovirus in their home countries. In 2002, a pentavalent (five-component) combination vaccine (called Pediarix) containing IPV was approved for use in the United States.
1988
A global effort to eradicate polio, led by the World Health Organization (WHO), UNICEF, and the Rotary Foundation, began in 1988, and has relied largely on the oral polio vaccine developed by Albert Sabin and Mikhail Chumakov (Sabin-Chumakov vaccine).
After 1990
Polio was eliminated in the Americas by 1994. The disease was officially eliminated in 36 Western Pacific countries, including China and Australia, in 2000. Europe was declared polio-free in 2002. No cases of the disease had been reported in India since January 2011, so in February 2012 the country was taken off the WHO list of polio-endemic countries. In March 2014, India was declared a polio-free country.
Although poliovirus transmission has been interrupted in much of the world, transmission of wild poliovirus does continue and creates an ongoing risk for the importation of wild poliovirus into previously polio-free regions. If importations occur, outbreaks of poliomyelitis may develop, especially in areas with low vaccination coverage and poor sanitation. As a result, high levels of vaccination coverage must be maintained. In November 2013, the WHO announced a polio outbreak in Syria. In response, the Armenian government put out a notice asking Syrian Armenians under age 15 to get the polio vaccine. As of 2014, poliovirus had spread to 10 countries, mainly in Africa, Asia, and the Middle East, with Pakistan, Syria, and Cameroon advising vaccinations for outbound travellers.
Polio vaccination programs have been resisted by some people in Pakistan, Afghanistan, and Nigeria, the three countries with remaining polio cases as of 2017. Almost all Muslim religious and political leaders have endorsed the vaccine, but a fringe minority believes that the vaccines are secretly being used for the sterilization of Muslims. The fact that the CIA organized a fake vaccination program in 2011 to help find Osama bin Laden is an additional cause of distrust. In 2015, the WHO announced a deal with the Taliban to encourage them to distribute the vaccine in areas they control. However, the Pakistani Taliban was not supportive. On 11 September 2016, two unidentified gunmen associated with the Pakistani Taliban group Jamaat-ul-Ahrar shot Zakaullah Khan, a doctor who was administering polio vaccines in Pakistan. The leader of Jamaat-ul-Ahrar claimed responsibility for the shooting and stated that the group would continue this type of attack. Such resistance to and skepticism of vaccination has consequently slowed the polio eradication effort in the remaining endemic countries.
Travel requirements
Travellers who wish to enter or leave certain countries must be vaccinated against polio, usually at most 12 months and at least 4 weeks before crossing the border, and be able to present a vaccination record/certificate at the border checks. Most requirements apply only to travel to or from so-called 'polio-endemic', 'polio-affected', 'polio-exporting', 'polio-transmission', or 'high-risk' countries. As of August 2020, Afghanistan and Pakistan are the only polio-endemic countries in the world (where wild polio has not yet been eradicated). Several countries have additional precautionary polio vaccination travel requirements, for example to and from 'key at-risk countries', which as of December 2020 include China, Indonesia, Mozambique, Myanmar, and Papua New Guinea.
Society and culture
Cost
The Global Alliance for Vaccines and Immunization supplies the inactivated vaccine to developing countries for as little as (about ) per dose in 10-dose vials.
Misconceptions
A misconception has been present in Pakistan that the polio vaccine contains haram ingredients and could cause impotence and infertility in male children, leading some parents not to have their children vaccinated. This belief is most common in the Khyber Pakhtunkhwa province and the FATA region. Attacks on polio vaccination teams have also occurred, thereby hampering international efforts to eradicate polio in Pakistan and globally.
Opiliones
The Opiliones (formerly Phalangida) are an order of arachnids,
colloquially known as harvestmen, harvesters, harvest spiders, or daddy longlegs. Over 6,650 species of harvestmen have been discovered worldwide, although the total number of extant species may exceed 10,000. The order Opiliones includes five suborders: Cyphophthalmi, Eupnoi, Dyspnoi, Laniatores, and Tetrophthalmi, the last of which was named in 2014.
Representatives of each extant suborder can be found on all continents except Antarctica.
Well-preserved fossils have been found in the 400-million-year-old Rhynie cherts of Scotland, and 305-million-year-old rocks in France. These fossils look surprisingly modern, indicating that their basic body shape developed very early on, and, at least in some taxa, has changed little since that time.
Their phylogenetic position within the Arachnida is disputed; their closest relatives may be camel spiders (Solifugae) or a larger clade comprising horseshoe crabs, Ricinulei, and Arachnopulmonata (scorpions, pseudoscorpions, and Tetrapulmonata). Although superficially similar to and often misidentified as spiders (order Araneae), the Opiliones are a distinct order that is not closely related to spiders. They can be easily distinguished from long-legged spiders by their fused body regions and single pair of eyes in the middle of the cephalothorax. Spiders have a distinct abdomen that is separated from the cephalothorax by a constriction, and they have three to four pairs of eyes, usually around the margins of the cephalothorax.
English speakers may colloquially refer to species of Opiliones as "daddy longlegs" or "granddaddy longlegs", but this name is also used for two other distantly related groups of arthropods: the crane flies of the superfamily Tipuloidea and the cellar spiders of the family Pholcidae (commonly referred to as "daddy long-leg spiders"), most likely because of their similar appearance. Harvestmen are also referred to as "shepherd spiders" in reference to how their unusually long legs reminded observers of the way some European shepherds used stilts to better observe their wandering flocks from a distance.
Description
The Opiliones are known for having exceptionally long legs relative to their body size; however, some species are short-legged. As in all Arachnida, the body in the Opiliones has two tagmata, the anterior cephalothorax or prosoma, and the posterior 10-segmented abdomen or opisthosoma. The most easily discernible difference between harvestmen and spiders is that in harvestmen, the connection between the cephalothorax and abdomen is broad, so that the body appears to be a single oval structure. Other differences include the fact that Opiliones have no venom glands in their chelicerae and thus pose no danger to humans.
They also have no silk glands and therefore do not build webs. In some highly derived species, the first five abdominal segments are fused into a dorsal shield called the scutum, which in most such species is fused with the carapace. Some such Opiliones have this shield only in the males. In some species, the two posterior abdominal segments are reduced; some of these segments are divided medially on the surface to form two plates lying beside each other. The second pair of legs is longer than the others and functions as antennae or feelers. In short-legged species, this may not be obvious.
The feeding apparatus (stomotheca) differs from most arachnids in that Opiliones can swallow chunks of solid food, not only liquids. The stomotheca is formed by extensions of the coxae of the pedipalps and the first pair of legs.
Most Opiliones, except for the Cyphophthalmi, have long been thought to have a single pair of camera-type eyes in the middle of the head, oriented sideways. Eyes in Cyphophthalmi, when present, are located laterally, near the ozopores. A 305-million-year-old fossilized harvestman with two pairs of eyes was reported in 2014. This find suggested that the eyes of Cyphophthalmi are not homologous to the eyes of other harvestmen. Many cave-adapted species are eyeless, such as the Brazilian Caecobunus termitarum (Grassatores) from termite nests, Giupponia chagasi (Gonyleptidae) from caves, most species of Cyphophthalmi, and all species of the Guasiniidae. However, recent work studying the embryonic development of Phalangium opilio and some Laniatores revealed that harvestmen, in addition to a pair of median eyes, also have two sets of vestigial eyes: one median pair (homologous to those of horseshoe crabs and sea spiders) and one lateral pair (homologous to the faceted eyes of horseshoe crabs and insects). This discovery suggests that the neuroanatomy of harvestmen is more primitive than that of derived arachnid groups such as spiders and scorpions. It also showed that the four-eyed fossil harvestman previously discovered is most likely a member of the suborder Eupnoi (true daddy longlegs).
Harvestmen have a pair of prosomatic defensive scent glands (ozopores) that secrete a peculiar-smelling fluid when disturbed. In some species, the fluid contains noxious quinones. They do not have book lungs, and breathe through tracheae. A pair of spiracles is located between the base of the fourth pair of legs and the abdomen, with one opening on each side. In more active species, spiracles are also found upon the tibia of the legs. They have a gonopore on the ventral cephalothorax, and the copulation is direct as male Opiliones have a penis, unlike other arachnids. All species lay eggs.
Typical body length does not exceed , and some species are smaller than 1 mm, although the largest known species, Trogulus torosus (Trogulidae), grows as long as . The leg span of many species is much greater than the body length and sometimes exceeds and to in Southeast Asia. Most species live for a year.
Behavior
Many species are omnivorous, eating primarily small insects and all kinds of plant material and fungi. Some are scavengers, feeding upon dead organisms, bird dung, and other fecal material. Such a broad range is unusual in arachnids, which are typically pure predators. Most hunting harvestmen ambush their prey, although active hunting is also found. Because their eyes cannot form images, they use their second pair of legs as antennae to explore their environment. Unlike most other arachnids, harvestmen do not have a sucking stomach or a filtering mechanism. Rather, they ingest small particles of their food, thus making them vulnerable to internal parasites such as gregarines.
Although parthenogenetic species do occur, most harvestmen reproduce sexually. Except for small fossorial species in the suborder Cyphophthalmi, where the males deposit a spermatophore, mating involves direct copulation. Females store the sperm, which is aflagellate and immobile, at the tip of the ovipositor, and the eggs are fertilized during oviposition. The males of some species offer a secretion (nuptial gift) from their chelicerae to the female before copulation. Sometimes, the male guards the female after copulation, and in many species, the males defend territories. In some species, males also exhibit post-copulatory behavior in which the male specifically seeks out and shakes the female's sensory leg. This is believed to entice the female into mating a second time.
The female lays her eggs anywhere from shortly after mating to several months later. Some species build nests for this purpose. A unique feature of harvestmen is that some species practice parental care, in which the male is solely responsible for guarding the eggs resulting from multiple partners, often against egg-eating females, and cleaning the eggs regularly. Paternal care has evolved at least three times independently: once in the clade Progonyleptoidellinae + Caelopyginae, once in the Gonyleptinae, and once in the Heteropachylinae. Maternal care in Opiliones probably evolved through natural selection, while paternal care appears to be the result of sexual selection. Depending on circumstances such as temperature, the eggs may hatch at any time after the first 20 days, up to about half a year after being laid. Harvestmen variously pass through four to eight nymphal instars to reach maturity, with most known species having six instars.
Most species are nocturnal and colored in hues of brown, although a number of diurnal species are known, some of which have vivid patterns in yellow, green, and black with varied reddish and blackish mottling and reticulation.
Many species of harvestmen easily tolerate members of their own species, with aggregations of many individuals often found at protected sites near water. These aggregations may number 200 individuals in the Laniatores, and more than 70,000 in certain Eupnoi. Gregarious behavior is likely a strategy against climatic odds, but also against predators, combining the effect of scent secretions, and reducing the probability of any particular individual being eaten.
Harvestmen clean their legs after eating by drawing each leg in turn through their jaws.
Antipredator defences
Predators of harvestmen include a variety of animals, including some mammals, amphibians, and other arachnids such as spiders and scorpions. Opiliones display a variety of primary and secondary defences against predation, ranging from morphological traits such as body armour to behavioral responses and chemical secretions. Some of these defences are restricted to specific groups of harvestmen.
Primary defences
Primary defences help the harvestmen avoid encountering a potential predator and include crypsis, aposematism, and mimicry.
Crypsis
Particular patterns or colour markings on harvestmen's bodies can reduce detection by disrupting the animals' outlines or providing camouflage. Markings on the legs can interrupt the leg outline and defeat recognition of leg proportions. Darker colourations and patterns function as camouflage when the animals remain motionless. Within the genus Leiobunum are multiple species with cryptic colouration that changes over ontogeny to match the microhabitat used at each life stage. Many species also camouflage their bodies by covering themselves with secretions and debris from the leaf litter of their environments. Some hard-bodied harvestmen have epizoic cyanobacteria and liverworts growing on their bodies, suggesting a potential camouflage benefit against large backgrounds that helps them avoid detection by diurnal predators.
Aposematism and mimicry
Some harvestmen have elaborate and brightly coloured patterns or appendages which contrast with the body colouration, potentially serving as an aposematic warning to potential predators. This mechanism is thought to be commonly used during daylight, when they could be easily seen by any predators.
Other harvestmen may exhibit mimicry, resembling other species' appearances. Some Gonyleptidae individuals that produce translucent secretions have orange markings on their carapaces; these may play an aposematic role by mimicking the colouration of the glandular emissions of two other quinone-producing species. Such Müllerian mimicry among Brazilian harvestmen that resemble one another could be explained by convergent evolution.
Secondary defences
Secondary defences allow harvestmen to escape from and survive an encounter with a predator after direct or indirect contact. They include thanatosis, freezing, bobbing, autotomy, fleeing, stridulation, retaliation, and chemical secretions.
Thanatosis
Some animals respond to attacks by simulating an apparent death to avoid either detection or further attacks. Arachnids such as spiders practise this mechanism when threatened or even to avoid being eaten by female spiders after mating. Thanatosis is used as a second line of defence when detected by a potential predator and is commonly observed within the Dyspnoi and Laniatores suborders, with individuals becoming rigid with legs either retracted or stretched.
Freezing
Freezing – or the complete halt of movement – has been documented in the family Sclerosomatidae. While this can mean an increased likelihood of immediate survival, it also leads to reduced food and water intake.
Bobbing
To deflect attacks and enhance escape, long-legged species from the Eupnoi suborder, commonly known as daddy long-legs, use two mechanisms. One is bobbing, in which individuals bounce their bodies, potentially confusing a predator and preventing it from pinpointing the exact location of the body. Bobbing can be especially deceptive in a large aggregation of individuals all trembling at the same time. Cellar spiders (Pholcidae), which are commonly mistaken for daddy long-legs (Opiliones), also exhibit this behavior when their webs are disturbed or even during courtship.
Autotomy
Autotomy is the voluntary amputation of an appendage and is employed to escape when restrained by a predator. Eupnoi individuals, more specifically sclerosomatid harvestmen, commonly use this strategy in response to being captured. This strategy can be costly because harvestmen do not regenerate their legs, and leg loss reduces locomotion, speed, climbing ability, sensory perception, food detection, and territoriality.
Autotomised legs provide a further defence from predators because they can twitch for anywhere from 60 seconds to an hour after detachment. This can also deflect an attack, diverting the predator from the animal itself, and has been shown to be successful against ants and spiders.
The legs continue to twitch after they are detached because 'pacemakers' located in the ends of the first long segment (femur) of the legs send signals via the nerves to the muscles to extend the leg, which then relaxes between signals. While some harvestmen's legs twitch for a minute, others have been recorded twitching for up to an hour. The twitching has been hypothesised to serve as an evolutionary advantage by keeping the attention of a predator while the harvestman escapes.
Fleeing
Individuals that are able to detect potential threats can flee rapidly from attack. This is seen with multiple long-legged species in the Leiobunum clade that either drop and run, or drop and remain motionless. This is also seen when disturbing an aggregation of multiple individuals, where they all scatter.
Stridulation
Multiple species within the Laniatores and Dyspnoi possess stridulating organs, which are used for intraspecific communication and have also been shown to serve as a second line of defence when the animal is restrained by a predator.
Retaliation
Armoured harvestmen in Laniatores can often use their modified morphology as weapons. Many have spines on their pedipalps, back legs, or bodies. By pinching with their chelicerae and pedipalps, they can injure a potential predator. This has been shown to increase survival against recluse spiders: the injury inflicted allows the harvestman to escape predation.
Chemical
Harvestmen are well known for being chemically protected. They exude strong-smelling secretions from their scent glands (ozopores) that act as a shield against predators; this is their most effective defence, producing a strong and unpleasant taste. The chemistry varies by group: in Cyphophthalmi the scent glands release naphthoquinones, chloro-naphthoquinones, and aliphatic methyl ketones; Insidiatores use nitrogen-containing substances, terpenes, aliphatic ketones, and phenolics; Grassatores use alkylated phenolics and benzoquinones; and Palpatores use substances such as naphthoquinones and methyl and ethyl ketones. These secretions have successfully protected harvestmen against wandering spiders (Ctenidae), wolf spiders (Lycosidae), and Formica exsectoides ants. However, the chemical irritants did not prevent four species of harvestmen from being preyed upon by the black scorpion Bothriurus bonariensis (Bothriuridae). The secretions contain multiple volatile compounds that vary among individuals and clades.
Endangered status
All troglobitic species (of all animal taxa) are considered to be at least threatened in Brazil. Four species of Opiliones are on the Brazilian national list of endangered species, all of them cave-dwelling: Giupponia chagasi, Iandumoema uai, Pachylospeleus strinatii and Spaeleoleptes spaeleus.
Several Opiliones in Argentina appear to be vulnerable, if not endangered. These include Pachyloidellus fulvigranulatus, found only on top of Cerro Uritorco, the highest peak in the Sierras Chicas chain (Córdoba Province), and Pachyloides borellii, found in rainforest patches in northwest Argentina that are being dramatically destroyed by humans. The cave-living Picunchenops spelaeus is apparently endangered through human action. So far, no harvestman has been included in any kind of Red List in Argentina, so they receive no protection.
Maiorerus randoi has only been found in one cave in the Canary Islands. It is included in the Catálogo Nacional de especies amenazadas (National catalog of threatened species) from the Spanish government.
Texella reddelli and Texella reyesi are listed as endangered species in the United States. Both are from caves in central Texas. Texella cokendolpheri from a cave in central Texas and Calicina minor, Microcina edgewoodensis, Microcina homi, Microcina jungi, Microcina leei, Microcina lumi, and Microcina tiburona from around springs and other restricted habitats of central California are being considered for listing as endangered species, but as yet receive no protection.
Misconception
An urban legend claims that the harvestman is the most venomous animal in the world but possesses fangs too short or a mouth too round and small to bite a human, rendering it harmless (the same myth applies to Pholcus phalangioides and the crane fly, which are both also called a "daddy longlegs"). None of the known species of harvestmen have venom glands; their chelicerae are not hollowed fangs but grasping claws that are typically very small and not strong enough to break human skin.
Research
Harvestmen are a scientifically neglected group. Description of new taxa has always been dependent on the activity of a few dedicated taxonomists. Carl Friedrich Roewer described about a third (2,260) of today's known species from the 1910s to the 1950s, and published the landmark systematic work (Harvestmen of the World) in 1923, with descriptions of all species known to that time. Other important taxonomists in this field include:
Pierre André Latreille (18th century)
Carl Ludwig Koch, Maximilian Perty (1830s–1850s)
L. Koch, Tord Tamerlan Teodor Thorell (1860s–1870s)
Eugène Simon, William Sørensen (1880s–1890s)
James C. Cokendolpher, Raymond Forster, Clarence and Marie Goodnight, Jürgen Gruber, Reginald Frederick Lawrence, Jochen Martens, Cândido Firmino de Mello-Leitão (20th century)
Gonzalo Giribet, Adriano Brilhante Kury, Tone Novak (21st century)
Since the 1990s, study of the biology and ecology of harvestmen has intensified, especially in South America.
Early work on the developmental biology of Opiliones from the mid-20th century was resurrected by Prashant P. Sharma, who established Phalangium opilio as a model system for the study of arachnid comparative genomics and evolutionary-developmental biology.
Phylogeny
Harvestmen are ancient arachnids. Fossils from the Devonian Rhynie chert, 410 million years ago, already show characteristics like tracheae and sexual organs, indicating that the group has lived on land since that time. Despite being similar in appearance to, and often confused with, spiders, they are probably closely related to the scorpions, pseudoscorpions, and solifuges; these four orders form the clade Dromopoda. The Opiliones have remained almost unchanged morphologically over a long period. Indeed, one species discovered in China, Mesobunus martensi, fossilized by fine-grained volcanic ash around 165 million years ago, is scarcely distinguishable from modern-day harvestmen and has been placed in the extant family Sclerosomatidae.
Etymology
The Swedish naturalist and arachnologist Carl Jakob Sundevall (1801–1875) honored the naturalist Martin Lister (1638–1712) by adopting Lister's term Opiliones for this order, known in Lister's day as "harvest spiders" or "shepherd spiders", from Latin opilio, "shepherd". Lister characterized three species from England, although he did not formally describe them, as his work was pre-Linnaean.
In England, the Opiliones are called harvestmen, not because they appear at that season, but from a superstitious belief that if one is killed there will be a bad harvest that year.
Systematics
The interfamilial relationships within Opiliones are not yet fully resolved, although significant strides have been made in recent years to determine these relationships. The following list is a compilation of interfamilial relationships recovered from several recent phylogenetic studies, although the placement and even monophyly of several taxa are still in question.
Suborder Cyphophthalmi Simon, 1879 (about 200 species)
Infraorder Boreophthalmi Giribet, 2012
Family Sironidae Simon, 1879
Family Stylocellidae Hansen & Sørensen, 1904
Infraorder Scopulophthalmi Giribet, 2012
Family Pettalidae Shear, 1980
Infraorder Sternophthalmi Giribet, 2012
Family Troglosironidae Shear, 1993
Superfamily Ogoveoidea Shear, 1980
Family Neogoveidae Shear, 1980
Family Ogoveidae Shear, 1980
Infraorder (indet).
Family Parasironidae Karaman, Mitov & Snegovaya, 2024
Suborder Eupnoi Hansen & Sørensen, 1904 (about 1,800 species)
Superfamily Caddoidea Banks, 1892
Family Caddidae Banks, 1892
Superfamily Phalangioidea Latreille, 1802
Family Globipedidae Kury & Cokendolpher, 2020
Family Neopilionidae Lawrence, 1931
Family Phalangiidae Latreille, 1802
Family Protolophidae Banks, 1893
Family Sclerosomatidae Simon, 1879
Suborder Dyspnoi Hansen & Sørensen, 1904 (about 400 species)
Superfamily Acropsopilionoidea Roewer, 1923
Family Acropsopilionidae Roewer, 1923
Superfamily Ischyropsalidoidea Simon, 1879
Family Ischyropsalididae Simon, 1879
Family Sabaconidae Dresco, 1970
Family Taracidae Schönhofer, 2013
Superfamily Troguloidea Sundevall, 1833
Family Dicranolasmatidae Simon, 1879
Family Nemastomatidae Simon, 1872
Family Nipponopsalididae Martens, 1976
Family Trogulidae Sundevall, 1833
Suborder Laniatores Thorell, 1876 (about 4,200 species)
Infraorder Insidiatores Loman, 1900
Superfamily Travunioidea Absolon & Kratochvil, 1932
Family Cladonychiidae Hadži, 1935
Family Cryptomastridae Derkarabetian & Hedin, 2018
Family Paranonychidae Briggs, 1971
Family Travuniidae Absolon & Kratochvil, 1932
Superfamily Triaenonychoidea Sørensen, 1886
Family Buemarinoidae Karaman, 2019
Family Lomanellidae Mendes & Derkarabetian, 2021
Family Synthetonychiidae Forster, 1954
Family Triaenonychidae Sørensen, 1886
Infraorder Grassatores Kury, 2002
Superfamily Assamioidea Sørensen, 1884
Family Assamiidae Sørensen, 1884
Family Pyramidopidae Sharma and Giribet, 2011
Family Suthepiidae Martens, 2020
Family Trionyxellidae Roewer, 1912
Superfamily Epedanoidea Sørensen, 1886
Family Epedanidae Sørensen, 1886
Family Petrobunidae Sharma and Giribet, 2011
Family Podoctidae Roewer, 1912
Family Tithaeidae Sharma and Giribet, 2011
Superfamily Gonyleptoidea Sundevall, 1833
Family Agoristenidae Šilhavý, 1973
Family Ampycidae Kury, 2003
Family Askawachidae Kury & Carvalho, 2020
Family Cosmetidae Koch, 1839
Family Cranaidae Roewer, 1913
Family Cryptogeobiidae Kury, 2014
Family Gerdesiidae Bragagnolo, 2015
Family Gonyleptidae Sundevall, 1833
Family Manaosbiidae Roewer, 1943
Family Metasarcidae Kury, 1994
Family Nomoclastidae Roewer, 1943
Family Otilioleptidae Acosta, 2019
Family Prostygnidae Roewer, 1913
Family Stygnidae Simon, 1879
Family Stygnopsidae Sørensen, 1932
Superfamily Phalangodoidea Simon, 1879
Family Phalangodidae Simon, 1879
Superfamily Samooidea Sørensen, 1886
Family Biantidae Thorell, 1889
Family Samoidae Sørensen, 1886
Family Stygnommatidae Roewer, 1923
Superfamily Sandokanoidea Özdikmen & Kury, 2007
Family Sandokanidae Özdikmen & Kury, 2007
Superfamily Zalmoxoidea Sørensen, 1886
Family Escadabiidae Kury & Pérez, 2003
Family Fissiphalliidae Martens, 1988
Family Guasiniidae Gonzalez-Sponga, 1997
Family Icaleptidae Kury & Pérez, 2002
Family Kimulidae Pérez González, Kury & Alonso-Zarazaga, 2007
Family Zalmoxidae Sørensen, 1886
The family Stygophalangiidae (one species, Stygophalangium karamani) from underground waters in North Macedonia is sometimes misplaced in the Phalangioidea. It is not a harvestman.
Fossil record
Despite their long history, few harvestman fossils are known. This is mainly due to their delicate body structure and terrestrial habitat, making them unlikely to be found in sediments. As a consequence, most known fossils have been preserved within amber.
The oldest known harvestman, from the 410-million-year-old Devonian Rhynie chert, displayed almost all the characteristics of modern species, placing the origin of harvestmen in the Silurian, or even earlier. A recent molecular study of Opiliones, however, dated the origin of the order at about 473 million years ago (Mya), during the Ordovician.
No fossils of the Cyphophthalmi or Laniatores much older than 50 million years are known, despite the former representing a basal clade, and the latter having probably diverged from the Dyspnoi more than 300 Mya.
Naturally, most finds are from comparatively recent times. More than 20 fossil species are known from the Cenozoic, three from the Mesozoic, and at least seven from the Paleozoic.
Paleozoic
The 410-million-year-old Eophalangium sheari is known from two specimens, one a female, the other a male. The female bears an ovipositor and is about long, while the male bears a discernible penis. Whether both specimens belong to the same species is not definitely known. They have long legs, tracheae, and no median eyes. Together with the 305-million-year-old Hastocularis argus, it forms the suborder Tetrophthalmi, which was thought to form the sister group to Cyphophthalmi. However, recent reanalysis of harvestman phylogeny has shown that E. sheari and H. argus are in fact members of the suborder Eupnoi, after it was discovered that living daddy-longlegs have the same arrangement of eyes as the fossils.
Brigantibunum listoni from East Kirkton near Edinburgh in Scotland is almost 340 million years old. Its placement is rather uncertain, apart from it being a harvestman.
From about 300 Mya, several finds are from the Coal Measures of North America and Europe. While the two described Nemastomoides species are currently grouped as Dyspnoi, they look more like Eupnoi.
Kustarachne tenuipes was shown in 2004 to be a harvestman, after residing for almost one hundred years in its own arachnid order, the "Kustarachnida".
Some fossils from the Permian are possibly harvestmen, but these are not well preserved.
Described species
Eophalangium sheari Dunlop, 2004 (Tetrophthalmi) — Early Devonian (Rhynie, Scotland)
Brigantibunum listoni Dunlop, 2005 (Eupnoi?) — Early Carboniferous (East Kirkton, Scotland)
Echinopustulus samuelnelsoni Dunlop, 2004 (Dyspnoi?) — Upper Carboniferous (Western Missouri, U.S.)
Eotrogulus fayoli Thevenin, 1901 (Dyspnoi: † Eotrogulidae) — Upper Carboniferous (Commentry, France)
Hastocularis argus Garwood, 2014 (Tetrophthalmi) — Upper Carboniferous (Montceau-les-Mines, France)
Kustarachne longipes (Petrunkevitch, 1913) (Eupnoi) — Upper Carboniferous (Mazon Creek, U.S.)
Kustarachne tenuipes Scudder, 1890 (Eupnoi) — Upper Carboniferous (Mazon Creek, U.S.)
Nemastomoides elaveris Thevenin, 1901 (Dyspnoi: † Nemastomoididae) — Upper Carboniferous (Commentry, France)
Nemastomoides longipes Petrunkevitch, 1913 (Dyspnoi: † Nemastomoididae) — Upper Carboniferous (Mazon Creek, U.S.)
Mesozoic
Burmalomanius circularis Bartel et al., 2023 (Podoctidae) — Myanmar, Burmese amber (Cenomanian)
Petroburma tarsomeria Bartel et al., 2023 (Petrobunidae) — Myanmar, Burmese amber (Cenomanian)
Mesodibunus tourinhoae Bartel et al., 2023 (Epedanidae) — Myanmar, Burmese amber (Cenomanian)
Insidiatores indet. Bartel et al., 2023 — Myanmar, Burmese amber (Cenomanian)
Bartel et al. (2023) report further species as well: "These new records bring the total number of Burmese amber laniatorean species to ten"
Halitherses grimaldii, a long-legged Dyspnoi with large eyes, was found in Burmese amber dating from approximately 100 Mya. It has been suggested that this may be related to the Ortholasmatinae (Nemastomatidae).
Currently, no fossil harvestmen are known from the Triassic. So far, they are also absent from the Lower Cretaceous Crato Formation of Brazil, a Lagerstätte that has yielded many other terrestrial arachnids. An unnamed long-legged harvestman was reported from the Early Cretaceous of Koonwarra, Victoria, Australia, which may be a Eupnoi.
Cenozoic
Unless otherwise noted, all species are from the Eocene.
Trogulus longipes Haupt, 1956 (Dyspnoi: Trogulidae) — Geiseltal, Germany
Philacarus hispaniolensis (Laniatores: Samoidae?) — Dominican amber
Kimula species (Laniatores: Kimulidae) — Dominican amber
Hummelinckiolus silhavyi Cokendolpher & Poinar, 1998 (Laniatores: Samoidae) — Dominican amber
Caddo dentipalpis (Eupnoi: Caddidae) — Baltic amber
Dicranopalpus ramiger (Koch & Berendt, 1854) (Eupnoi: Phalangiidae) — Baltic amber
Opilio ovalis (Eupnoi: Phalangiidae?) — Baltic amber
Cheiromachus coriaceus Menge, 1854 (Eupnoi: Phalangiidae?) — Baltic amber
Leiobunum longipes (Eupnoi: Sclerosomatidae) — Baltic amber
Histricostoma tuberculatum (Dyspnoi: Nemastomatidae) — Baltic amber
Mitostoma denticulatum (Dyspnoi: Nemastomatidae) — Baltic amber
Nemastoma incertum (Dyspnoi: Nemastomatidae) — Baltic amber
Sabacon claviger (Dyspnoi: Sabaconidae) — Baltic amber
Petrunkevitchiana oculata (Petrunkevitch, 1922) (Eupnoi: Phalangioidea) — Florissant Fossil Beds National Monument, USA (Oligocene)
Proholoscotolemon nemastomoides (Laniatores: Cladonychiidae) — Baltic amber
Siro platypedibus (Cyphophthalmi: Sironidae) — Bitterfeld amber
Amauropilio atavus (Cockerell, 1907) (Eupnoi: Sclerosomatidae) — Florissant, USA (Oligocene)
Amauropilio lacoei (A. lawei?) (Petrunkevitch, 1922) — Florissant, USA (Oligocene)
Pellobunus proavus Cokendolpher, 1987 (Laniatores: Samoidae) — Dominican amber
Phalangium species (Eupnoi: Phalangiidae) — near Rome, Italy (Quaternary)
192232 | https://en.wikipedia.org/wiki/Scaffolding | Scaffolding

Scaffolding, also called scaffold or staging, is a temporary structure used to support a work crew and materials to aid in the construction, maintenance and repair of buildings, bridges and all other human-made structures. Scaffolds are widely used on site to get access to heights and areas that would be otherwise hard to get to. Unsafe scaffolding has the potential to result in death or serious injury. Scaffolding is also used in adapted forms for formwork and shoring, grandstand seating, concert stages, access/viewing towers, exhibition stands, ski ramps, half pipes and art projects.
There are six main types of scaffolding used worldwide today. These are tube and coupler (fitting) components, prefabricated modular system scaffold components, H-frame / façade modular system scaffolds, suspended scaffolds, timber scaffolds and bamboo scaffolds (particularly in China, India and Hong Kong). Each type is made from several components which often include:
A base jack or plate which is a load-bearing base for the scaffold.
The standard, the upright component with connector joins.
The ledger, a horizontal brace.
The transom, a horizontal cross-section load-bearing component which holds the batten, board, or decking unit.
Brace diagonal and/or cross section bracing component.
Batten or board decking component used to make the working platform.
Coupler, a fitting used to join components together.
Scaffold tie, used to tie in the scaffold to structures.
Brackets, used to extend the width of working platforms.
Specialized components used to aid in their use as a temporary structure often include heavy-duty load-bearing transoms, ladders or stairway units for ingress and egress, beams (ladder and unit types) used to span obstacles, and rubbish chutes used to remove unwanted materials from the scaffold or construction project.
History
Stone Age
Sockets in the walls around the paleolithic cave paintings at Lascaux suggest that a scaffold system was used for painting the ceiling over 17,000 years ago.
Antiquity
The Berlin Foundry Cup depicts scaffolding in ancient Greece (early 5th century BC). Egyptians, Nubians and Chinese are also recorded as having used scaffolding-like structures to build tall buildings. Early scaffolding was made of wood and secured with rope knots.
Modern era
Scaffolding was erected by individual firms with wildly varying standards and sizes. The process was revolutionized by Daniel Palmer Jones and David Henry Jones. Modern day scaffolding standards, practices and processes can be attributed to these men and their companies: Rapid Scaffold Tie Company Ltd, Tubular Scaffolding Company and Scaffolding Great Britain Ltd (SGB).
David Henry Jones and Daniel Palmer Jones patented the "Scaffixer" in either 1907 or 1910, a coupling device far more robust than rope, which revolutionized scaffolding construction. In 1913, their company was commissioned for the reconstruction of Buckingham Palace, during which the Scaffixer gained much publicity. Palmer Jones followed this up with the improved "Universal Coupler" in 1919; this soon became the industry-standard coupling and has remained so to this day.
Advancements in metallurgy throughout the early 20th century saw the introduction of tubular steel water pipes (instead of timber poles) with standardized dimensions, allowing for the industrial interchangeability of parts and improving the structural stability of the scaffold. The use of diagonal bracings also helped to improve stability, especially on tall buildings. The first frame system was brought to market by SGB in 1944 and was used extensively for the postwar reconstruction.
Today
The European Standard, BS EN 12811-1, specifies performance requirements and methods of structural and general design for access and working scaffolds. Requirements given are for scaffold structures that rely on the adjacent structures for stability. In general these requirements also apply to other types of working scaffolds.
The purpose of a working scaffold is to provide a safe working platform and access suitable for work crews to carry out their work. The European Standard sets out performance requirements for working scaffolds. These are substantially independent of the materials of which the scaffold is made. The standard is intended to be used as the basis for enquiry and design.
Materials
The basic components of scaffolding are tubes, couplers and boards.
The basic lightweight tube scaffolding that revolutionised the industry, becoming the standard and the baseline for decades, was invented and marketed in the mid-1950s. With one basic 24-pound unit, a scaffold of various sizes and heights could be assembled easily by a couple of labourers without the nuts or bolts previously needed.
Tubes are usually made either of steel or aluminium. Composite scaffolding uses filament-wound tubes of glass fibre in a nylon or polyester matrix. Because of the high cost of composite tube, it is usually only used when there is a risk from overhead electric cables that cannot be isolated. Steel tubes are either 'black' or galvanised. The tubes come in a variety of lengths and a standard outside diameter of 48.3 mm (1.5 NPS pipe). The chief difference between the two types of metal tube is weight: aluminium tube is much lighter (1.7 kg/m as opposed to 4.4 kg/m for steel), but it is more flexible and has a lower resistance to stress. Tubes are generally bought in 6.3 m lengths and can then be cut down to certain typical sizes. Most large companies will brand their tubes with their name and address in order to deter theft.
Boards provide a working surface for scaffold users. They are seasoned wood and come in three thicknesses (38 mm (usual), 50 mm and 63 mm), a standard width (225 mm), and a maximum length of 3.9 m. The board ends are protected either by metal plates called hoop irons or sometimes nail plates, which often have the company name stamped into them. Timber scaffold boards in the UK should comply with the requirements of BS 2482. As well as timber, steel or aluminium decking is used, as are laminate boards. In addition to the boards for the working platform, there are sole boards, which are placed beneath the scaffolding if the surface is soft or otherwise suspect, although ordinary boards can also be used. Another solution, called a scaffpad, is made from a rubber base with a base plate moulded inside; these are desirable for use on uneven ground since they adapt, whereas sole boards may split and have to be replaced.
Couplers are the fittings which hold the tubes together. The most common are called scaffold couplers, and there are three basic types: right-angle couplers, putlog couplers and swivel couplers. To join tubes end-to-end joint pins (also called spigots) or sleeve couplers are used. Only right angle couplers and swivel couplers can be used to fix tube in a 'load-bearing connection'. Single couplers are not load-bearing couplers and have no design capacity.
Other common scaffolding components include base plates, ladders, ropes, anchor ties, reveal ties, gin wheels, sheeting, etc.
Most companies will adopt a specific colour to paint the scaffolding with, in order that quick visual identification can be made in case of theft. All components that are made from metal can be painted but items that are wooden should never be painted as this could hide defects.
Despite the metric measurements given, many scaffolders measure tubes and boards in imperial units, with tubes from 21 feet down and boards from 13 ft down.
Bamboo scaffolding is widely used in Hong Kong and Macau, with nylon straps tied into knots as couplers. In India, bamboo or other wooden scaffolding is also mostly used, with poles being lashed together using ropes made from coconut hair (coir).
Basic scaffolding
The key elements of the scaffolding are the standard, ledger and transoms. The standards, also called uprights, are the vertical tubes that transfer the entire weight of the structure to the ground, where they rest on a square base plate to spread the load. The base plate has a shank in its centre to hold the tube and is sometimes pinned to a sole board. Ledgers are horizontal tubes which connect between the standards. Transoms rest upon the ledgers at right angles. Main transoms are placed next to the standards; they hold the standards in place and provide support for boards. Intermediate transoms are placed between the main transoms to provide extra support for boards. In Canada this style is referred to as "English"; "American" has the transoms attached to the standards and is used less, but has certain advantages in some situations.
As well as the tubes at right angles, there are cross braces to increase rigidity. These are placed diagonally from ledger to ledger, next to the standards to which they are fitted. If the braces are fitted to the ledgers they are called ledger braces. To limit sway, a facade brace is fitted to the face of the scaffold every 30 metres or so, at an angle of 35°–55°, running from the base to the top of the scaffold and fixed at every level.
Of the couplers previously mentioned, right-angle couplers join ledgers or transoms to standards, and putlog or single couplers join board-bearing transoms to ledgers; non-board-bearing transoms should be fixed using a right-angle coupler. Swivel couplers are used to connect tubes at any other angle. The actual joints are staggered so that they do not occur at the same level in neighbouring standards.
The spacings of the basic elements in the scaffold are fairly standard. For a general purpose scaffold the maximum bay length is 2.1 m, for heavier work the bay size is reduced to 2 or even 1.8 m while for inspection a bay width of up to 2.7 m is allowed.
The scaffolding width is determined by the width of the boards. The minimum width allowed is 600 mm, but a more typical four-board scaffold would be 870 mm wide from standard to standard. More heavy-duty scaffolding can require 5, 6 or even up to 8 boards in width. Often an inside board is added to reduce the gap between the inner standard and the structure.
The lift height, the spacing between ledgers, is 2 m, although the base lift can be up to 2.7 m. The diagram above also shows a kicker lift, which is just 150 mm or so above the ground.
Transom spacing is determined by the thickness of the boards supported: 38 mm boards require a transom spacing of no more than 1.2 m, while 50 mm boards can stand a transom spacing of 2.6 m, and 63 mm boards can have a maximum span of 3.25 m. The minimum overhang for all boards is 50 mm, and the maximum overhang is no more than 4× the thickness of the board.
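These board rules amount to a lookup table plus one arithmetic rule. The sketch below (a hypothetical helper, with values taken from the figures above) returns the limits for a given board thickness:

```python
# Maximum transom spacing (m) by board thickness (mm), per the figures above.
MAX_TRANSOM_SPACING_M = {38: 1.2, 50: 2.6, 63: 3.25}
MIN_OVERHANG_MM = 50  # minimum overhang for all boards

def board_limits(thickness_mm: int) -> dict:
    """Span and overhang limits for a standard scaffold board."""
    if thickness_mm not in MAX_TRANSOM_SPACING_M:
        raise ValueError(f"non-standard board thickness: {thickness_mm} mm")
    return {
        "max_transom_spacing_m": MAX_TRANSOM_SPACING_M[thickness_mm],
        "min_overhang_mm": MIN_OVERHANG_MM,
        # the maximum overhang is four times the board thickness
        "max_overhang_mm": 4 * thickness_mm,
    }
```

For example, a standard 38 mm board may span at most 1.2 m between transoms and overhang at most 152 mm.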
Foundations
Good foundations are essential. Often scaffold frameworks will require more than simple base plates to safely carry and spread the load. Scaffolding can be used without base plates on concrete or similar hard surfaces, although base plates are always recommended; for surfaces like pavements or tarmac, base plates are necessary. For softer or more doubtful surfaces sole boards must be used; beneath a single standard, a sole board should be at least with no dimension less than , and the thickness must be at least . For heavier-duty scaffolds, much more substantial baulks set in concrete can be required. On uneven ground, steps must be cut for the base plates; a minimum step size of around is recommended. A working platform requires certain other elements to be safe: it must be close-boarded and have double guard rails and toe and stop boards. Safe and secure access must also be provided.
Ties
Scaffolds are only rarely independent structures. To provide stability for a scaffolding framework, ties are generally fixed to the adjacent building, fabric, or steelwork.
General practice is to attach a tie every 4 m on alternate lifts (traditional scaffolding). Prefabricated System scaffolds require structural connections at all frames - i.e. 2–3 m centres (tie patterns must be provided by the System manufacturer/supplier). The ties are coupled to the scaffold as close to the junction of standard and ledger (node point) as possible. Due to recent regulation changes, scaffolding ties must support +/- loads (tie/butt loads) and lateral (shear) loads.
Because structures differ, a variety of tie types is used to take advantage of the opportunities each presents.
Through ties are put through structure openings such as windows. A vertical inside tube crossing the opening is attached to the scaffold by a transom and a crossing horizontal tube on the outside called a bridle tube. The gaps between the tubes and the structure surfaces are packed or wedged with timber sections to ensure a solid fit.
Box ties are used to attach the scaffold to suitable pillars or comparable features. Two additional transoms are put across from the lift on each side of the feature and are joined on both sides with shorter tubes called tie tubes. When a complete box tie is impossible, an L-shaped lip tie can be used to hook the scaffold to the structure; to limit inward movement, an additional transom, a butt transom, is placed hard against the outside face of the structure.
Sometimes it is possible to use anchor ties (also called bolt ties), these are ties fitted into holes drilled in the structure. A common type is a ring bolt with an expanding wedge which is then tied to a node point.
The least 'invasive' tie is a reveal tie. These use an opening in the structure, but with a tube wedged horizontally in the opening. The reveal tube is usually held in place by a reveal screw pin (an adjustable threaded bar) and protective packing at either end. A transom tie tube links the reveal tube to the scaffold. Reveal ties are not well regarded: they rely solely on friction and need regular checking, so it is not recommended that more than half of all ties be reveal ties.
If it is not possible to use a safe number of ties rakers can be used. These are single tubes attached to a ledger extending out from the scaffold at an angle of less than 75° and securely founded. A transom at the base then completes a triangle back to the base of the main scaffold.
Bamboo scaffolding
Bamboo scaffolding is a type of scaffolding made from bamboo and widely used in construction work for centuries. Many famous landmarks, notably The Great Wall of China, were built using bamboo scaffolding, and its use continues today in some parts of the world.
History
Bamboo scaffolding was first introduced into the building industry in Hong Kong immediately after colonization in the 1800s. It was widely used in the building of houses and multi-story buildings (up to four stories high) prior to the development of metal scaffolding. It was also useful for short-term construction projects, such as framework for temporary sheds for Cantonese Opera performances.
There are three types of scaffolding in Hong Kong:
Double-row scaffold;
Extended Bamboo scaffolding;
Shop signs of Bamboo Scaffolding.
Gradual decline
In 2013, there were 1,751 registered bamboo scaffolders and roughly 200 scaffolding companies in Hong Kong. The use of bamboo scaffolding is diminishing due to shortages of labor and material. In addition to these shortages, safety issues have recently become another serious concern.
The labor shortage may be due to the reluctance of younger generations to become scaffolders. "They even think that it’s a dirty and dangerous job. They are not going to do that kind of work," said Yu Hang Flord, who has been a scaffolder for 30 years and later became the director of Wui Fai Holdings, a member of the Hong Kong and Kowloon Scaffolders General Merchants Association. "They refuse to step in, although we give them high pay. They are scared of it. Young generations do not like jobs that involve hard work." Another reason fewer people are becoming scaffolders is that new recruits need to undergo training with the Hong Kong Construction Industry Council in order to acquire a license. Older scaffolders generally learned in apprenticeships, and may have been able to gather more hands-on experience.
Material shortages are also a contributing factor to the decline. The bamboo scaffolding material was imported from mainland China. Bamboo—which matures after three years to the wide diameter and thick skin perfect for scaffolding—came from the Shaoxing area in Guangdong. Over the past two decades, firms have had to look to Guangxi instead. The industry's fear is that one day supplies will be blocked due to export embargoes and environmental concerns. Attempts to import bamboo from Thailand, or switch to synthetic or plastic bamboo, have so far proved unsuccessful.
In many African countries, notably Nigeria, bamboo scaffolding is still used for small scale construction in urban areas. In rural areas, the use of bamboo scaffolding for construction is common. In fact, bamboo is an essential building and construction commodity in Nigeria; the bamboo materials are transported on heavy trucks and trailers from rural areas (especially the tropical rain forest) to cities and the northern part of Nigeria.
Some of the structures in relaxation and recreation centres, both in urban and rural areas of Nigeria, are put in place using bamboo materials. This is not for reasons of poverty (especially in the cities) but to add more aesthetics to these centres. Bamboo materials are still used in the construction of some bukas (local restaurants) in rural areas.
Specifications
Forms of bamboo scaffolding include:
Double-row Scaffold
Only double-row bamboo scaffolds are allowed to be used for working at height.
Nylon Mesh
The perimeter of a bamboo scaffold should be covered by nylon mesh to protect against falling objects. The lapping of the nylon mesh should be at least 100 mm wide.
Access and Egress
Suitable means of access, such as gangways, stairs, or ladders, should be provided from the building or ground level to the scaffold.
Catch Fan
Sloping catch fans shall be erected at a level close to the first floor and at vertical intervals of no more than 15 metres, and should give a minimum horizontal protection coverage of 1500 mm. Large catch fans should be erected at specific locations to protect the public and/or workers underneath.
Platform of Catch Fan or Receptacle
A suitable receptacle, covered with galvanized zinc sheet, should be provided within each catch-fan to trap falling objects.
Steel Bracket
Steel brackets shall be provided to support the standards of the scaffold at intervals of about six floors. The horizontal distance between steel brackets is about 3 metres.
Putlogs
Mild steel bars or similar materials are required on every floor to tie the scaffold to the structure and maintain the bamboo scaffold in position. The distance between adjacent putlogs is about 3 to 4 metres.
Working Platform
Every working platform must be at least 400 mm wide and closely boarded with planks. The edges of working platforms should be protected by no fewer than 2 horizontal bamboo members of the scaffold, at intervals of between 750 mm and 900 mm, and by suitable toe-boards no less than 200 mm high.
Special Scaffold
All scaffolds with a height in excess of 15 metres shall be designed by an Engineer.
Competent Examiner
A competent examiner should have completed formal training in bamboo scaffolding work or hold a trade test certificate in bamboo scaffolding, and have at least 10 years of relevant experience.
Trained Worker
A trained worker should have completed formal training in bamboo scaffolding work or hold a trade test certificate in bamboo scaffolding, and have at least 3 years of relevant experience.
Uses in construction
Bamboo scaffolding is a temporary structure to support people and materials when constructing or repairing building exteriors and interiors. In bamboo scaffolding, plastic fibre straps and bamboo shoots are bound together to form a solid and secure scaffold structure without screws. Bamboo scaffolding does not need to have a foundation on the ground, as long as the scaffolding has a fulcrum for structural support.
Bamboo scaffolding is mostly seen in developing Asian countries such as India, Bangladesh, Sri Lanka, and Indonesia.
Cultural use
Chinese opera theatres
Chinese Opera is one of the world's "Intangible Cultural Heritages". One of bamboo scaffolding's main alternative uses is in drama theatres. The flexibility and convenience of this type of scaffolding suits stages set up for temporary use and also separates the audience from the performers.
Respecting and promoting the traditional cultures of Chinese Opera, a huge event called the West Kowloon Bamboo Theatre has been held at the West Kowloon Waterfront Promenade annually since 2012.
Yu Lan Ghost Festival
Stages are built from bamboo scaffolding for the live Chinese operas and Chiu Chow–style dramas performed during every Yu Lan Ghost Festival to worship ghostly ancestors.
Cheung Chau Bun Festival
The bamboo tower used in the famous Bun Scrambling Competition during the Cheung Chau Bun Festival on the island of Cheung Chau is constructed out of bamboo scaffolding. Nine thousand buns, representing fortune and blessing, are supported on the fourteen-meter tall bamboo tower in front of the Pak Tai Temple. For the Piu Sik Parade, bamboo stands and racks are used to hold the young costumed performers above the crowds.
Specialty scaffolding
Types of scaffolding covered by the Occupational Safety and Health Administration (OSHA) in the United States include the following categories: pole; tube and coupler; fabricated frame (tubular welded frame scaffolds); plasterers’, decorators’, and large area scaffolds; bricklayers' (pipe); horse; form scaffolds and carpenters’ bracket scaffolds; roof brackets; outrigger; pump jacks; ladder jacks; window jacks; crawling boards (chicken ladders); step, platform, and trestle ladder scaffolds; single-point adjustable suspension; two-point adjustable suspension (swing stages); multipoint adjustable suspension; stonesetters’ multipoint adjustable suspension scaffolds, and masons’ multipoint adjustable suspension scaffolds; catenary; float (ship); interior hung; needle beam; multilevel suspended; mobile; repair bracket scaffolds; and stilts.
Gallery of scaffold types
Putlog scaffold
In addition to the putlog couplers (discussed above), there are also putlog tubes. These have a flattened end or have been fitted with a blade, which allows the end of the tube to be inserted into or rest upon the brickwork of the structure.
A putlog scaffold may also be called a bricklayer's scaffold. It consists of only a single row of standards with a single ledger. The putlogs are transoms, attached to the ledger at one end but built into the brickwork at the other.
Spacing is the same on a putlog scaffold as on a general purpose scaffold, and ties are still required.
In recent years a number of new innovations have meant an increased scope of use for scaffolding, such as ladderbeams for spanning spaces that cannot accommodate standards and the increased use of sheeting and structure to create temporary roofs.
Putlog tubes can also be used vertically, driven under downward pressure into the ground (most typically in greens and fields), with approximately a quarter of the putlog tube remaining exposed above ground. The purpose of this alternative method is to create a good anchoring point for additional vertical scaffolding to clamp on to. It is most commonly used at live events and festivals, with scaffold poles up to 21 feet high from which festoon lighting, cabling and bunting can be hung safely.
Pump-jack
A pump-jack is a type of portable scaffolding system. The scaffold rests on supports attached to two or more vertical posts. The user raises the scaffolding by pumping the foot pedals on the supports, like an automobile jack.
Baker staging
Baker staging is a metal scaffold which is easy to assemble. It consists of rolling platforms, typically wide by long, in tall sections which can be stacked up to three high with the use of added outriggers. The work platform height is adjustable.
X-Deck ladder scaffolding
X-Deck is a low-level, height-adjustable scaffold; it is a hybrid ladder/scaffold work platform.
Standards
The widespread use of scaffolding systems, along with the importance they have gained in modern applications such as civil engineering projects and temporary structures, has led to the definition of a series of standards covering a vast number of specific issues involving scaffolding. Among these standards are:
DIN 4420, a DIN standard divided into five parts, which covers the design and detail of scaffolds, ladder scaffolds, safety requirements and standard types, materials, components, dimensions and load-bearing capacity.
DIN 4421, a DIN standard which covers the analysis, design and construction of falsework
29 CFR Part 1926: Safety Standards for Scaffolds Used in the Construction Industry from the U.S. Occupational Safety and Health Administration (OSHA), with an accompanying "construction eTool"
192233 | https://en.wikipedia.org/wiki/Palpigradi | Palpigradi | Palpigradi is an order of very small arachnids commonly known as microwhip scorpions or palpigrades.
Description
Palpigrades belong to the arachnid class. They are the sister group to Solifugae, no more than in length, and averaging . They have a thin, pale, segmented integument, and a segmented abdomen that terminates in a whip-like flagellum. This is made up of 15 segment-like parts, or "articles", and may make up as much as half the animal's length. Each article of the flagellum bears bristles, giving the whole flagellum the appearance of a bottle brush. The carapace is divided into two plates between the third and fourth pairs of legs. They have no eyes.
As in some other arachnids, the first pair of legs is modified to serve as sensory organs, and is held clear of the ground while walking. Often, however, palpigrades use their pedipalps for locomotion, so that the animal appears to be walking on five pairs of legs. The pedipalps do not swing in phase with the walking legs, though, and are mostly used this way on rough terrain. Both the nine-segmented pedipalps and the four pairs of legs end in three claws each. The first pair of legs is 11-segmented, the second and third pairs seven-segmented and the fourth pair eight-segmented.
Species of the family Prokoeneniidae have three pairs of lung-sacs on the fourth, fifth and sixth abdominal segments, although these are not true book lungs, as there is no trace of the characteristic leaf-like lamellae that define book lungs. Species of the family Eukoeneniidae have no respiratory organs at all and breathe directly through the cuticle.
Their exoskeleton is very weakly sclerotized compared to those of other arachnids, which is why fossils are so rare; the oldest known go back no further than 99 million years, in Burmese amber.
Ecology and behavior
Species of Palpigradi live interstitially in wet tropical and subtropical soils. A few species have been found in shallow coral sands and on tropical beaches. In Europe, they have been found in caves and underground spaces. There is one endemic species on the island of Malta, in the Mediterranean Sea, which exists only in one specific cave. They need a damp environment to survive, and they always hide from light, so they are commonly found in the moist earth under buried stones and rocks. They can be found on every continent, except in Arctic and Antarctic regions. Terrestrial Palpigradi have hydrophobic cuticles, but littoral (beach-dwelling) species are able to pass through the water surface easily.
Very little is known about palpigrade behavior. They are generally believed to be predators like their larger relatives, feeding on minuscule animals in their habitat. However, their chelicerae have been described as "more like a comb or brush than the forceps of a predator", and the species Eukoenenia spelaea has been shown to feed on cyanobacteria ("blue-green algae"). Their mating habits are unknown, except that they lay only a few relatively large eggs at a time.
Classification
Palpigradi is split into two families, differentiated by the presence of ventral sacs on sternites IV–VI in Prokoeneniidae, and their absence in Eukoeneniidae.
Two fossil palpigrade species have been described. The first one is from the Onyx Marble of Arizona, which is probably of Pliocene age. Its familial position is uncertain. The second one (Electrokoenenia yaksha), belonging to the family Eukoeneniidae, is known from Cretaceous (Cenomanian) Burmese amber from northern Myanmar. Older publications refer to a fossil palpigrade (or palpigrade-like animal) from the Jurassic of the Solnhofen limestone in Germany, but this has now been shown to be a misidentified fossil insect.
Genera
, the World Palpigradi Catalog accepts the following eight genera:
Allokoenenia Silvestri, 1913
Eukoenenia Börner, 1901
Koeneniodes Silvestri, 1913
Leptokoenenia Condé, 1965
Prokoenenia Börner, 1901
Triadokoenenia Condé, 1991
†Electrokoenenia Engel & Huang, 2016
†Paleokoenenia Rowland & Sissom, 1980
192282 | https://en.wikipedia.org/wiki/Uropygi | Uropygi | Uropygi is an arachnid order comprising invertebrates commonly known as whip scorpions or vinegaroons (also spelled vinegarroons and vinegarones). They are often called uropygids. The name "whip scorpion" refers to their resemblance to true scorpions and possession of a whiplike tail, and "vinegaroon" refers to their ability when attacked to discharge an offensive, vinegar-smelling liquid, which contains acetic acid. The order may also be called Thelyphonida. Both names, Uropygi and Thelyphonida, may be used either in a narrow sense for the order of whip scorpions, or in a broad sense which includes the order Schizomida.
Taxonomy
Carl Linnaeus first described a whip scorpion in 1758, although he did not distinguish it from what are now regarded as different kinds of arachnid, calling it Phalangium caudatum. Phalangium is now used as a name for a genus of harvestmen (Opiliones). In 1802, Pierre André Latreille was the first to use a genus name solely for whip scorpions, namely Thelyphonus. Latreille later explained the name as meaning "who kills".
One name for the order, Thelyphonida, is based on Latreille's genus name. It was first used, as the French , by Latreille in 1804, and later by Octavius Pickard-Cambridge in 1872 (with the spelling Thelyphonidea).
The alternative name, Uropygi, was first used by Tamerlan Thorell in 1883. It means "tail rump", from the Ancient Greek words for "tail" and "rump", referring to the whip-like flagellum on the end of the pygidium, a small plate made up of the last three segments of the abdominal exoskeleton.
The classification and scientific name used for whip scorpions varies. Originally, Amblypygi (whip spiders), Uropygi and Schizomida (short-tailed whipscorpions) formed a single order of arachnids, Pedipalpi. Pedipalpi was later divided into two orders, Amblypygi and Uropygi (or Uropygida). Schizomida was then split off from Uropygi into a separate order. The remainder has either continued to be called by the same name, Uropygi, possibly distinguished as Uropygi sensu stricto, or called Thelyphonida. When the name Uropygi is used for the whip scorpions, the clade containing Uropygi and Schizomida may be called Thelyphonida, or Thelyphonida s.l. Conversely, when the name Thelyphonida is used for the whip scorpions alone, the parent clade may be called Uropygi, or Uropygi sensu lato. The table below summarizes the two usages. When the qualifications s.l. and s.s. are omitted, the names Uropygi and Thelyphonida are ambiguous.
Phylogenetic studies show the three groups, Amblypygi, Uropygi s.s. and Schizomida, to be closely related. The Uropygi s.s. and Schizomida likely diverged in the late Carboniferous, somewhere in the tropics of Pangaea.
Description
Whip scorpions range from in length, with most species having a body no longer than ; the largest species, of the genus Mastigoproctus, can reach . An extinct Mesoproctus from the Lower Cretaceous Crato Formation could be the same size. Because of their legs, claws, and "whip", though, they can appear much larger, and the heaviest specimen weighed was 12.4 grams (0.44 oz).
The opisthosoma consists of 12 segments. The first segment forms a pedicel, and each of the next eight segments has dorsal tergites. The last three segments are fused into closed rings that end with the flagellum, which is made up of 30–40 units.
Like the related orders Schizomida and Amblypygi, whip scorpions use only six legs for walking, with the first two legs serving as antennae-like sensory organs. All species also have very large scorpion-like pedipalps (pincers), with an additional large spine on each palpal tibia. They have one pair of median eyes at the front of the cephalothorax and up to five pairs of lateral eyes on each side of the head, a pattern also found in scorpions. Whip scorpions have no venom glands, but they have glands near the rear of their abdomen that can spray a combination of acetic acid and caprylic acid when they are bothered. The acetic acid gives this spray a vinegar-like smell, giving rise to the common name vinegaroon.
Behaviour
Whip scorpions are carnivorous, nocturnal hunters feeding mostly on insects, millipedes, scorpions, and terrestrial isopods, but sometimes on worms and slugs. Mastigoproctus sometimes preys on small vertebrates. The prey is crushed between special teeth on the inside of the trochanters (the second segment of the "legs") of the front appendages. They are valuable in controlling the population of cockroaches and crickets.
Males secrete a spermatophore (a united mass of sperm), which is transferred to the female following courtship behaviour, in which the male holds the ends of the female's first legs in his chelicerae. The spermatophore is deposited on the ground and picked up by the female using her genital area. In some genera, the male then uses his pedipalps to push the spermatophore into her body.
After a few months, the female will dig a large burrow and seal herself inside. Up to 40 eggs are extruded within a membranous broodsac that preserves moisture and remains attached to the genital operculum and the fifth segment of the mother's ventral opisthosoma. The female refuses to eat and holds her opisthosoma in an upward arch, so that the broodsac does not touch the ground, for the next few months as the eggs develop into postembryos, in which the appendages become visible.
The white young that hatch from the postembryos climb onto their mother's back and attach themselves there with special suckers. After the first moult, when they look like miniature adults but with bright red palps, they leave the burrow. The mother may live up to two more years. The young grow slowly, going through four moults in about four years before reaching adulthood. They live for up to another four years.
Distribution and habitat
Whip scorpions are found in tropical and subtropical areas, excluding Europe and Australia. Also, only a single species is known from Africa: Etienneus africanus, probably a Gondwana relict, endemic to Senegal, the Gambia and Guinea-Bissau.
They usually dig burrows with their pedipalps, to which they transport their prey. They may also burrow under logs, rotting wood, rocks, and other natural debris. They prefer humid, dark places and avoid light. Mastigoproctus giganteus, the giant whip scorpion, is found in more arid areas, including Arizona and New Mexico.
Subtaxa
As of 2023, the World Uropygi Catalog accepted the following 16 extant genera, all placed in the family Thelyphonidae:
Etienneus Heurtault, 1984
Ginosigma Speijer, 1936
Glyptogluteus Rowland, 1973
Hypoctonus Thorell, 1888
Labochirus Pocock, 1894
Mastigoproctus Pocock, 1894
Mayacentrum Viquez & Armas, 2006
Mimoscorpius Pocock, 1894
Ravilops Víquez & Armas, 2005
Sheylayongium Teruel, 2018
Thelyphonellus Pocock, 1894
Thelyphonoides Krehenwinkel, Curio, Tacud & Haupt, 2009
Thelyphonus Latreille, 1802
Typopeltis Pocock, 1894
Uroproctus Pocock, 1894
Valeriophonus Viquez & Armas, 2005
In addition, seven extinct genera were accepted, two within the family Thelyphonidae:
†Mesoproctus Dunlop, 1998
†Mesothelyphonus Cai & Huang, 2017
and five unplaced as to family:
†Burmathelyphonia Wunderlich, 2015
†Geralinura Scudder, 1884
†Inmontibusichnus Knecht, Benner, Dunlop & Renczkowski, 2023
†Parageralinura Tetlie & Dunlop, 2008
†Parilisthelyphonus Knecht, Benner, Dunlop & Renczkowski, 2023
†Proschizomus Dunlop & Horrocks, 1996
†Prothelyphonus Frič, 1904
192316 | https://en.wikipedia.org/wiki/Virtual%20particle | Virtual particle | A virtual particle is a theoretical transient particle that exhibits some of the characteristics of an ordinary particle, while having its existence limited by the uncertainty principle, which allows the virtual particles to spontaneously emerge from vacuum at short time and space ranges. The concept of virtual particles arises in the perturbation theory of quantum field theory (QFT) where interactions between ordinary particles are described in terms of exchanges of virtual particles. A process involving virtual particles can be described by a schematic representation known as a Feynman diagram, in which virtual particles are represented by internal lines.
Virtual particles do not necessarily carry the same mass as the corresponding ordinary particle, although they always conserve energy and momentum. The closer its characteristics come to those of ordinary particles, the longer the virtual particle exists. They are important in the physics of many processes, including particle scattering and Casimir forces. In quantum field theory, forces—such as the electromagnetic repulsion or attraction between two charges—can be thought of as resulting from the exchange of virtual photons between the charges. Virtual photons are the exchange particles for the electromagnetic interaction.
The term is somewhat loose and vaguely defined, in that it refers to the view that the world is made up of "real particles". "Real particles" are better understood to be excitations of the underlying quantum fields. Virtual particles are also excitations of the underlying fields, but are "temporary" in the sense that they appear in calculations of interactions, but never as asymptotic states or indices to the scattering matrix. The accuracy and use of virtual particles in calculations is firmly established, but as they cannot be detected in experiments, deciding how to precisely describe them is a topic of debate. Although widely used, they are by no means a necessary feature of QFT, but rather are mathematical conveniences — as demonstrated by lattice field theory, which avoids using the concept altogether.
Properties
The concept of virtual particles arises in the perturbation theory of quantum field theory, an approximation scheme in which interactions (in essence, forces) between actual particles are calculated in terms of exchanges of virtual particles. Such calculations are often performed using schematic representations known as Feynman diagrams, in which virtual particles appear as internal lines. By expressing the interaction in terms of the exchange of a virtual particle with four-momentum q, where q is given by the difference between the four-momenta of the particles entering and leaving the interaction vertex, both momentum and energy are conserved at the interaction vertices of the Feynman diagram.
A virtual particle does not precisely obey the energy–momentum relation E² = p²c² + m²c⁴. Its kinetic energy may not have the usual relationship to velocity. It can be negative. This is expressed by the phrase off mass shell. The probability amplitude for a virtual particle to exist tends to be canceled out by destructive interference over longer distances and times. As a consequence, a real photon is massless and thus has only two polarization states, whereas a virtual one, being effectively massive, has three polarization states.
Quantum tunnelling may be considered a manifestation of virtual particle exchanges. The range of forces carried by virtual particles is limited by the uncertainty principle, which regards energy and time as conjugate variables; thus, virtual particles of larger mass have more limited range.
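The mass–range tradeoff described above is often estimated with the Yukawa relation R ≈ ħ/(mc) = ħc/(mc²). The following Python sketch is an illustration only; the particle masses are standard reference values and are not taken from this article:

```python
# Estimate the range of a force mediated by a virtual particle of mass m,
# using the uncertainty-principle (Yukawa) estimate R ~ hbar*c / (m*c^2).
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm (standard reference value)

def yukawa_range_fm(mass_mev: float) -> float:
    """Approximate force range in femtometres for a mediator of the
    given rest energy (in MeV)."""
    return HBAR_C_MEV_FM / mass_mev

# Pion (~139.6 MeV) sets the nuclear-force scale of ~1.4 fm;
# the much heavier W boson (~80.4 GeV) gives the far shorter weak range.
print(f"pion-mediated range ~ {yukawa_range_fm(139.6):.2f} fm")
print(f"W-mediated range   ~ {yukawa_range_fm(80_400):.4f} fm")
```

By the same estimate, a massless mediator such as the photon gives an unbounded range, consistent with the long-range forces discussed in this article.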
Written in the usual mathematical notations, in the equations of physics, there is no mark of the distinction between virtual and actual particles. The amplitudes of processes with a virtual particle interfere with the amplitudes of processes without it, whereas for an actual particle the cases of existence and non-existence cease to be coherent with each other and do not interfere any more. In the quantum field theory view, actual particles are viewed as being detectable excitations of underlying quantum fields. Virtual particles are also viewed as excitations of the underlying fields, but appear only as forces, not as detectable particles. They are "temporary" in the sense that they appear in some calculations, but are not detected as single particles. Thus, in mathematical terms, they never appear as indices to the scattering matrix, which is to say, they never appear as the observable inputs and outputs of the physical process being modelled.
There are two principal ways in which the notion of virtual particles appears in modern physics. They appear as intermediate terms in Feynman diagrams; that is, as terms in a perturbative calculation. They also appear as an infinite set of states to be summed or integrated over in the calculation of a semi-non-perturbative effect. In the latter case, it is sometimes said that virtual particles contribute to a mechanism that mediates the effect, or that the effect occurs through the virtual particles.
Manifestations
There are many observable physical phenomena that arise in interactions involving virtual particles. For bosonic particles that exhibit rest mass when they are free and actual, virtual interactions are characterized by the relatively short range of the force interaction produced by particle exchange. Confinement can lead to a short range, too. Examples of such short-range interactions are the strong and weak forces, and their associated field bosons.
For the gravitational and electromagnetic forces, the zero rest-mass of the associated boson particle permits long-range forces to be mediated by virtual particles. However, in the case of photons, power and information transfer by virtual particles is a relatively short-range phenomenon (existing only within a few wavelengths of the field-disturbance, which carries information or transferred power), as for example seen in the characteristically short range of inductive and capacitative effects in the near field zone of coils and antennas.
Some field interactions which may be seen in terms of virtual particles are:
The Coulomb force (static electric force) between electric charges. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse square law for electric force. Since the photon has no mass, the Coulomb potential has an infinite range.
The magnetic field between magnetic dipoles. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space, this exchange results in the inverse cube law for magnetic force. Since the photon has no mass, the magnetic potential has an infinite range. Even though the range is infinite, the time lapse allowed for a virtual photon existence is not infinite.
Electromagnetic induction. This phenomenon transfers energy to and from a magnetic coil via a changing (electro)magnetic field.
The strong nuclear force between quarks is the result of interaction of virtual gluons. The residual of this force outside of quark triplets (neutron and proton) holds neutrons and protons together in nuclei, and is due to virtual mesons such as the pi meson and rho meson.
The weak nuclear force is the result of exchange by virtual W and Z bosons.
The spontaneous emission of a photon during the decay of an excited atom or excited nucleus; such a decay is prohibited by ordinary quantum mechanics and requires the quantization of the electromagnetic field for its explanation.
The Casimir effect, where the ground state of the quantized electromagnetic field causes attraction between a pair of electrically neutral metal plates.
The van der Waals force, which is partly due to the Casimir effect between two atoms.
Vacuum polarization, which involves pair production or the decay of the vacuum, which is the spontaneous production of particle-antiparticle pairs (such as electron-positron).
Lamb shift of positions of atomic levels.
The impedance of free space, which defines the ratio between the electric field strength E and the magnetic field strength H: Z₀ = |E| / |H| ≈ 376.73 Ω.
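As a numerical aside (the constants are standard reference values, not taken from this article), the impedance of free space can be computed in two equivalent ways, Z₀ = √(μ₀/ε₀) = μ₀c:

```python
import math

# Impedance of free space: Z0 = sqrt(mu0/eps0) = mu0 * c  (~376.73 ohms).
MU_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m (pre-2019 exact value)
EPS_0 = 8.8541878128e-12       # vacuum permittivity, F/m
C = 299_792_458                # speed of light, m/s (exact)

z0_from_ratio = math.sqrt(MU_0 / EPS_0)
z0_from_c = MU_0 * C
print(f"Z0 = {z0_from_ratio:.2f} ohm")  # both forms agree to rounding
```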
Much of the so-called near-field of radio antennas, where the magnetic and electric effects of the changing current in the antenna wire and the charge effects of the wire's capacitive charge may be (and usually are) important contributors to the total EM field close to the source; both, however, are dipole effects that decay with increasing distance from the antenna much more quickly than does the influence of "conventional" electromagnetic waves that are "far" from the source. These far-field waves, for which E is (in the limit of long distance) equal to cB, are composed of actual photons. Actual and virtual photons are mixed near an antenna, with the virtual photons responsible only for the "extra" magnetic-inductive and transient electric-dipole effects, which cause any imbalance between E and cB. As distance from the antenna grows, the near-field effects (as dipole fields) die out more quickly, and only the "radiative" effects that are due to actual photons remain as important effects. Although virtual effects extend to infinity, they drop off in field strength as 1/r² rather than the field of EM waves composed of actual photons, which drop as 1/r.
Most of these have analogous effects in solid-state physics; indeed, one can often gain a better intuitive understanding by examining these cases. In semiconductors, the roles of electrons, positrons and photons in field theory are replaced by electrons in the conduction band, holes in the valence band, and phonons or vibrations of the crystal lattice. A virtual particle is in a virtual state where the probability amplitude is not conserved. Examples of macroscopic virtual phonons, photons, and electrons in the case of the tunneling process were presented by Günter Nimtz and Alfons A. Stahlhofen.
Feynman diagrams
The calculation of scattering amplitudes in theoretical particle physics requires the use of some rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented as Feynman diagrams. The appeal of the Feynman diagrams is strong, as it allows for a simple visual presentation of what would otherwise be a rather arcane and abstract formula. In particular, part of the appeal is that the outgoing legs of a Feynman diagram can be associated with actual, on-shell particles. Thus, it is natural to associate the other lines in the diagram with particles as well, called the "virtual particles". In mathematical terms, they correspond to the propagators appearing in the diagram.
In the adjacent image, the solid lines correspond to actual particles (of momentum p1 and so on), while the dotted line corresponds to a virtual particle carrying momentum k. For example, if the solid lines were to correspond to electrons interacting by means of the electromagnetic interaction, the dotted line would correspond to the exchange of a virtual photon. In the case of interacting nucleons, the dotted line would be a virtual pion. In the case of quarks interacting by means of the strong force, the dotted line would be a virtual gluon, and so on.
Virtual particles may be mesons or vector bosons, as in the example above; they may also be fermions. However, in order to preserve quantum numbers, most simple diagrams involving fermion exchange are prohibited. The image to the right shows an allowed diagram, a one-loop diagram. The solid lines correspond to a fermion propagator, the wavy lines to bosons.
Vacuums
In formal terms, a particle is considered to be an eigenstate of the particle number operator a†a, where a is the particle annihilation operator and a† the particle creation operator (sometimes collectively called ladder operators). In many cases, the particle number operator does not commute with the Hamiltonian for the system. This implies the number of particles in an area of space is not a well-defined quantity but, like other quantum observables, is represented by a probability distribution. Since these particles are not certain to exist, they are called virtual particles or vacuum fluctuations of vacuum energy. In a certain sense, they can be understood to be a manifestation of the time-energy uncertainty principle in a vacuum.
An important example of the "presence" of virtual particles in a vacuum is the Casimir effect. Here, the explanation of the effect requires that the total energy of all of the virtual particles in a vacuum can be added together. Thus, although the virtual particles themselves are not directly observable in the laboratory, they do leave an observable effect: Their zero-point energy results in forces acting on suitably arranged metal plates or dielectrics. On the other hand, the Casimir effect can be interpreted as the relativistic van der Waals force.
Pair production
Virtual particles are often popularly described as coming in pairs, a particle and antiparticle which can be of any kind. These pairs exist for an extremely short time, and then mutually annihilate, or in some cases, the pair may be boosted apart using external energy so that they avoid annihilation and become actual particles, as described below.
This may occur in one of two ways. In an accelerating frame of reference, the virtual particles may appear to be actual to the accelerating observer; this is known as the Unruh effect. In short, the vacuum of a stationary frame appears, to the accelerated observer, to be a warm gas of actual particles in thermodynamic equilibrium.
Another example is pair production in very strong electric fields, sometimes called vacuum decay. If, for example, a pair of atomic nuclei are merged to very briefly form a nucleus with a charge greater than about 140 (that is, larger than about the inverse of the fine-structure constant, 1/α ≈ 137, a dimensionless quantity), the strength of the electric field will be such that it will be energetically favorable to create positron–electron pairs out of the vacuum or Dirac sea, with the electron attracted to the nucleus to annihilate the positive charge. This pair-creation amplitude was first calculated by Julian Schwinger in 1951.
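The "about 140" threshold quoted above is of the order of the inverse fine-structure constant, which is easy to check numerically (the constant's value is a standard reference figure, not taken from this article):

```python
# The critical nuclear charge for spontaneous pair creation is of order
# Z ~ 1/alpha, where alpha is the fine-structure constant.
ALPHA = 7.2973525693e-3  # fine-structure constant (dimensionless)

z_critical_estimate = 1 / ALPHA
print(f"1/alpha = {z_critical_estimate:.1f}")  # ~137, comparable to the ~140 quoted
```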
Compared to actual particles
As a consequence of quantum mechanical uncertainty, any object or process that exists for a limited time or in a limited volume cannot have a precisely defined energy or momentum. For this reason, virtual particles – which exist only temporarily as they are exchanged between ordinary particles – do not typically obey the mass-shell relation; the longer a virtual particle exists, the more the energy and momentum approach the mass-shell relation.
The lifetime of real particles is typically vastly longer than that of virtual particles. Electromagnetic radiation consists of real photons, which may travel light years between the emitter and absorber, but (Coulombic) electrostatic attraction and repulsion is a relatively short-range force that is a consequence of the exchange of virtual photons.
| Physical sciences | Subatomic particles: General | Physics |
192438 | https://en.wikipedia.org/wiki/Shovel | Shovel | A shovel is a tool used for digging, lifting, and moving bulk materials, such as soil, coal, gravel, snow, sand, or ore. Most shovels are hand tools consisting of a broad blade fixed to a medium-length handle. Shovel blades are usually made of sheet steel or hard plastics and are very strong. Shovel handles are usually made of wood (especially specific varieties such as ash or maple) or glass-reinforced plastic (fiberglass).
Hand shovel blades made of sheet steel usually have a folded seam or hem at the back to make a socket for the handle. This fold also commonly provides extra rigidity to the blade. The handles are usually riveted in place. A T-piece is commonly fitted to the end of the handle to aid grip and control where the shovel is designed for moving soil and heavy materials. These designs can all be easily mass-produced.
The term shovel also applies to larger excavating machines called power shovels, which serve the same purpose—digging, lifting, and moving material. Although such modern power shovels as front-end loaders and excavators (including tractors that feature a loading bucket on one end and a backhoe for digging and placing material on the other) descend from steam shovels and perform similar work, they are not classified as shovels.
Hand shovels have been adapted for many different tasks and environments. They can be optimized for a single task or designed as cross-over or compromise multitaskers. They are commonly used in agriculture.
Shovels are also used in archaeology to locate buried features and to excavate subsurface deposits.
History
In the Neolithic age and earlier, a large animal's scapula (shoulder blade) was often used as a crude shovel or spade. Shovels at this time were often used for farming.
The later invention of purpose-built shovels was a ground-breaking development. Manual shoveling, often in combination with picking, was the chief means of excavation in construction until mechanization via steam shovels and later hydraulic equipment (excavators such as backhoes and loaders) gradually replaced most manual shoveling. The same is also true of the history of mining and quarrying and of bulk materials handling in industries such as steelmaking and stevedoring. Railroad cars and cargo holds containing ore, coal, gravel, sand, or grains were often loaded and unloaded this way. These industries did not always rely exclusively on such work, but such work was a ubiquitous part of them. Until the 1950s, manual shoveling employed large numbers of workers. Groups of workers called 'labor gangs' were assigned to whatever digging or bulk materials handling was needed in any given week, and dozens or hundreds of workers with hand shovels would do the kind of rapid excavating or materials handling that today is usually accomplished with powered excavators and loaders operated by a few skilled operators. Thus the cost of labor, even when each individual worker was poorly paid, was a tremendous expense of operations. Productivity of the business was tied mostly to labor productivity. It still often is even today; but in the past it was even more so. In industrial and commercial materials handling, hand shoveling was later replaced with loaders and backhoes.
Given the central importance and cost of manual labour in industry in the late 19th and early 20th centuries, the "science of shoveling" was something of great interest to developers of scientific management such as Frederick Winslow Taylor. Taylor, with his focus on time and motion study, took an interest in differentiating the many motions of manual labor to a far greater degree than others tended to. Managers might not care to analyze it (possibly motivated by the assumption that manual labor is intellectually simple work), and workers might not care to analyze it in any way that encouraged management to take away the prerogative in craft work for the craftsman to decide the details of his methods. Taylor realized that failing to analyze shoveling practice represented a missed opportunity to discover or synthesize best practices for shoveling, which could achieve the highest productivity (value for dollar spent). It was Taylor and colleagues in the 1890s through 1910s who greatly expanded the existing idea of varied shovel designs with different-sized scoops, one for each material, based on the material's density. Under scientific management, it was no longer acceptable to use the same shovel for shoveling brown coal one day and gravel the next. Taylor said the increased worker productivity, and corresponding savings in wages paid, would offset the capital cost of maintaining two shovels.
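Taylor's density-matched scoops can be illustrated with simple arithmetic: to hold the per-load mass constant, scoop volume must scale inversely with the material's bulk density. The target load and densities below are illustrative assumptions, not figures from this article:

```python
# Size a scoop so that every shovel-load has (roughly) the same mass,
# regardless of material: volume = target_mass / bulk_density.
TARGET_LOAD_KG = 9.75  # ~21.5 lb per load (illustrative figure)

def scoop_litres(bulk_density_kg_m3: float) -> float:
    """Scoop volume in litres giving TARGET_LOAD_KG per load."""
    return TARGET_LOAD_KG / bulk_density_kg_m3 * 1000

# Assumed bulk densities (kg/m^3): light coal vs dense iron ore.
for material, rho in [("brown coal", 750), ("gravel", 1600), ("iron ore", 2500)]:
    print(f"{material:>10}: {scoop_litres(rho):5.1f} L scoop")
```

The same calculation explains why one shovel for both brown coal and gravel cannot be right under this scheme: the appropriate scoop volumes differ by roughly a factor of two.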
During the Second Industrial Revolution around 1900, heavy equipment such as crawler excavators became available.
Shovels known as entrenching tools were made by the British in 1908. They were used by the Germans in World War I and World War II.
Types
| Technology | Agricultural tools | null |
192500 | https://en.wikipedia.org/wiki/Planetesimal | Planetesimal | Planetesimals () are solid objects thought to exist in protoplanetary disks and debris disks. Believed to have formed in the Solar System about 4.6 billion years ago, they aid study of its formation.
Formation
A widely accepted theory of planet formation, the planetesimal hypothesis of Viktor Safronov, states that planets form from cosmic dust grains that collide and stick to form ever-larger bodies. Once a body reaches around a kilometer in size, its constituent grains can attract each other directly through mutual gravity, enormously aiding further growth into moon-sized protoplanets. Smaller bodies must instead rely on Brownian motion or turbulence to cause the collisions leading to sticking. The mechanics of collisions and mechanisms of sticking are intricate. Alternatively, planetesimals may form in a very dense layer of dust grains that undergoes a collective gravitational instability in the mid-plane of a protoplanetary disk—or via the concentration and gravitational collapse of swarms of larger particles in streaming instabilities. Many planetesimals eventually break apart during violent collisions, as 4 Vesta and 90 Antiope may have, but a few of the largest ones may survive such encounters and grow into protoplanets and, later, planets.
Planetesimals in the Solar System
It has been inferred that about 3.8 billion years ago, after a period known as the Late Heavy Bombardment, most of the planetesimals within the Solar System had either been ejected from the Solar System entirely, into distant eccentric orbits such as the Oort cloud, or had collided with larger objects due to the regular gravitational nudges from the giant planets (particularly Jupiter and Neptune). A few planetesimals may have been captured as moons, such as Phoebe (a moon of Saturn) and many other small high-inclination moons of the giant planets.
Planetesimals that have survived to the current day are valuable to science because they contain information about the formation of the Solar System. Although their exteriors are subjected to intense solar radiation that can alter their chemistry, their interiors contain pristine material essentially untouched since the planetesimal was formed. This makes each planetesimal a 'time capsule', whose composition might reveal the conditions in the Solar Nebula from which our planetary system was formed. The most primitive planetesimal visited by spacecraft is the contact binary Arrokoth.
Definition of planetesimal
The word planetesimal is derived from the word infinitesimal and means an ultimately small fraction of a planet.
While the name is always applied to small bodies during the process of planet formation, some scientists also use the term planetesimal as a general term to refer to many small Solar System bodies – such as asteroids and comets – which are left over from the formation process. A group of the world's leading planet formation experts decided at a conference in 2006 on the following definition of a planetesimal:
A planetesimal is a solid object arising during the accumulation of orbiting bodies whose internal strength is dominated by self-gravity and whose orbital dynamics is not significantly affected by gas drag. This corresponds to objects larger than approximately 1 km in the solar nebula.
Bodies large enough not only to hold together by gravitation but also to deflect the paths of approaching rocks over distances of several radii begin to grow faster. These bodies, from roughly 100 km to 1,000 km in size, are called embryos or protoplanets.
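Why larger bodies start to grow faster can be sketched with the standard gravitational-focusing argument: the effective collision cross-section exceeds the geometric one by a factor of 1 + (v_esc/v_rel)². The density and relative velocity below are illustrative assumptions, not values from this article:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m: float, density: float = 3000.0) -> float:
    """Escape speed (m/s) of a uniform sphere of the given radius and density."""
    mass = density * (4.0 / 3.0) * math.pi * radius_m**3
    return math.sqrt(2 * G * mass / radius_m)

def focusing_factor(radius_m: float, v_rel: float, density: float = 3000.0) -> float:
    """Gravitational-focusing enhancement of the collision cross-section."""
    return 1.0 + (escape_velocity(radius_m, density) / v_rel) ** 2

# A 1 km planetesimal barely focuses slow impactors; a 500 km embryo
# sweeps up material over many times its geometric cross-section.
for r in (1e3, 1e5, 5e5):
    print(f"radius {r/1e3:6.0f} km: cross-section x{focusing_factor(r, 10.0):.2f}")
```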
In the current Solar System, these small bodies are usually also classified by dynamics and composition, and may have subsequently evolved to become comets, Kuiper belt objects or trojan asteroids, for example. In other words, some planetesimals became other types of body once planetary formation had finished, and may be referred to by either or both names.
The above definition is not endorsed by the International Astronomical Union, and other working groups may choose to adopt the same or a different definition. The dividing line between a planetesimal and protoplanet is typically framed in terms of the size and the stages of development that the potential planet has already gone through: planetesimals combine to form a protoplanet, and protoplanets continue to grow (faster than planetesimals).
| Physical sciences | Planetary science | Astronomy |
192524 | https://en.wikipedia.org/wiki/Colchicum | Colchicum | Colchicum is a genus of perennial flowering plants containing around 160 species which grow from bulb-like corms. It is a member of the botanical family Colchicaceae, and is native to West Asia, Europe, parts of the Mediterranean coast, down the East African coast to South Africa and the Western Cape. In this genus, the ovary of the flower is underground. As a consequence, the styles are extremely long in proportion, often more than . All species in the genus are toxic.
Common names
The common names autumn crocus, meadow saffron and naked lady may be applied to the whole genus or to many of its species; they refer to the "naked" crocus-like flowers that appear in late summer or autumn, long before the strap-like foliage which appears in spring.
Colchicum and Crocus look alike and can be confused by the casual observer. To add to the confusion, there are autumn-flowering species of crocus. However, colchicums have 3 styles and 6 stamens, while crocuses have 1 style supporting 3 long stigmas and 3 stamens. In addition, the corm structures are quite different—in Colchicum, the corm is irregular, while in crocuses, the corm is like a flattened ball. Crocus is in the iris family, Iridaceae.
Etymology
The name of the genus derives from Κολχίς (Colchis), the Ancient Greek name for the region of კოლხეთი (Kolkhida) in modern Georgia (Caucasus). Colchis features in Greek mythology as the land to which the Argonauts journeyed in quest of the golden fleece and where Jason encountered Medea. The Greek toponym Colchis is thought by scholars to derive from the Urartian Qulḫa, pronounced "Kolcha" (guttural "ch" - as in Scots loch).
Relationships
Colchicum melanthioides, also known as Androcymbium melanthioides, is probably the best known species from the tropical regions. In contrast to most temperate colchicums, the flower and leaves are produced at the same time, the white flowers usually in a small corymb that is enclosed by white bracts. Close relatives such as Colchicum scabromarginatum (Androcymbium scabromarginatum) and Colchicum coloratum (Androcymbium burchellii) have flowers with very short stalks and may be pollinated by rodents.
Cultivation
Temperate colchicums are commonly grown in gardens as ornamental flowers. Species found in cultivation include:
C. × agrippinum
C. autumnale
C. × byzantinum
C. cilicicum
C. lusitanum
C. speciosum
C. tenorei
There are also cultivars and hybrids such as:
C. 'Dick Trotter' (violet with white centre)
C. 'Disraeli' (purple white)
C. 'Giant' (red with white centre)
C. 'Harlekijn' (white with purple band)
C. 'Lilac Wonder' (lilac)
C. 'Pink Goblet' (violet-purple)
C. 'Poseidon' (purple)
C. 'Rosy Dawn' (rose pink)
C. 'Violet Queen' (purple)
C. 'Waterlily' (double, lilac-pink)
Those marked have gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017).
In the United Kingdom, the National Collection of colchicums is maintained at Felbrigg Hall, Norfolk.
Medicinal uses and poisonous properties
Plants in this genus contain toxic amounts of the alkaloid colchicine which is used pharmaceutically to treat gout and Familial Mediterranean fever. The use of the roots and seeds in traditional medicine is thought to have arisen due to the presence of this drug.
Its leaves, corm and seeds are poisonous. Murderer Catherine Wilson is thought to have used it to poison a number of victims in the 19th century. The species known to contain the most lethal amount of colchicine is C. autumnale.
Species
The following are the species included in the genus Colchicum. Many species previously classified in Androcymbium, Bulbocodium and Merendera were moved to Colchicum based on molecular genetic evidence. Androcymbium is currently considered a separate genus by some.
Colchicum × agrippinum (probably a hybrid of garden origin)
Colchicum alpinum DC. in J.B.A.M.de Lamarck & A.P.de Candolle
Colchicum androcymbioides (Valdés) K.Perss.
Colchicum antepense K.Perss.
Colchicum antilibanoticum Gomb.
Colchicum arenarium Waldst. & Kit.
Colchicum arenasii Fridl.
Colchicum asteranthum Vassiliades & K.M.Perss.
Colchicum atropurpureum Stapf ex Stearn (unresolved name)
Colchicum atticum Spruner ex Tommas.
Colchicum autumnale L.
Colchicum balansae Planch.
Colchicum baytopiorum C.D.Brickell
Colchicum bivonae Guss.
Colchicum boissieri Orph.
Colchicum bulbocodium Ker Gawl.
Colchicum burttii Meikle
Colchicum × byzantinum Ker Gawl.
Colchicum chalcedonicum Azn.
Colchicum chimonanthum K.Perss.
Colchicum chlorobasis K.Perss.
Colchicum cilicicum (Boiss.) Dammer
Colchicum confusum K.Perss.
Colchicum corsicum Baker
Colchicum cretense Greuter
Colchicum crocifolium Boiss.
Colchicum cupanii Guss.
Colchicum davisii C.D.Brickell
Colchicum decaisnei Boiss.
Colchicum doerfleri Halácsy
Colchicum dolichantherum K.Perss.
Colchicum eichleri (Regel) K.Perss.
Colchicum euboeum (Boiss.) K.Perss.
Colchicum fasciculare (L.) R.Br.
Colchicum feinbruniae K.Perss.
Colchicum figlalii (Varol) Parolly & Eren
Colchicum filifolium (Cambess.) Stef.
Colchicum freynii Bornm.
Colchicum gonarei Camarda
Colchicum graecum K.Perss.
Colchicum greuteri (Gabrieljan) K.Perss.
Colchicum haynaldii Heuff.
Colchicum heldreichii K.Perss.
Colchicum hierosolymitanum Feinbrun
Colchicum hirsutum Stef.
Colchicum hungaricum Janka
Colchicum ignescens K.Perss.
Colchicum imperatoris-friderici Siehe ex K.Perss.
Colchicum inundatum K.Perss.
Colchicum kesselringii Regel
Colchicum kotschyi Boiss.
Colchicum kurdicum (Bornm.) Stef.
Colchicum laetum Steven
Colchicum lagotum K.Perss.
Colchicum leptanthum K.Perss.
Colchicum lingulatum Boiss. & Spruner in P.E.Boissier
Colchicum longifolium Castagne
Colchicum lusitanum Brot.
Colchicum luteum Baker
Colchicum macedonicum Kosanin
Colchicum macrophyllum B.L.Burtt
Colchicum manissadjianii (Azn.) K.Perss.
Colchicum micaceum K.Perss.
Colchicum micranthum Boiss.
Colchicum minutum K.Perss.
Colchicum mirzoevae (Gabrieljan) K.Perss.
Colchicum montanum L.
Colchicum multiflorum Brot.
Colchicum munzurense K.Perss.
Colchicum nanum K.Perss.
Colchicum neapolitanum (Ten.) Ten.
Colchicum parlatoris Orph.
Colchicum parnassicum Sart., Orph. & Heldr. in P.E.Boissier
Colchicum paschei K.Perss.
Colchicum peloponnesiacum Rech.f. & P.H.Davis
Colchicum persicum Baker
Colchicum polyphyllum Boiss. & Heldr. in P.E.Boissier
Colchicum pulchellum K.Perss.
Colchicum pusillum Sieber
Colchicum raddeanum (Regel) K.Perss.
Colchicum rausii K.Perss.
Colchicum ritchii R.Br.
Colchicum robustum (Bunge) Stef.
Colchicum sanguicolle K.Perss.
Colchicum schimperi Janka ex Stef.
Colchicum serpentinum Woronow ex Miscz.
Colchicum sfikasianum Kit Tan & Iatroú
Colchicum sieheanum Hausskn. ex Stef.
Colchicum soboliferum (C.A.Mey.) Stef.
Colchicum speciosum Steven
Colchicum stevenii Kunth
Colchicum szovitsii Fisch. & C.A.Mey.
Colchicum trigynum (Steven ex Adam) Stearn
Colchicum triphyllum Kunze
Colchicum troodi Kotschy in F.Unger & C.G.T.Kotschy
Colchicum tunicatum Feinbrun
Colchicum turcicum Janka
Colchicum tuviae Feinbrun
Colchicum umbrosum Steven
Colchicum varians (Freyn & Bornm.) Dyer in B.D.Jackson
Colchicum variegatum L.
Colchicum wendelboi K.Perss.
Colchicum woronowii Bokeriya
Colchicum zahnii Heldr.
| Biology and health sciences | Monocots | null |
192575 | https://en.wikipedia.org/wiki/Illicium%20verum | Illicium verum | Illicium verum (star anise or badian, Chinese star anise, star anise seed, star aniseed and star of anise) is a medium-sized evergreen tree native to South China and northeast Vietnam. Its star-shaped pericarp, harvested just before ripening, is used as a spice that closely resembles anise in flavor. Its primary production country is China, followed by Vietnam and other Southeast Asian countries. Star anise oil is a highly fragrant oil used in cooking, perfumery, soaps, toothpastes, mouthwashes, and skin creams. Until 2012, when it switched to using genetically modified E. coli, Roche Pharmaceuticals used up to 90% of the world's annual star anise crop to produce oseltamivir (Tamiflu) via shikimic acid.
Etymology and nomenclature
Illicium comes from the Latin meaning "entice" or "seduce".
Verum means "true" or "genuine".
The name "badian" appears to derive, via French badiane, from the apparently descriptive Chinese name for it, 八角 (bājiǎo), lit. "eight horns". However, a derivation from the Persian بادیان (bādiyān), "fennel", exists, with the Oxford English Dictionary indicating that its origin before that is unknown.
Description
Leaves are aromatic, simple and lanceolate, obovate-elliptic or elliptic, 5–15 cm × 1.5–5 cm, coriaceous to thickly coriaceous, with an acute apex and pubescent lower side. Flowers are solitary, bisexual, pink to dark red, axillary or subterminal. The perianth has 7–12 lobes, arranged spirally; stamens number 11–20, arranged spirally, with short, thick filaments; carpels are usually 8, free, arranged in a single whorl. The flower peduncle is 1.5–4 cm long; tepals number from seven to twelve and are broadly elliptic to broadly ovate; anthers are 1–1.5 mm long; pollen grains are trisyncolpate.
The fruit is a capsule-like follicetum, star-shaped, reddish-brown, consisting of six to eight follicles arranged in a whorl. Each follicle is boat-shaped, about 1–2 cm long, rough, rigid, hard and wrinkled, reddish-brown, and contains one seed, opening along the ventral edge when ripe. Seeds are brown, compressed ovoid, smooth, shiny and brittle, approximately 8–9 mm × 6 mm.
Differences from similar taxa:
Illicium anisatum has smaller fruits that do not form a regular star, owing to the abortion of some carpels. Its fruit follicles are not swollen in the middle and have a more pointed apex; there are usually more than eight follicles, and the fruit has a weaker odour. The seeds of Illicium anisatum are flat or almost spherical.
Use
Culinary use
Star anise contains anethole, the same compound that gives anise, an unrelated plant, its flavor. Star anise has come into use in the West as a less expensive substitute for anise in baking, as well as in liquor production, most distinctively in the production of the liqueur Galliano. Star anise enhances the flavor of meat.
It is used as a spice in preparation of biryani and masala chai all over the Indian subcontinent. It is widely used in Chinese cuisine, and in Malay and Indonesian cuisines. It is widely grown for commercial use in China, India, and most other countries in Asia. Star anise is an ingredient of the traditional five-spice powder of Chinese cooking. It is also a major ingredient in the making of phở, a Vietnamese noodle soup.
It is also used in the French recipe for mulled wine, vin chaud (hot wine). If allowed to steep in coffee, it deepens and enriches the flavor. The pods can be reused this way several times, by the potful or cup, as hot water extracts the flavor components more readily with each steeping.
Drug precursor
Star anise is the major source of the chemical compound shikimic acid, a primary precursor in the pharmaceutical synthesis of the anti-influenza drug oseltamivir (Tamiflu). An industrial method for the production of shikimic acid using fermentation of E. coli bacteria was discovered in 2005, and applied in the 2009 swine flu pandemic to address Tamiflu shortages, eventually reversing price increases for star anise as a raw material of shikimic acid. As of 2018, fermentation of E. coli was the manufacturing process of choice to produce shikimic acid for synthesis of Tamiflu.
Toxicity of related species
Illicium verum is not toxic; however, several related species are.
Japanese star anise (Illicium anisatum), a similar tree, is highly toxic and inedible; in Japan, it has instead been burned as incense. Cases of illness, including "serious neurological effects, such as seizures", reported after using star anise tea may be a result of deliberate economically motivated adulteration with this species. Japanese star anise contains the neurotoxin anisatin, which also causes severe inflammation of the kidneys (nephritis), urinary tract, and digestive organs when ingested.
Swamp star anise Illicium parviflorum, a similar tree found in the southern United States, is highly toxic and should not be used for folk remedies or as a cooking ingredient.
ISO Standardization
ISO 676:1995 – covers the nomenclature of the variety and its cultivars
Identification
Refer to the 4th edition of the European Pharmacopoeia (monograph 1153)
Differentiation from other species
Joshi et al. have used fluorescent microscopy and gas chromatography to distinguish the species, while Lederer et al. employed thin layer chromatography with HPLC-MS/MS.
Specifications
ISO 11178:1995 – a specification for its dried fruits
192605 | https://en.wikipedia.org/wiki/Pharmacist | Pharmacist | A pharmacist, also known as a chemist in Commonwealth English, is a healthcare professional who is knowledgeable about preparation, mechanism of action, clinical usage and legislation of medications in order to dispense them safely to the public and to provide consultancy services. A pharmacist also often serves as a primary care provider in the community and offers services, such as health screenings and immunizations.
Pharmacists undergo university or graduate-level education to understand the biochemical mechanisms and actions of drugs, drug uses, therapeutic roles, side effects, potential drug interactions, and monitoring parameters. This knowledge is combined with training in anatomy, physiology, and pathophysiology. Pharmacists interpret and communicate this specialized knowledge to patients, physicians, and other health care providers.
Among other licensing requirements, different countries require pharmacists to hold either a Bachelor of Pharmacy, Master of Pharmacy, or a Doctor of Pharmacy degree.
The most common pharmacist positions are that of a community pharmacist (also referred to as a retail pharmacist, first-line pharmacist or dispensing chemist), or a hospital pharmacist, where they instruct and counsel on the proper use and adverse effects of medically prescribed drugs and medicines. In most countries, the profession is subject to professional regulation. Depending on the legal scope of practice, pharmacists may contribute to prescribing (also referred to as "pharmacist prescribers") and administering certain medications (e.g., immunizations) in some jurisdictions. Pharmacists may also practice in a variety of other settings, including industry, wholesaling, research, academia, formulary management, military, and government.
Nature of work
Historically, the fundamental role of pharmacists as a healthcare practitioner was to check and distribute drugs to doctors for medication that had been prescribed to patients. In more modern times, pharmacists advise patients and health care providers on the selection, dosages, interactions, and side effects of medications, and act as a learned intermediary between a prescriber and a patient. Pharmacists monitor the health and progress of patients to ensure the safe and effective use of medication. Pharmacists may practice compounding; however, many medicines are now produced by pharmaceutical companies in a standard dosage and drug delivery form. In some jurisdictions, pharmacists have prescriptive authority to either independently prescribe under their own authority or in collaboration with a primary care physician through an agreed upon protocol called a collaborative practice agreement.
Increased numbers of drug therapies, aging but more knowledgeable and demanding populations, and deficiencies in other areas of the health care system seem to be driving increased demand for the clinical counseling skills of the pharmacist. One of the most important roles that pharmacists are currently taking on is one of pharmaceutical care. Pharmaceutical care involves taking direct responsibility for patients and their disease states, medications, and management of each to improve outcomes. Pharmaceutical care has many benefits that may include but are not limited to: decreased medication errors; increased patient compliance in medication regimen; better chronic disease state management, including hypertension and other cardiovascular disease risk factors; strong pharmacist–patient relationship; and decreased long-term costs of medical care.
Pharmacists are often the first point-of-contact for patients with health inquiries. Thus pharmacists have a significant role in assessing medication management in patients, and in referring patients to physicians. These roles may include, but are not limited to:
clinical medication management, including reviewing and monitoring of medication regimens
assessment of patients with undiagnosed or diagnosed conditions, and ascertaining clinical medication management needs
specialized monitoring of disease states, such as dosing drugs in kidney and liver failure
compounding medicines
providing pharmaceutical information
providing patients with health monitoring and advice, including advice and treatment of common ailments and disease states
supervising pharmacy technicians and other staff
oversight of dispensing medicines on prescription
provision of and counseling about non-prescription or over-the-counter drugs
education and counseling for patients and other health care providers on optimal use of medicines (e.g., proper use, avoidance of overmedication)
referrals to other health professionals if necessary
pharmacokinetic evaluation
promoting public health by administering immunizations
constructing drug formularies
designing clinical trials for drug development
working with federal, state, or local regulatory agencies to develop safe drug policies
ensuring correctness of all medication labels including auxiliary labels
member of inter-professional care team for critical care patients
symptom assessment leading to medication provision and lifestyle advice for community-based health concerns (e.g. head colds, or smoking cessation)
staged dosing supply (e.g. opioid substitution therapy)
Education and credentialing
The requirements for pharmacy education, pharmacist licensing, and continuing education vary from country to country and between regions/localities within countries. In most countries, pharmacists must obtain a university degree at a pharmacy school or related institution, and/or satisfy other national/local credentialing requirements. In many contexts, students must first complete pre-professional (undergraduate) coursework, followed by about four years of professional academic studies to obtain a degree in pharmacy (such as the Doctor of Pharmacy). In the European Union, pharmacists are required to hold a Master of Pharmacy, which allows them to practice in any other E.U. country, pending professional examinations and language tests in the country in which they want to practice. Pharmacists are educated in pharmacology, pharmacognosy, chemistry, organic chemistry, biochemistry, pharmaceutical chemistry, microbiology, pharmacy practice (including drug interactions, medicine monitoring, medication management), pharmaceutics, pharmacy law, pathophysiology, physiology, anatomy, drug delivery, pharmaceutical care, nephrology, hepatology, and compounding of medications. Additional curriculum may cover diagnosis with emphasis on laboratory tests, disease state management, therapeutics and prescribing (selecting the most appropriate medication for a given patient).
Upon graduation, pharmacists are licensed, either nationally or regionally, to dispense medication of various types in the areas they have trained for.
Some may undergo further specialized training, such as in cardiology or oncology or long-term care. Specialties include:
Academic pharmacist
Clinical pharmacy specialist
Community pharmacist
Compounding pharmacist
Consultant pharmacist
Long-term care pharmacist
Drug information pharmacist
Home health pharmacist
Hospital pharmacist
Industrial pharmacist
Informatics pharmacist
Managed care pharmacist
Military pharmacist
Nuclear pharmacist
Oncology pharmacist
Regulatory-affairs pharmacist
Veterinary pharmacist
Pharmacist clinical pathologist
Pharmacist clinical toxicologist
Training and practice by country
Armenia
The Ministry of Education and Ministry of Health oversee pharmacy school accreditation in Armenia. Pharmacists are expected to have competency in the WHO Model List of Essential Medicines (EML), the use of Standard Treatment Guidelines, drug information, clinical pharmacy, and medicine supply management. There are currently no laws requiring pharmacists to be registered, but all pharmacies must have a license to conduct business. According to a World Health Organization (WHO) report from 2010, there are 0.53 licensed pharmacists and 7.82 licensed pharmacies per 10,000 people in Armenia. Pharmacists are able to substitute generic equivalents at the point of dispensing.
Australia
The Australian Pharmacy Council is the independent accreditation agency for Australian pharmacists. The accreditation standards for Australian pharmacy degrees include compulsory clinical placements, with an emphasis on encouraging rural experiences to develop a rural workforce. It conducts a written examination on behalf of the Pharmacy Board of Australia towards eligibility for registration. The Pharmacy Board of Australia conducts an oral examination at the end of the intern year as the last hurdle prior to registration. The Pharmaceutical Society of Australia provides continuing education programs for pharmacists. The number of full-time equivalent pharmacists working in Australia over the past decade has remained stable. Pharmacy practice is described by practice standards and guidelines, including those from the Pharmaceutical Society of Australia.
The Australian Pharmacy Council is developing accreditation standards for pharmacists to prescribe and for pharmacists to work in aged care. The aged care accreditation standards are being developed in preparation for pharmacists working in residential aged care settings to ensure that they are adequately prepared.
There is at present a shortage of pharmacists, leaving many jobs unfilled, and many pharmacists are leaving the profession; those who remain nevertheless tend to be optimistic about it. Contract and casual work is becoming more common: a contract pharmacist is self-employed, often called a locum, and may be hired for a single shift or a longer period.
Canada
The Canadian Pharmacists Association (CPhA) is the national professional organization for pharmacists in Canada. Specific requirements for practice vary across provinces, but generally include a bachelor's (BSc Pharm) or Doctor of Pharmacy (PharmD) degree from one of 10 Canadian universities offering a pharmacy program, successful completion of a national board examination through the Pharmacy Examining Board of Canada (PEBC) (Quebec being the exception), practical experience through an apprenticeship/internship program, and fluency in French or English. International pharmacy graduates can begin the process of becoming licensed to practice in Canada by enrolling with the National Association of Pharmacy Regulatory Authorities (NAPRA) Pharmacists' Gateway Canada. The vast majority (~70%) of Canada's licensed pharmacists work in community pharmacies, another 15% work in hospitals, and the remainder work in other settings such as industry, government, or universities. Pharmacists' scope of practice varies widely among the 13 provinces and territories and continues to evolve with time. As a result of pharmacists' expanding scope and knowledge application, there has been a purposeful effort to transition the professional programs in Canadian pharmacy schools from baccalaureate curricula to Doctor of Pharmacy degrees, to ensure graduates have the most up-to-date training to match increasing practice requirements.
European Union
The pharmacist qualification in the European Union is regulated by Directive 2005/36/EC, where Section 7, Article 44(2) mandates at least five years of training, including "four years of full-time theoretical and practical training" and a "six-month traineeship in a pharmacy which is open to the public or in a hospital, under the supervision of that hospital's pharmaceutical department". The training of pharmacists must include at least: "Plant and animal biology, Physics, General and inorganic chemistry, Organic chemistry, Analytical chemistry, Pharmaceutical chemistry, including analysis of medicinal products, General and applied biochemistry (medical), Anatomy and physiology; medical terminology, Microbiology, Pharmacology and pharmacotherapy, Pharmaceutical technology, Toxicology, Pharmacognosy, Legislation and, where appropriate, professional ethics", which can be adapted to "scientific and technical progress" according to the procedure in Directive 2005/36/EC.
Germany
In Germany, the education and training is divided into three sections, each ending with a state examination:
University: Basic studies (at least four semesters)
University: Main studies (at least four semesters)
Community Pharmacy / Hospital Pharmacy / Industry: Practical training (12 months; 6 months in a Community Pharmacy).
After the third state examination, a person must be licensed as an RPh ("registered pharmacist") in order to practice pharmacy.
Today, many pharmacists work as employees in public pharmacies. They are paid according to the collective labour agreement between Adexa and the employer associations.
Poland
Polish pharmacists must complete a Master of Pharmacy programme at a medical university and obtain the right to practice as a pharmacist in Poland from a District Pharmaceutical Council. The programme includes 6 months of pharmacy training. The Polish name for the Master of Pharmacy degree (M.Pharm.) is magister farmacji (mgr farm.). Not only pharmacists but also pharmaceutical technicians are allowed to dispense prescription medicines, except for narcotics, psychotropics and very potent medicines; pharmacists subsequently approve prescriptions filled by pharmaceutical technicians. Pharmaceutical technicians must complete 2 years of post-secondary occupational school followed by 2 years of pharmacy training. Pharmacists are eligible to prescribe medicines in exceptional circumstances. All Polish pharmacies are obliged to produce compounded medicines. Most pharmacists in Poland are pharmacy managers and are responsible for pharmacy marketing in addition to traditional activities. To become a pharmacy manager in Poland, a pharmacist is expected to have at least 5 years of professional experience. All pharmacists in Poland must maintain an adequate level of knowledge by participating in various university- and industry-based courses or by undergoing postgraduate specialization.
Sweden
In Sweden, the National Board of Health and Welfare regulates the practice of all legislated health care professionals and is responsible for the registration of pharmacists in the country. The education required to become a licensed pharmacist is regulated by the European Union: the minimum requirement is five years of university studies in a pharmacy program, of which six months must be a pharmacy internship. To be admitted to pharmacy studies, students must complete a three-year natural-science program at a gymnasium (similar to high school, for students roughly 15–20 years old) after elementary school (ages 6–16). Only three universities in the whole of Sweden offer a pharmacy education: Uppsala University, where the Faculty of Pharmacy is located, the University of Gothenburg, and Umeå University. In Sweden, pharmacists are called Apotekare. At pharmacies in Sweden, pharmacists work together with another class of legislated health care professionals called Receptarier (in English, "prescriptionists"), who have completed studies equal to a Bachelor of Science in pharmacy, i.e., three years of university. Prescriptionists also have dispensing rights in Sweden, Norway, Finland and Iceland. The majority of the staff in a pharmacy are Apotekstekniker ("pharmacy technicians"), who complete a three-semester education at a vocational college. Pharmacy technicians do not have dispensing rights in Sweden but are allowed to advise on and sell over-the-counter medicines.
Japan
History
In ancient Japan, the men who fulfilled roles similar to pharmacists were respected. The place of pharmacists in society was settled in the Taihō Code (701) and re-stated in the Yōrō Code (718). Ranked positions in the pre-Heian Imperial court were established; and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists — and even pharmacist assistants — were assigned status superior to all others in health-related fields such as physicians and acupuncturists. In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor.
Contemporary
As of 1997, 46 universities of pharmacy in Japan graduated about 8000 students annually. Contemporary practice of clinical pharmacists in Japan (as evaluated in September 2000) focuses on dispensing of drugs, consultation with patients, supplying drug information, advising on prescription changes and amending prescriptions. These practices have been linked to decreases in the average number of drugs in prescriptions, drug costs and incidence of adverse drug events.
Nigeria
Training to become a registered pharmacist in Nigeria involves a five-year course after six years of secondary/high school, or four years after eight years of secondary/high school (i.e. after 2 years of Advanced-level studies in accredited universities). The degree awarded by most pharmacy schools is a Bachelor of Pharmacy degree (B.Pharm.). However, in the near future, all schools will offer a six-year first-degree course leading to the award of a Pharm.D. (Doctor of Pharmacy) degree. The University of Benin has started the Pharm.D. programme, with other pharmacy schools planning to start soon. The pharmacy degree in Nigeria is unclassified, i.e. awarded without first class, second class upper, etc.; however, graduates may be awarded a pass with distinction in specific fields such as pharmaceutics, pharmacology, medicinal chemistry, etc. Pharmacy graduates are required to undergo one year of tutelage under the supervision of an already registered pharmacist (a preceptor) in a recognized and designated institution before they can become registered pharmacists. The profession is regulated by a government statutory body called the Pharmacists Council of Nigeria. The West African Postgraduate College of Pharmacy runs post-registration courses on advanced-level practice in various fields of pharmacy; it is a college jointly funded by a number of countries in the West African sub-region. There are thousands of Nigerian-trained pharmacists registered and practicing in countries such as the US, the UK, and Canada, due to the relatively poor public-sector salaries in Nigeria.
Pakistan
In Pakistan, the Pharm.D. (Doctor of Pharmacy) degree is a graduate-level professional doctorate degree. Twenty-one universities are registered with the Pharmacy Council of Pakistan for imparting Pharmacy courses. In 2004 the Higher Education Commission of Pakistan and the Pharmacy Council of Pakistan revised the syllabus and changed the 4-year B.Pharmacy (Bachelor of Pharmacy) Program to a 5-year Pharm.D. (Doctor of Pharmacy) program. All 21 universities have started the 5-year Pharm.D Program. In 2011 the Pharmacy Council of Pakistan approved the awarding of a Doctor of Pharmacy degree, a five-year programme at the Department of Pharmacy, University of Peshawar.
Switzerland
In Switzerland, the federal office of public health regulates pharmacy practice. Four Swiss universities offer a major in pharmaceutical studies, the University of Basel, the University of Geneva, the University of Lausanne and the ETH Zurich. To major in pharmaceutical studies takes at least five years. Students spend their last year as interns in a pharmacy combined with courses at the university, with focus on the validation of prescriptions and the manufacturing of pharmaceutical formulations. Since all public health professions are regulated by the government it is also necessary to acquire a federal diploma in order to work in a pharmacy. It is not unusual for pharmaceutical studies majors to work in other fields such as the pharmaceutical industry or in hospitals. Pharmacists work alongside pharma assistants, an apprenticeship that takes three years to complete. Pharmacists can further specialize in various fields; this is organized by PharmaSuisse, the pharmacists' association of Switzerland.
Tanzania
In Tanzania, pharmacy practice is regulated by the national Pharmacy Board, which is also responsible for registration of pharmacists in the country. By international standards, the density of pharmacists is very low, with a mean of 0.18 per 10,000 population. The majority of pharmacists are found in urban areas, with some underserved regions having only 2 pharmacists per region. According to 2007–2009 data, the largest group of pharmacists was employed in the public sector (44%). Those working in private retail pharmacies were 23%, and the rest mostly worked for private wholesalers, pharmaceutical manufacturers, in academia/teaching, or with faith-based or non-governmental facilities. Salaries varied significantly depending on the place of work: those working in academia were the highest paid, followed by those in multilateral non-governmental organizations, while the public sector, including public retail pharmacies and faith-based organizations, paid much less. The Ministry of Health salary scale for medical doctors was considerably higher than that for pharmacists, despite a difference of only one year of training.
Trinidad and Tobago
In Trinidad and Tobago, pharmacy practice is regulated by the Pharmacy Board of Trinidad and Tobago, which is responsible for the registration of pharmacists in the twin islands. The University of the West Indies in St. Augustine offers a 4-year Bachelor of Science in Pharmacy as the sole practicing degree of pharmacy. Graduates undertake a 6-month internship, known as pre-registration, under the supervision of a registered pharmacist, at a pharmacy of their choosing, whether community or institutional. After completion of the required pre-registration period, the graduate can then apply to the Pharmacy Board to become a registered pharmacist. After working 1 calendar year as a registered pharmacist, the individual can become a registered, responsible pharmacist. Being a registered, responsible pharmacist allows the individual to license a pharmacy and be a pharmacist-in-charge.
United Kingdom
In British English (and to some extent Australian English), the professional title known as "pharmacist" is also known as "dispensing chemist" or, more commonly, "chemist". A dispensing chemist usually operates from a pharmacy or chemist's shop, and is allowed to fulfil medical prescriptions and sell over-the-counter drugs and other health-related goods. Pharmacists can undertake additional training to allow them to prescribe medicines for specific conditions.
Practices
In the United Kingdom, most pharmacists working in the National Health Service practice in hospital pharmacy or community pharmacy. The Royal Commission on the National Health Service in 1979 reported that there were nearly 3,000 pharmacists employed in the hospital and community health service in the UK at that time. They were enthusiastic about the idea that pharmacists might develop their role of giving advice to the public.
The new professional role of pharmacist as prescriber has been recognized in the UK since May 2006, under the title "Pharmacist Independent Prescriber". Once qualified, a pharmacist independent prescriber can prescribe any licensed medicine for any medical condition within their competence. This includes controlled drugs, except those in Schedule 1 and except cocaine, diamorphine and dipipanone when prescribed for the treatment of addiction.
Education and registration
Pharmacists, pharmacy technicians and pharmacy premises in the United Kingdom are regulated by the General Pharmaceutical Council (GPhC) for England, Scotland and Wales and by the Pharmaceutical Society of Northern Ireland for Northern Ireland. The role of regulatory and professional body on the mainland was previously carried out by the Royal Pharmaceutical Society of Great Britain, which remained as a professional body after handing over the regulatory role to the GPhC in 2010.
The following criteria must be met for qualification as a pharmacist in the United Kingdom (the Northern Irish body and the GPhC operate separately but have broadly similar registration requirements):
Successful completion of a 4-year Master of Pharmacy degree at a GPhC accredited university. Pharmacists holding degrees in pharmacy from overseas institutions are able to fulfill this stage by undertaking the Overseas Pharmacist Assessment Programme (OSPAP), which is a one-year postgraduate diploma. On completion of the OSPAP, the candidate would proceed with the other stages of the registration process in the same manner as a UK student.
Completion of a 52-week preregistration training period. This is a period of paid or unpaid employment, in an approved hospital or community pharmacy under the supervision of a pharmacist tutor. During this time the student must collect evidence of having met certain competency standards set by the GPhC.
A pass mark in the GPhC registration assessment (formerly an exam). This includes a closed-book paper and an open-book/mental calculations paper (using the British National Formulary and the GPhC's "Standards of Conduct, Ethics and Performance" document as reference sources). The student must achieve an overall mark of 70%, which must include at least 70% in the calculations section of the open-book paper. From June 2016, the assessment involves two papers as before, but the use of a calculator is now allowed; reference sources, however, are no longer permitted in the assessment. Instead, relevant extracts of the British National Formulary are provided within the assessment paper.
Satisfactorily meeting the GPhC's Fitness to Practice Standards.
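The pass-mark rule for the registration assessment above is easy to misread, since a strong overall mark cannot compensate for a weak calculations score. A minimal sketch makes it explicit (the function name and integer-percentage representation are purely illustrative, not part of any GPhC system):

```python
def passes_registration_assessment(overall_pct: int, calculations_pct: int) -> bool:
    """Illustrative check of the pass rule described above: an overall
    mark of at least 70%, which must include at least 70% in the
    calculations section of the open-book paper."""
    return overall_pct >= 70 and calculations_pct >= 70

# A candidate with 75% overall but only 65% in calculations fails:
print(passes_registration_assessment(75, 65))  # False
```

Both thresholds must be met independently: 70% overall with exactly 70% in calculations passes, while 90% overall with 69% in calculations does not.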
United States
In 2014 the United States Bureau of Labor Statistics revealed that there were 297,100 American pharmacist jobs. By 2024 that number is projected to grow by 3%. The majority (65%) of those pharmacists work in retail settings, mostly as salaried employees but some as self-employed owners. About 22% work in hospitals, and the rest mainly in mail-order or Internet pharmacies, pharmaceutical wholesalers, practices of physicians, and the Federal Government.
All graduating pharmacists must now obtain the Doctor of Pharmacy (Pharm.D.) degree before they are eligible to sit for the North American Pharmacist Licensure Examination (NAPLEX) to enter into pharmacy practice. In addition, pharmacists must pass state-level jurisprudence exams in order to practice in each state.
Pharmacy School Accreditation
The Accreditation Council for Pharmacy Education (ACPE) has operated since 1932 as the accrediting body for schools of pharmacy in the United States. The mission of ACPE is "To assure and advance excellence in education for the profession of pharmacy". ACPE is recognized for the accreditation of professional degree programs by the United States Department of Education (USDE) and the Council for Higher Education Accreditation (CHEA). Since 1975, ACPE has also been the accrediting body for continuing pharmacy education. The ACPE board of directors are appointed by the American Association of Colleges of Pharmacy (AACP), the American Pharmacists Association (APhA), the National Association of Boards of Pharmacy (NABP) (three appointments each), and the American Council on Education (one appointment). To obtain licensure in the United States, applicants for the North American Pharmacist Licensure Examination (NAPLEX) must graduate from an ACPE accredited school of pharmacy. ACPE publishes standards that schools of pharmacy must comply with to gain accreditation.
A pharmacy school pursuing accreditation must first apply for and be granted Pre-candidate status. These schools have met all the requirements for accreditation but have not yet enrolled any students; this status indicates that the school of pharmacy has developed its program in accordance with the ACPE standards and guidelines. Once a school has enrolled students but does not yet have a graduating class, it may be granted Candidate status. A Candidate program is expected to continue to mature in accordance with its stated plans, and its graduates have the same standing as those of fully accredited programs. Full accreditation is granted to a program once it has demonstrated compliance with the standards set forth by ACPE.
The customary review cycle for established accredited programs is six years, whereas for programs achieving their initial accreditation this cycle is two years. These are comprehensive on-site evaluations of the programs. Additional evaluations may be conducted at the discretion of ACPE in the interim between comprehensive evaluations.
Education
Acceptance into a Doctor of Pharmacy program depends upon completing specific prerequisites or obtaining a transferable bachelor's degree. Pharmacy school consists of four years of graduate study, including at least one year of practical experience (accelerated programs run January to January and take only three years). Graduates receive a Doctor of Pharmacy (PharmD) degree upon graduation. Most schools require applicants to take the Pharmacy College Admission Test (PCAT) and complete 90 credit hours of university coursework in the sciences, mathematics, composition, and humanities before entry into the PharmD program. Because of the extensive admission requirements and the highly competitive nature of the field, most pharmacy students complete a bachelor's degree before entering pharmacy school.
Possible prerequisites:
Anatomy
Physiology
Biochemistry
Biology
Immunology
Chemical engineering
Economics
Pathophysiology
Physics
Humanities
Microbiology
Molecular biology
Organic chemistry
Physical chemistry
Statistics
Calculus
Besides taking classes, additional requirements before graduating may include a certain number of hours for community service, e.g., working in hospitals, clinics, and retail.
Estimated timeline: 4 years undergraduate + 4 years doctorate + 1–2 years residency + 1–3 years fellowship = 8–13 years
A Doctor of Pharmacy degree (except non-traditional, i.e., transferring a license from another country) is the only degree accepted by the National Association of Boards of Pharmacy (NABP) for eligibility to sit for the North American Pharmacist Licensure Examination (NAPLEX). Previously the United States had a 5-year bachelor's degree in pharmacy. For BS Pharmacy graduates currently licensed in the US, ten universities offer non-traditional doctorate degree programs via part-time, weekend, or online study. These programs are fully accredited by the Accreditation Council for Pharmacy Education (ACPE) but are available only to BS Pharmacy graduates with a license to practice pharmacy. Some institutions still offer 6-year accelerated PharmD programs.
The current Pharm.D. degree curriculum is considerably different from that of the prior BS in pharmacy. It now includes extensive didactic clinical preparation, a full year of hands-on practice experience in a wider array of healthcare settings, and a greater emphasis on clinical pharmacy practice pertaining to pharmacotherapy optimization. Legal requirements in the US for becoming a pharmacist include graduating from an accredited PharmD program, completing a specified number of internship hours under a licensed pharmacist (e.g., 1800 hours in some states), passing the NAPLEX, and passing the Multistate Pharmacy Jurisprudence Examination (MPJE). Arkansas, California, and Virginia have their own exams instead of the MPJE; in those states, pharmacists must pass the Arkansas Jurisprudence Exam, the California Jurisprudence Exam, or the Virginia Pharmacy Law Exam.
Residency is an option for post-graduates that typically lasts 1–2 years. A residency concentrates substantial clinical experience into a short timeframe, and employers generally favor residency-trained applicants for clinical positions, so new graduates often pursue one to remain competitive. The profession is moving toward residency-trained pharmacists for direct patient care clinical services. In 1990, the American Association of Colleges of Pharmacy (AACP) required the new professional degree. Graduates from a PharmD program may also elect to do a fellowship geared toward research. Fellowships vary in length, lasting 1–3 years depending on the program, and usually require at least 1 year of residency.
Specialization and credentialing
American pharmacists can become certified in recognized specialty practice areas by passing an examination administered by one of several credentialing boards.
The Board of Pharmacy Specialties certifies pharmacists in thirteen specialties:
Ambulatory care pharmacy
Cardiology pharmacy
Compounded sterile preparations pharmacy
Critical care pharmacy
Geriatric pharmacy
Infectious diseases pharmacy
Nuclear pharmacy
Nutrition support pharmacy
Oncology pharmacy
Pediatric pharmacy
Pharmacotherapy
Psychiatric pharmacy
Solid organ transplant pharmacy
The American Board of Applied Toxicology certifies pharmacists and other medical professionals in applied toxicology.
Expanding Scope of Practice
Vaccinations
As of 2016, all 50 states and the District of Columbia permit pharmacists to provide vaccination services, but specific protocols vary between states.
California
All licensed California pharmacists can perform the following:
Order and interpret drug therapy related tests
Furnish smoking cessation aids (such as nicotine replacement therapy)
Furnish oral self-administered contraception (birth control pills)
Furnish travel medications recommended by the CDC
Administer vaccinations pursuant to the latest CDC standards for anyone ages 3+
The passage of Assembly Bill 1535 (2014) authorizes pharmacists in California to furnish naloxone without a physician's prescription.
With the passage of Senate Bill 159 in 2019, pharmacists in California are authorized to furnish pre-exposure prophylaxis (PrEP) and post-exposure prophylaxis (PEP) to patients without a physician's prescription. In order to be eligible to dispense, a pharmacist must first "complete a training program approved by the" California State Board of Pharmacy.
California pharmacists can apply for Advanced Practice Pharmacist (APh) licenses from the California State Board of Pharmacy. Senate Bill 493, written by Senator Ed Hernandez, established a section on the Advanced Practice Pharmacist and outlines the definition, scope of practice, qualifications, and regulations of those holding this license. An APh can:
Perform patient assessments
Refer patients to other healthcare providers
Participate in the evaluation and management of diseases and health conditions in collaboration with other health care providers
Initiate, adjust, or discontinue therapy pursuant to the regulations outlined in the bill
To qualify for an advanced practice pharmacist license in California, the applicant must be in good standing with the State Board of Pharmacy, have an active pharmacist license, and fulfill two of three requirements, including certification in their area of clinical practice. The license must be renewed every 2 years, and the APh applying for renewal must complete 10 hours of continuing education in at least one area relevant to their clinical practice.
Earnings and wages
According to a 2010 PharmacyWeek survey, pharmacists were paid the following average annual salaries, depending on their positions:
Directors of Pharmacy $125,200
Retail Staff Pharmacists $113,600
Hospital Staff Pharmacists $111,700
Mail Order Staff Pharmacists $109,300
Clinical Pharmacists $113,400
The American Journal of Pharmaceutical Education in 2014 reported the average salary as around $112,160.
According to the US Bureau of Labor Statistics' Occupational Outlook Handbook, 2016–17 Edition, the median annual wage of wage-and-salary pharmacists in May 2015 was $121,500.
In 2020 US News and World Report noted that the median pharmacist salary was $128,710. The top 25 percent of pharmacist earners made $147,690 that year, while the lowest 25 percent made $112,690.
Vietnam
School students must take a national exam to enter a university of pharmacy or the pharmacy department of a university of medicine and pharmacy; only about 5–7% of candidates pass. The exam covers three subjects: mathematics, chemistry, and either physics or biology. After five years of university training, successful students receive a bachelor's degree in pharmacy and are known as university pharmacists, a term that distinguishes them from college or vocational pharmacists (in some countries such trainees are called pharmacist assistants). An alternative route to the bachelor's degree is as follows: pupils study at a college of pharmacy or a vocational school of pharmacy and then go to work in a pharmacy; after two years of practice they may take an entrance exam, easier than the national one, for a university of pharmacy or the pharmacy department of a university of medicine and pharmacy. Those who pass continue studying toward 3-year or 4-year bachelor's degrees, which are considered equivalent to the 5-year bachelor's degree.
Notable pharmacists
Charles Alderton (1857–1941), American inventor of the soft drink Dr Pepper
Caleb Bradham (1867–1934), American inventor of the soft drink Pepsi
Ikililou Dhoinine (born 1962), Comorian politician
Pravin Gordhan (born 1949), minister in South African government
Luke Howard (1772–1864), "the father of meteorology"
Hubert Humphrey (1911–1978), U.S. Vice President 1965–69
David Jack (1924–2011), leader of research that developed major asthma drugs
Edna O'Brien (born 1930), Irish author and playwright
Hans Christian Ørsted (1777–1851), Danish physicist who discovered electromagnetism
Tadeusz Pankiewicz (1908–1993), Polish pharmacist in the Kraków Ghetto and activist during World War II
John Pemberton (1831–1888), American inventor of the soft drink Coca-Cola
Friedrich Sertürner (1783–1841), German chemist who discovered morphine
Joseph Swan (1828–1914), inventor of the incandescent light bulb
Henri Moissan (1852–1907), chemist and pharmacist who won the 1906 Nobel Prize in Chemistry
Seven Bridges of Königsberg
The Seven Bridges of Königsberg is a historically notable problem in mathematics. Its negative resolution by Leonhard Euler, in 1736, laid the foundations of graph theory and prefigured the idea of topology.
The city of Königsberg in Prussia (now Kaliningrad, Russia) was set on both sides of the Pregel River, and included two large islands—Kneiphof and Lomse—which were connected to each other, and to the two mainland portions of the city, by seven bridges. The problem was to devise a walk through the city that would cross each of those bridges once and only once.
To specify the logical task unambiguously, solutions involving either
reaching an island or mainland bank other than via one of the bridges, or
accessing any bridge without crossing to its other end
are explicitly unacceptable.
Euler proved that the problem has no solution. The difficulty he faced was the development of a suitable technique of analysis, and of subsequent tests that established this assertion with mathematical rigor.
Euler's analysis
Euler first pointed out that the choice of route inside each land mass is irrelevant and that the only important feature of a route is the sequence of bridges crossed. This allowed him to reformulate the problem in abstract terms (laying the foundations of graph theory), eliminating all features except the list of land masses and the bridges connecting them. In modern terms, one replaces each land mass with an abstract "vertex" or node, and each bridge with an abstract connection, an "edge", which only serves to record which pair of vertices (land masses) is connected by that bridge. The resulting mathematical structure is a graph.
Since only the connection information is relevant, the shape of pictorial representations of a graph may be distorted in any way, without changing the graph itself. Only the number of edges (possibly zero) between each pair of nodes is significant. It does not, for instance, matter whether the edges drawn are straight or curved, or whether one node is to the left or right of another.
Next, Euler observed that (except at the endpoints of the walk), whenever one enters a vertex by a bridge, one leaves the vertex by a bridge. In other words, during any walk in the graph, the number of times one enters a non-terminal vertex equals the number of times one leaves it. Now, if every bridge has been traversed exactly once, it follows that, for each land mass (except for the ones chosen for the start and finish), the number of bridges touching that land mass must be even (half of them, in the particular traversal, will be traversed "toward" the landmass; the other half, "away" from it). However, all four of the land masses in the original problem are touched by an odd number of bridges (one is touched by 5 bridges, and each of the other three is touched by 3). Since, at most, two land masses can serve as the endpoints of a walk, the proposition of a walk traversing each bridge once leads to a contradiction.
In modern language, Euler shows that the possibility of a walk through a graph, traversing each edge exactly once, depends on the degrees of the nodes. The degree of a node is the number of edges touching it. Euler's argument shows that a necessary condition for the walk of the desired form is that the graph be connected and have exactly zero or two nodes of odd degree. This condition turns out also to be sufficient—a result stated by Euler and later proved by Carl Hierholzer. Such a walk is now called an Eulerian trail or Euler walk in his honor. Further, if there are nodes of odd degree, then any Eulerian path will start at one of them and end at the other. Since the graph corresponding to historical Königsberg has four nodes of odd degree, it cannot have an Eulerian path.
An alternative form of the problem asks for a path that traverses all bridges and also has the same starting and ending point. Such a walk is called an Eulerian circuit or an Euler tour. Such a circuit exists if, and only if, the graph is connected and all nodes have even degree. All Eulerian circuits are also Eulerian paths, but not all Eulerian paths are Eulerian circuits.
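Euler's counting argument can be checked directly. The sketch below (in Python; the land-mass labels N, S, K, and L for the two banks and the two islands are illustrative, not from the source) tallies how many bridges touch each land mass and counts the nodes of odd degree:

```python
from collections import Counter

# The four land masses: north bank (N), south bank (S),
# Kneiphof island (K), Lomse island (L) -- labels are illustrative.
# Each pair is one bridge; repeated pairs are parallel bridges.
bridges = [("N", "K"), ("N", "K"), ("S", "K"), ("S", "K"),
           ("N", "L"), ("S", "L"), ("K", "L")]

degree = Counter()
for a, b in bridges:
    degree[a] += 1
    degree[b] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
# K is touched by 5 bridges; N, S, and L by 3 each,
# so all four nodes have odd degree.
print(len(odd))  # 4 odd-degree nodes -> no Eulerian trail (need 0 or 2)
```

Since four nodes have odd degree, the condition "exactly zero or two nodes of odd degree" fails, reproducing Euler's negative result.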
Euler's work was presented to the St. Petersburg Academy on 26 August 1735, and published as Solutio problematis ad geometriam situs pertinentis (The solution of a problem relating to the geometry of position) in the journal Commentarii academiae scientiarum Petropolitanae in 1741. It is available in English translation in The World of Mathematics by James R. Newman.
Significance in the history and philosophy of mathematics
In the history of mathematics, Euler's solution of the Königsberg bridge problem is considered to be the first theorem of graph theory and the first true proof in the theory of networks, a subject now generally regarded as a branch of combinatorics. Combinatorial problems of other types such as the enumeration of permutations and combinations had been considered since antiquity.
Euler's recognition that the key information was the number of bridges and the list of their endpoints (rather than their exact positions) presaged the development of topology. The difference between the actual layout and the graph schematic is a good example of the idea that topology is not concerned with the rigid shape of objects.
Hence, as Euler recognized, the "geometry of position" is not about "measurements and calculations" but about something more general. That called into question the traditional Aristotelian view that mathematics is the "science of quantity". Though that view fits arithmetic and Euclidean geometry, it did not fit topology and the more abstract structural features studied in modern mathematics.
Philosophers have noted that Euler's proof is not about an abstraction or a model of reality, but directly about the real arrangement of bridges. Hence the certainty of mathematical proof can apply directly to reality. The proof is also explanatory, giving insight into why the result must be true.
Present state of the bridges
Two of the seven original bridges did not survive the bombing of Königsberg in World War II. Two others were later demolished and replaced by a highway. The three other bridges remain, although only two of them are from Euler's time (one was rebuilt in 1935). These changes leave five bridges existing at the same sites that were involved in Euler's problem. In terms of graph theory, two of the nodes now have degree 2, and the other two have degree 3. Therefore, an Eulerian path is now possible, but it must begin on one island and end on the other.
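Because the modern graph has exactly two odd-degree nodes, an Eulerian trail can actually be constructed, for instance with Hierholzer's algorithm. The edge list below is a hypothetical stand-in chosen only to match the degrees stated above (the two banks N and S of degree 2, the two islands K and L of degree 3); the identities of the actual surviving bridges are not specified here.

```python
from collections import defaultdict

def eulerian_trail(edges):
    """Hierholzer's algorithm for an undirected multigraph.
    Assumes the graph is connected with 0 or 2 odd-degree vertices."""
    adj = defaultdict(list)
    for i, (a, b) in enumerate(edges):
        adj[a].append((b, i))
        adj[b].append((a, i))
    odd = [v for v in adj if len(adj[v]) % 2 == 1]
    start = odd[0] if odd else next(iter(adj))
    used = [False] * len(edges)
    stack, trail = [start], []
    while stack:
        v = stack[-1]
        # Discard edges already traversed from this vertex.
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            u, i = adj[v].pop()
            used[i] = True
            stack.append(u)
        else:
            trail.append(stack.pop())
    return trail[::-1]

# A degree-consistent stand-in for the five surviving bridges:
modern = [("N", "K"), ("S", "K"), ("K", "L"), ("N", "L"), ("S", "L")]
trail = eulerian_trail(modern)
print(trail)  # a walk over all 5 edges, from one island to the other
```

The returned trail visits every edge exactly once and, as the degree argument predicts, begins on one island and ends on the other.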
The University of Canterbury in Christchurch has incorporated a model of the bridges into a grass area between the old Physical Sciences Library and the Erskine Building, housing the Departments of Mathematics, Statistics and Computer Science. The rivers are replaced with short bushes and the central island sports a stone tōrō. Rochester Institute of Technology has incorporated the puzzle into the pavement in front of the Gene Polisseni Center, an ice hockey arena that opened in 2014, and the Georgia Institute of Technology also installed a landscape art model of the seven bridges in 2018.
A popular variant of the puzzle is the Bristol Bridges Walk. Like historical Königsberg, Bristol occupies two river banks and two river islands. However, the configuration of the 45 major bridges in Bristol is such that an Eulerian circuit exists. This cycle has been popularized by a book and news coverage and has featured in different charity events.
Stegosaurus
Stegosaurus is a genus of herbivorous, four-legged, armored dinosaur from the Late Jurassic, characterized by the distinctive kite-shaped upright plates along their backs and spikes on their tails. Fossils of the genus have been found in the western United States and in Portugal, where they are found in Kimmeridgian- to Tithonian-aged strata, dating to between 155 and 145 million years ago. Of the species that have been classified in the upper Morrison Formation of the western US, only three are universally recognized: S. stenops, S. ungulatus and S. sulcatus. The remains of over 80 individual animals of this genus have been found. Stegosaurus would have lived alongside dinosaurs such as Apatosaurus, Diplodocus, Camarasaurus and Allosaurus, the latter of which may have preyed on it.
They were large, heavily built, herbivorous quadrupeds with rounded backs, short fore limbs, long hind limbs, and tails held high in the air. Due to their distinctive combination of broad, upright plates and tail tipped with spikes, Stegosaurus is one of the most recognizable kinds of dinosaurs. The function of this array of plates and spikes has been the subject of much speculation among scientists. Today, it is generally agreed that their spiked tails were most likely used for defense against predators, while their plates may have been used primarily for display, and secondarily for thermoregulatory functions. Stegosaurus had a relatively low brain-to-body mass ratio. It had a short neck and a small head, meaning it most likely ate low-lying bushes and shrubs. One species, Stegosaurus ungulatus, is one of the largest known of all the stegosaurians, with the largest known specimens measuring about long and weighing over .
Stegosaurus remains were first identified during the "Bone Wars" by Othniel Charles Marsh at Dinosaur Ridge National Landmark. The first known skeletons were fragmentary and the bones were scattered, and it would be many years before the true appearance of these animals, including their posture and plate arrangement, became well understood. Despite its popularity in books and film, mounted skeletons of Stegosaurus did not become a staple of major natural history museums until the mid-20th century, and many museums have had to assemble composite displays from several different specimens due to a lack of complete skeletons. Stegosaurus is one of the better-known dinosaurs and has been featured in film, on postal stamps, and in many other types of media.
History and naming
Bone Wars and Stegosaurus armatus
Stegosaurus, one of the many dinosaurs described during the Bone Wars, was first collected by Arthur Lakes; the material consisted of several caudal vertebrae, a dermal plate, and several additional postcranial elements collected north of Morrison, Colorado, at Lakes' YPM Quarry 5. These first, fragmentary bones (YPM 1850) became the holotype of Stegosaurus armatus when Yale paleontologist Othniel Charles Marsh described them in 1877. Marsh initially believed the remains were from an aquatic turtle-like animal, and the scientific name, 'roof(ed) lizard', reflects his early belief that the plates lay flat over the animal's back, overlapping like the shingles (tiles) on a roof. Though several more complete specimens have been attributed to Stegosaurus armatus, preparation and analysis of the bones showed that the type specimen is actually dubious, which is not an ideal situation for the type species of a well-known genus like Stegosaurus. Because of this, the International Commission on Zoological Nomenclature decided to replace the type species with the better-known species Stegosaurus stenops. Marsh also incorrectly referred several fossils to S. armatus, including the dentary and teeth of the sauropod Diplodocus, and placed sauropod limb bones and an Allosaurus tibia under YPM 1850.
On the other side of the Bone Wars, Edward Drinker Cope named Hypsirhophus discurus as another stegosaurian based on fragmentary fossils from Cope's Quarry 3 near the "Cope's Nipple" site in Garden Park, Colorado in 1878. Many later researchers have considered Hypsirhophus to be a synonym of Stegosaurus, though Peter Galton (2010) suggested that it is distinct based on differences in the vertebrae. F. F. Hubbell, a collector for Cope, also found a partial Stegosaurus skeleton while digging at Como Bluff in 1877 or '78; it is now part of the Stegosaurus mount (AMNH 5752) at the American Museum of Natural History.
Arthur Lakes made another discovery in August 1879 at Como Bluff in Albany County, Wyoming, a site also dating to the Upper Jurassic Morrison Formation, where he found several large Stegosaurus fossils. The majority of the fossils came from Quarry 13, including the type specimen of Stegosaurus ungulatus (YPM 1853), which was collected by Lakes and William Harlow Reed the same year and named by Marsh. One of many specimens found at the quarry, it consists of a partial skull, several vertebrae, an ischium, partial limbs, several plates, and four tail spikes, though eight spikes were referred based on a specimen preserved alongside the type. The type specimen also preserves the pes, the namesake of the species, which means "hoofed roofed lizard". In 1881, Marsh named a third species, Stegosaurus "affinis", based only on a hip bone, though the fossil has since been lost and the species declared a nomen nudum. In 1887, Marsh described two more species of Stegosaurus from Como Bluff. The first, Stegosaurus duplex, was based on a partial vertebral column, partial pelvis, and partial left hindlimb (YPM 1858) from Reed's Quarry 11, though the species is now seen as synonymous with Stegosaurus ungulatus. The other, Stegosaurus sulcatus, was named from a left forelimb, scapula, left femur, several vertebrae, and several plates and dermal armor elements (USNM V 4937) collected in 1883. Stegosaurus sulcatus most notably preserves a large spike, speculated to have been a shoulder spike, that is used to diagnose the species.
The greatest Stegosaurus discovery came in 1885: a nearly complete, articulated skeleton of a subadult that included previously unknown elements such as a complete skull, throat ossicles, and articulated plates. Marshall P. Felch collected the skeleton throughout 1885 and 1886 from Morrison Formation strata at his quarry in Garden Park, near Cañon City, Colorado. The skeleton was expertly unearthed by Felch, who first divided it into labeled blocks and prepared them separately. The skeleton was shipped to Marsh in 1887, who named it Stegosaurus stenops ("narrow-faced roof lizard") that year. Though it had not yet been completely prepared, the nearly complete and articulated type specimen of Stegosaurus stenops allowed Marsh to complete the first attempt at a reconstructed Stegosaurus skeleton. This first reconstruction, of S. ungulatus with missing parts filled in from S. stenops, was published by Marsh in 1891. (In 1893, Richard Lydekker mistakenly re-published Marsh's drawing under the label Hypsirhophus.)
Early skeletal mounts and plate interpretation
The skeleton of S. stenops has since been deposited at the National Museum of Natural History in Washington, D.C., where it has been on display since 1915. Another mount made for the NMNH was a composite skeleton of several specimens referred to S. stenops that were collected at Quarry 13 at Como Bluff in 1887, the most complete being USNM 6531. The type specimen of S. ungulatus (YPM 1853) was incorporated into the first ever mounted skeleton of a stegosaur at the Peabody Museum of Natural History in 1910 by Richard Swann Lull. It was initially mounted with paired plates set wide, above the base of the ribs, but was remounted in 1924 with two staggered rows of plates along the midline of the back. Additional specimens recovered from the same quarry by the United States National Museum of Natural History, including tail vertebrae and an additional large plate (USNM 7414), belong to the same individual as YPM 1853.
The next species of Stegosaurus to be named was S. marshi by Frederick Lucas in 1901. Lucas reclassified this species in the new genus Hoplitosaurus later that year. Lucas also re-examined the issue of the life appearance of Stegosaurus, coming to the conclusion that the plates were arranged in pairs in two rows along the back, arranged above the bases of the ribs. Lucas commissioned Charles R. Knight to produce a life restoration of S. ungulatus based on his new interpretation. However, the following year, Lucas wrote that he now believed the plates were probably attached in staggered rows. In 1910, Richard Swann Lull wrote that the alternating pattern seen in S. stenops was probably due to shifting of the skeleton after death. He led the construction of the first ever Stegosaurus skeletal mount at the Peabody Museum of Natural History, which was depicted with paired plates. In 1914, Charles Gilmore argued against Lull's interpretation, noting that several specimens of S. stenops, including the now-completely prepared holotype, preserved the plates in alternating rows near the peak of the back, and that there was no evidence of the plates having shifted relative to the body during fossilization. Gilmore and Lucas' interpretation became the generally accepted standard, and Lull's mount at the Peabody Museum was changed to reflect this in 1924.
Though considered one of the most distinctive types of dinosaur, Stegosaurus displays were missing from a majority of museums during the first half of the 20th century, due largely to the disarticulated nature of most fossil specimens. Until 1918, the only mounted skeleton of Stegosaurus in the world was O. C. Marsh's type specimen of S. ungulatus at the Peabody Museum of Natural History, which was put on display in 1910. However, this mount was dismantled in 1917 when the old Peabody Museum building was demolished. This historically significant specimen was re-mounted ahead of the opening of the new Peabody Museum building in 1925. 1918 saw the completion of the second Stegosaurus mount, and the first depicting S. stenops. This mount was created under the direction of Charles Gilmore at the U.S. National Museum of Natural History. It was a composite of several skeletons, primarily USNM 6531, with proportions designed to closely follow the S. stenops type specimen, which had been on display in relief nearby since 1918. The aging mount was dismantled in 2003 and replaced with a cast in an updated pose in 2004. A third mounted skeleton of Stegosaurus, referred to S. stenops, was put on display at the American Museum of Natural History in 1932. Mounted under the direction of Charles J. Long, the American Museum mount was a composite consisting of partial remains filled in with replicas based on other specimens. In his article about the new mount for the museum's journal, Barnum Brown described (and disputed) the popular misconception that the Stegosaurus had a "second brain" in its hips. Another composite mount, using specimens referred to S. ungulatus collected from Dinosaur National Monument between 1920 and 1922, was put on display at the Carnegie Museum of Natural History in 1940.
Plate arrangement
One of the major subjects of books and articles about Stegosaurus is the plate arrangement. The argument has been a major one in the history of dinosaur reconstruction. Four possible plate arrangements have been proposed over the years:
The plates lie flat along the back, as a shingle-like armor. This was Marsh's initial interpretation, which led to the name 'roof lizard'. As further and complete plates were found, their form showed they stood on edge, rather than lying flat.
By 1891, Marsh published a more familiar view of Stegosaurus, with a single row of plates. This was dropped fairly early on (apparently because it was poorly understood how the plates were embedded in the skin and they were thought to overlap too much in this arrangement). It was revived, in somewhat modified form, in the 1980s, by Stephen Czerkas, based on the arrangement of iguana dorsal spines.
The plates were paired in a double row along the back, such as in Knight's 1901 reconstruction and the 1933 film King Kong.
Two rows of alternating plates. By the early 1960s, this had become (and remains) the prevalent idea, mainly because some S. stenops fossils in which the plates are still partially articulated show this arrangement. This arrangement is chiral and so demands that a specimen be distinguished from its distinct, hypothetical mirror-image form.
Second dinosaur rush
After the end of the Bone Wars, many major institutions in the eastern United States were inspired by the depictions and finds of Marsh and Cope to assemble their own dinosaur fossil collections. The competition was led by the American Museum of Natural History, the Carnegie Museum of Natural History, and the Field Museum of Natural History, which all sent expeditions west to build dinosaur collections and mount skeletons in their fossil halls. The American Museum of Natural History was the first to launch an expedition, in 1897, finding several assorted but incomplete Stegosaurus specimens at Bone Cabin Quarry at Como Bluff. These remains have not been described; they were mounted in 1932, the mount being a composite primarily of specimens AMNH 650 and 470 from Bone Cabin Quarry. A cast of the AMNH mount is on display at the Field Museum, which did not collect any Stegosaurus skeletons during the Second Dinosaur Rush. The Carnegie Museum in Pittsburgh, on the other hand, collected many Stegosaurus specimens, first at Freezout Hills in Carbon County, Wyoming, in 1902–03. Those fossils included only a few postcranial remains, but from the 1900s to the 1920s Carnegie crews at Dinosaur National Monument discovered dozens of Stegosaurus specimens at one of the greatest single sites for the taxon. CM 11341, the most complete skeleton found at the quarry, was used as the basis of a composite Stegosaurus mount in 1940, along with several other specimens to complete the mount. A cranium (CM 12000), one of the few known, was also found by Carnegie crews. Both the AMNH and CM material has been referred to Stegosaurus ungulatus.
Resurgent discoveries
As part of the Dinosaur Renaissance and the resurgent interest in dinosaurs by museums and the public, fossils of Stegosaurus were once again being collected, though few have been fully described. An important discovery came in 1937, again at Garden Park, when a high school teacher named Frank Kessler found fossils while leading a nature hike. Kessler contacted the Denver Museum of Nature and Science, which sent paleontologist Robert Landberg. Landberg and the DMNS crews excavated the site, recovering a 70% complete Stegosaurus skeleton along with turtles, crocodiles, and isolated dinosaur fossils at the quarry, which would be nicknamed "The Kessler Site". Phillip Reinheimer, a steel worker, mounted the Stegosaurus skeleton at the DMNS in 1938. The skeleton remained mounted until 1989, when the museum's curator began a revision of the fossil hall and dispatched an expedition to find additional Stegosaurus remains. The expedition succeeded when Bryan Small, who would become the eponym of the new site, found a nearly complete Stegosaurus near the Kessler site. The articulation and completeness of the "Small Quarry" Stegosaurus clarified the position of the plates and spikes on the back of Stegosaurus and the position and size of the throat ossicles first found by Felch with the Stegosaurus stenops holotype, though like the S. stenops type, the fossils were flattened in a "roadkill" condition. The Stegosaurus skeletons have been mounted alongside an Allosaurus skeleton originally collected in Moffat County, Colorado, in 1979.
1987 saw the discovery of a 40% complete Stegosaurus skeleton in Rabbit Valley in Mesa County, Colorado, by Harold Bollan near the Dinosaur Journey Museum. The skeleton was nicknamed the "Bollan Stegosaurus" and is in the collections of the Dinosaur Journey Museum. At Jensen-Jensen Quarry, an articulated torso including several dorsal plates from a small individual was collected and briefly described in 2014, though the specimen was collected years earlier and is still in preparation at Brigham Young University. 2007 saw the description of a Stegosaurus specimen from the Upper Jurassic Lourinhã Formation of Portugal; the describers placed it as Stegosaurus cf. ungulatus. The specimen is one of the few associated Stegosaurus skeletons known, though it contains only a tooth, 13 vertebrae, partial limbs, a cervical plate, and several assorted postcranial elements.
Sophie the Stegosaurus is the best-preserved Stegosaurus specimen, being 85% intact and containing 360 bones. Sophie was first discovered by Bob Simon in 2003 at a quarry on the Red Canyon Ranch near Shell, Wyoming, and was excavated by crews from the Swiss Sauriermuseum in 2004 and later prepared by museum staff, who gave it the nickname Sarah after the landowner's daughter (Siber, H. J., & Möckli, U. (2009). The stegosaurs of the Sauriermuseum Aathal). The skeleton had been excavated on private land and was available for purchase. The Natural History Museum, London, worked with private donors, most notably Jeremy Herrmann, to find the funding and then arranged to purchase the specimen, which was given the new official museum collection designation NHMUK PV R36730 and re-nicknamed Sophie after Jeremy Herrmann's daughter. The mounted skeleton went on display in December 2014 and was scientifically described in 2015. It is a young adult of undetermined sex, 5.8 m (19 ft) long and 2.9 m (9.5 ft) tall. The Sauriermuseum found several partial stegosaurid skeletons during its excavations at Howe Quarry, Wyoming, in the 1990s, though only Sophie has been described in detail. One skeleton collected at the site, known as "Victoria", is very well preserved, including many vertebrae in semi-articulation, and was found next to an Allosaurus skeleton nicknamed "Big Al II".
Description
The quadrupedal Stegosaurus is one of the most easily identifiable dinosaur genera, due to the distinctive double row of kite-shaped plates rising vertically along the rounded back and the two pairs of long spikes extending horizontally near the end of the tail. S. stenops reached in length and in body mass, while S. ungulatus reached in length and in body mass. Some large individuals may have reached in length and in body mass.
Most of the information known about Stegosaurus comes from the remains of mature animals; more recently, though, juvenile remains of Stegosaurus have been found. One subadult specimen, discovered in 1994 in Wyoming, is long and high, and is estimated to have weighed 1.5-2.2 metric tons (1.6-2.4 short tons) while alive. It is on display in the University of Wyoming Geological Museum.
Skull
The long and narrow skull was small in proportion to the body. It had a small antorbital fenestra, the hole between the nose and eye common to most archosaurs, including modern birds, though lost in extant crocodylians. The skull's low position suggests that Stegosaurus may have been a browser of low-growing vegetation. This interpretation is supported by the absence of front teeth and their likely replacement by a horny beak or rhamphotheca. The lower jaw had flat downward and upward extensions that would have completely hidden the teeth when viewed from the side, and these probably supported a turtle-like beak in life. The presence of a beak extending along much of the jaws may have precluded the presence of cheeks in these species. Such an extensive beak was probably unique to Stegosaurus and some other advanced stegosaurids among ornithischians, which usually had beaks restricted to the jaw tips (Barrett, P.M. (2001). Tooth wear and possible jaw action of Scelidosaurus harrisonii Owen and a review of feeding mechanisms in other thyreophoran dinosaurs. Pp. 25–52 in Carpenter, K. (ed.), The Armored Dinosaurs. Bloomington: Indiana University Press). Other researchers have interpreted these ridges as modified versions of similar structures in other ornithischians that might have supported fleshy cheeks rather than beaks. Stegosaurian teeth were small, triangular, and flat; wear facets show that they did grind their food.
Despite the animal's overall size, the braincase of Stegosaurus was small, being no larger than that of a dog. A well-preserved Stegosaurus braincase allowed Othniel Charles Marsh to obtain, in the 1880s, a cast of the brain cavity or endocast of the animal, which gave an indication of the brain size. The endocast showed the brain was indeed very small, the smallest proportionally of all dinosaur endocasts then known. The fact that an animal weighing over 4.5 metric tons (5 short tons) could have a brain of no more than contributed to the popular old idea that all dinosaurs were unintelligent, an idea now largely rejected. Actual brain anatomy in Stegosaurus is poorly known, but the brain itself was small even for a dinosaur.
Skeleton
In Stegosaurus stenops there are 27 bones in the vertebral column anterior to the sacrum, a varying number of vertebrae in the sacrum (four in most subadults), and around 46 caudal (tail) vertebrae. The presacrals are divided into cervical (neck) and dorsal (back) vertebrae, with around 10 cervicals and 17 dorsals, the total number being one greater than in Hesperosaurus and two greater than in Huayangosaurus, although Miragaia preserves 17 cervicals and an unknown number of dorsals. The first cervical vertebra is the axis bone, which is connected and often fused to the atlas bone. The cervicals become proportionately larger farther posteriorly, although they do not change greatly in anything other than size. Past the first few dorsals, the centra of the vertebrae become more elongate front-to-back and the transverse processes become more dorsally elevated. The sacrum of S. stenops includes four sacral vertebrae, but one of the dorsals is also incorporated into the structure. In some specimens of S. stenops, a caudal is also incorporated, as a caudosacral. In Hesperosaurus there are two dorsosacrals and only four fused sacrals, but in Kentrosaurus there may be as many as seven vertebrae in the sacrum, with both dorsosacrals and caudosacrals. S. stenops preserves 46 caudal vertebrae, with up to 49 possible, and along the series both the centra and the neural spines become smaller, until the neural spines disappear at caudal 35. Around the middle of the tail, the neural spines become bifurcated, meaning they are divided near the top.
With multiple well-preserved skeletons, S. stenops preserves all regions of the body, including the limbs. The scapula (shoulder blade) is sub-rectangular, with a robust blade. Though it is not always perfectly preserved, the acromion ridge is slightly larger than in Kentrosaurus. The blade is relatively straight, although it curves towards the back. There is a small bump on the back of the blade that would have served as the base of the triceps muscle. Articulated with the scapula, the coracoid is sub-circular. The hind feet each had three short toes, while each fore foot had five toes; only the inner two toes had blunt hooves. The phalangeal formula is 2-2-2-2-1, meaning the innermost finger of the fore limb has two bones, the next has two, etc. All four limbs were supported by pads behind the toes. The fore limbs were much shorter than the stocky hind limbs, which resulted in an unusual posture. The tail appears to have been held well clear of the ground, while the head of Stegosaurus was positioned relatively low down, probably no higher than above the ground.
Plates
The most recognizable features of Stegosaurus are its dermal plates, which consisted of between 17 and 22 separate plates and flat spines. These were highly modified osteoderms (bony-cored scales), similar to those seen in crocodiles and many lizards today. They were not directly attached to the animal's skeleton, instead arising from the skin. The largest plates were found over the hips and could measure over wide and tall.
In a 2010 review of Stegosaurus species, Peter Galton suggested that the arrangement of the plates on the back may have varied between species, and that the pattern of plates as viewed in profile may have been important for species recognition. Galton noted that the plates in S. stenops have been found articulated in two staggered rows, rather than paired. Fewer S. ungulatus plates have been found, and none articulated, making the arrangement in this species more difficult to determine. However, the type specimen of S. ungulatus preserves two flattened spine-like plates from the tail that are nearly identical in shape and size, but are mirror images of each other, suggesting that at least these were arranged in pairs. Many of the plates are manifestly chiral, and no two plates of the same size and shape have been found within a single individual; however, plates have been correlated between individuals. Well-preserved integumentary impressions of the plates of Hesperosaurus show a smooth surface with long, parallel, shallow grooves. This indicates that the plates were covered in keratinous sheaths.
Classification and species
Like the spikes and shields of ankylosaurs, the bony plates and spines of stegosaurians evolved from the low-keeled osteoderms characteristic of basal thyreophorans. Galton (2019) interpreted plates of an armored dinosaur from the Lower Jurassic (Sinemurian-Pliensbachian) Lower Kota Formation of India as fossils of a member of Ankylosauria; the author argued that this finding indicates a probable early Early Jurassic origin for both Ankylosauria and its sister group Stegosauria.
The vast majority of stegosaurian dinosaurs thus far recovered belong to the Stegosauridae, which lived in the later part of the Jurassic and early Cretaceous, and which were defined by Paul Sereno as all stegosaurians more closely related to Stegosaurus than to Huayangosaurus. This group is widespread, with members across the Northern Hemisphere, Africa, and possibly South America. Stegosaurus is frequently recovered in a clade within the Stegosauridae called Stegosaurinae, usually including the taxa Wuerhosaurus and Hesperosaurus. The cladogram below displays the results of the "preferred tree" phylogenetic analysis of Raven et al. (2023), showing the position of the Stegosaurinae within Stegosauria and Eurypoda.
In 2017, Raven and Maidment published a phylogenetic analysis including almost every known stegosaurian genus. Their dataset was expanded upon in the following years with additional taxa. In their 2024 description of stegosaur fossil material from China's Hekou Group, Li et al. used a modified version of the dataset of Raven and Maidment to analyze the phylogenetic relations of the Stegosauria:
Species
Many of the species initially described have since been considered invalid or synonymous with earlier named species, leaving two well-known and one poorly known species. Confirmed Stegosaurus remains have been found in the Morrison Formation's stratigraphic zones 2–6, with additional remains possibly referable to Stegosaurus recovered from stratigraphic zone 1.
Stegosaurus stenops, meaning "narrow-faced roof lizard", was named by Marsh in 1887, with the holotype having been collected by Marshall Felch at Garden Park, north of Cañon City, Colorado, in 1886. This is the best-known species of Stegosaurus, mainly because its remains include at least one complete articulated skeleton. It had proportionately large, broad plates and rounded tail plates. Articulated specimens show that the plates were arranged in an alternating, staggered double row. S. stenops is known from at least 50 partial skeletons of adults and juveniles, one complete skull, and four partial skulls. It was shorter than other species, at . Its remains have been found in the Morrison Formation of Colorado, Wyoming, and Utah.
Stegosaurus ungulatus, meaning "hoofed roof lizard", was named by Marsh in 1879, from remains recovered at Como Bluff, Wyoming (Quarry 12, near Robber's Roost). It might be synonymous with S. stenops. At , it was the longest species within the genus Stegosaurus. A fragmentary Stegosaurus specimen discovered in Portugal and dating from the upper Kimmeridgian-lower Tithonian stage has been tentatively assigned to this species. Stegosaurus ungulatus can be distinguished from S. stenops by the presence of longer hind limbs, proportionately smaller, more pointed plates with wide bases and narrow tips, and by several small, flat, spine-like plates just before the spikes on the tail. These spine-like plates appear to have been paired, due to the presence of at least one pair that are identical but mirrored. S. ungulatus also appears to have had longer legs (femora) and hip bones than other species. The type specimen of S. ungulatus was discovered with eight spikes, though they were scattered away from their original positions. These have often been interpreted as indicating that the animal had four pairs of tail spikes. No specimens have been found with complete, articulated sets of tail spikes, and no additional specimens have been found that preserve eight spikes together. It is possible the extra pair of spikes came from a different individual; though no other extra bones were found with the specimen, more might be recovered if further digging were done at the original site. Specimens from other quarries (such as a tail from Quarry 13, now forming part of the composite skeleton AMNH 650 at the American Museum of Natural History), referred to S. ungulatus on the basis of their notched tail vertebrae, are preserved with only four tail spikes. The type specimen of S. ungulatus (YPM 1853) was incorporated into the first ever mounted skeleton of a stegosaur at the Peabody Museum of Natural History in 1910 by Richard Swann Lull.
It was initially mounted with paired plates set wide, above the base of the ribs, but was remounted in 1924 with two staggered rows of plates along the midline of the back. Additional specimens recovered from the same quarry by the United States National Museum of Natural History, including tail vertebrae and an additional large plate (USNM 7414), belong to the same individual as YPM 1853.
Stegosaurus sulcatus, meaning "furrowed roof lizard", was described by Marsh in 1887 based on a partial skeleton. It has traditionally been considered a synonym of S. armatus, though more recent studies suggest it is not. S. sulcatus is distinguished mainly by its unusually large, furrowed spikes with very large bases. A spike associated with the type specimen, originally thought to be a tail spike, may in fact come from the shoulder or hip, since its base is much larger than the corresponding tail vertebrae. A review published by Maidment and colleagues in 2008 regarded it as an indeterminate species possibly not even belonging to Stegosaurus at all, but to a different genus. Peter Galton suggested it should be considered a valid species due to its unique spikes.
Susannah Maidment and colleagues in 2008 proposed extensive alterations to the taxonomy of Stegosaurus. They advocated synonymizing S. stenops and S. ungulatus with S. armatus, and sinking Hesperosaurus and Wuerhosaurus into Stegosaurus, with their type species becoming Stegosaurus mjosi and Stegosaurus homheni, respectively. They regarded S. longispinus as dubious. Thus, their conception of Stegosaurus would include three valid species (S. armatus, S. homheni, and S. mjosi) and would range from the Late Jurassic of North America and Europe to the Early Cretaceous of Asia. However, this classification scheme was not followed by other researchers, and a 2017 cladistic analysis co-authored by Maidment with Thomas Raven rejects the synonymy of Hesperosaurus with Stegosaurus. In 2015, Maidment et al. revised their suggestion due to the recognition by Galton of S. armatus as a nomen dubium and its replacement by S. stenops as type species.
In 2024, Li and colleagues described specimen GSAU 201201, a partial skeleton of a stegosaur from the upper Hekou Group of Gansu Province, China (discovered in ), which dates to the Aptian–Albian ages of the Early Cretaceous. The specimen consists of three articulated cervical vertebrae with associated ribs, three dorsal vertebrae, thirteen ribs, a right forelimb including a partial humerus, ulna, and radius, and one dermal plate. Although certain features of the fossil material are different when compared to Wuerhosaurus and Stegosaurus stenops, Li et al. considered the new specimen as Stegosaurus sp. Fossils of the ankylosaur Taohelong were also found in the same layers of the Hekou Group.
Doubtful species and junior synonyms
Stegosaurus armatus, meaning "armored roof lizard", was the first species to be found and the original type species named by O.C. Marsh in 1877. It is known from a partial skeleton, and more than 30 fragmentary specimens have been referred to it. However, the type specimen was very fragmentary, consisting only of a partial tail, hips, and leg, parts of some back vertebrae, and a single fragmentary plate (the presence of which was used to give the animal its name). No other plates or spikes were found, and the entire front half of the animal appears not to have been preserved. Because the type specimen is very fragmentary, it is extremely difficult to compare it with other species based on better specimens, and it is now generally considered to be a nomen dubium. Because of this, it was replaced by S. stenops as the type species of Stegosaurus in a ruling of the ICZN in 2013.
Stegosaurus "affinis", named by Marsh in 1881, is only known from a pubis which has since been lost. Because Marsh did not provide an adequate description of the bone with which to distinguish a new species, this name is considered a nomen nudum.
Diracodon laticeps was described by Marsh in 1881, from some jawbone fragments. Bakker resurrected D. laticeps in 1986 as a senior synonym of S. stenops, although others note that the material is not diagnostic and is only referable to Stegosaurus sp., making it a nomen dubium.
Stegosaurus duplex, meaning "two plexus roof lizard" (in allusion to the greatly enlarged neural canal of the sacrum which Marsh characterized as a "posterior brain case"), was named by Marsh in 1887 (including the holotype specimen). The disarticulated bones were actually collected in 1879 by Edward Ashley at Como Bluff. Marsh initially distinguished it from S. ungulatus based on the fact that each sacral (hip) vertebra bore its own rib, which he claimed was unlike the anatomy of S. ungulatus; however, the sacrum of S. ungulatus had not actually been discovered. Marsh also suggested that S. duplex may have lacked armor, since no plates or spikes were found with the specimen, though a single spike may actually have been present nearby, and re-examination of the site maps has shown that the entire specimen was found highly disarticulated and scattered. It is generally considered a synonym of S. ungulatus today, and parts of the specimen were actually incorporated into the Peabody Museum S. ungulatus skeletal mount in 1910.
Reassigned species
Stegosaurus marshi, which was described by Lucas in 1901, was renamed Hoplitosaurus in 1902.
Stegosaurus priscus, described by Nopcsa in 1911, was reassigned to Lexovisaurus, and is now the type species of Loricatosaurus.
Stegosaurus longispinus was named by Charles W. Gilmore in 1914 based on a fragmentary postcranial skeleton that has largely been lost. It is now the type species of the genus Alcovasaurus, though it has been referred to Miragaia.
Stegosaurus madagascariensis from Madagascar is known solely from teeth and was described by Piveteau in 1926. The teeth were variously attributed to a stegosaur, the theropod Majungasaurus, a hadrosaur, or even a crocodylian, but are now considered to possibly belong to an ankylosaur.
Stegosaurus homheni is an alternative combination for the Chinese Cretaceous stegosaur Wuerhosaurus homheni, which was described based on a partial postcranial skeleton in 1973 by Dong Zhiming. It was referred to Stegosaurus in 2008 by Maidment et al., but some still consider the species to belong in its own genus (Xing, L., Lockley, M. G., Persons, W. S., IV, Klein, H., Romilio, A., Wang, D., & Wang, M. (2021). Stegosaur track assemblage from Xinjiang, China, featuring the smallest known stegosaur record. Palaios, 36(2), 68–76).
Stegosaurus mjosi was described as Hesperosaurus mjosi by Carpenter et al. in 2001 based on a partial skull and incomplete postcranial skeleton from the Morrison Formation of Johnson County, Wyoming. The species was referred to Stegosaurus chiefly by Maidment et al. starting in 2008, but Hesperosaurus has been the more popular combination since the discovery of more remains.
Paleobiology
Posture and movement
Soon after its discovery, Marsh considered Stegosaurus to have been bipedal, due to its short forelimbs. He had changed his mind, however, by 1891, after considering the heavy build of the animal.
Although Stegosaurus is now undoubtedly considered to have been quadrupedal, some discussion has occurred over whether it could have reared up on its hind legs, using its tail to form a tripod with its hind limbs to browse for higher foliage. This has been proposed by Bakker and opposed by Carpenter. A study by Mallison (2010) found support for a rearing posture in Kentrosaurus, though not for the tail's ability to act as part of a tripod. Stegosaurus had short fore limbs in relation to its hind limbs. Furthermore, within the hind limbs, the lower section (comprising the tibia and fibula) was short compared with the femur. This suggests it could not walk very fast, as the stride of the back legs at speed would have overtaken the front legs, giving a maximum speed of . Tracks discovered by Matthew Mossbrucker (Morrison Natural History Museum, Colorado) suggest that Stegosaurus lived and traveled in multiple-age herds. One group of tracks is interpreted as showing four or five baby stegosaurs moving in the same direction, while another has a juvenile stegosaur track with an adult track overprinting it.
As the plates would have been obstacles during copulation, the female stegosaur may have lain on her side as the male mounted her from above and behind. Another suggestion is that the female would stand on all fours but squat down on her fore limbs and raise her tail up and out of the male's way as he supported his fore limbs on her hips. However, their reproductive organs still could not have touched, as there is no evidence of muscle attachments for a mobile penis, nor of a baculum, in male dinosaurs.
Plate function
The function of Stegosaurus' plates has been much debated. Marsh suggested that they functioned as a form of armor, though Davitashvili (1961) disputed this, claiming that they were too fragile and ill-placed for defensive purposes, leaving the animal's sides unprotected. Nevertheless, others have continued to support a defensive function. Bakker suggested in 1986 that the plates were covered in horn, comparing the surface of the fossilized plates to the bony cores of horns in other animals known or thought to bear horns. Christiansen and Tschopp (2010), having studied a well-preserved specimen of Hesperosaurus with skin impressions, concluded that the plates were covered in a keratin sheath, which would have strengthened the plate as a whole and provided it with sharp cutting edges. Bakker stated that Stegosaurus could flip its osteoderms from one side to another to present a predator with an array of spikes and blades that would impede it from closing sufficiently to attack the Stegosaurus effectively; he contended that the plates were too narrow to stand erect easily in a way useful for display without continuous muscular effort. Mobility of the plates, however, has been disputed by other paleontologists.
Another possible function of the plates is that they may have helped to control the body temperature of the animal, in a similar way to the sails of the pelycosaurs Dimetrodon and Edaphosaurus (and modern elephant and rabbit ears). The plates had blood vessels running through grooves, and air flowing around the plates would have cooled the blood. Buffrénil et al. (1986) found "extreme vascularization of the outer layer of bone", which was seen as evidence that the plates "acted as thermoregulatory devices". Likewise, 2010 structural comparisons of Stegosaurus plates to Alligator osteoderms support the conclusion that the plates of Stegosaurus had the potential to play a thermoregulatory role.
The thermoregulation hypothesis has been seriously questioned, since other stegosaurs, such as Kentrosaurus, had low-surface-area spikes rather than plates, implying that cooling was not important enough to require specialized structural formations such as plates. However, it has also been suggested that the plates could have helped the animal increase heat absorption from the sun. Since a cooling trend occurred towards the end of the Jurassic, a large ectothermic reptile might have used the increased surface area afforded by the plates to absorb radiation from the sun. Christiansen and Tschopp (2010) state that the presence of a smooth, insulating keratin covering would have hampered thermoregulation, but such a function cannot be entirely ruled out, as extant cattle and ducks use their horns and beaks to dump excess heat despite the keratin covering. Histological surveys of plate microstructure have instead attributed the vascularization to the need to transport nutrients for rapid plate growth.
The vascular system of the plates has been theorized to have played a role in threat displays, as Stegosaurus could have pumped blood into them, causing them to "blush" and give a colorful red warning. However, the stegosaur plates were covered in horn rather than skin. The plates' large size suggests that they may have served to increase the apparent height of the animal, either to intimidate enemies or to impress other members of the same species in some form of sexual display. A 2015 study of the shapes and sizes of Hesperosaurus plates suggested that they were sexually dimorphic, with wide plates belonging to males and taller plates belonging to females. Christiansen and Tschopp (2010) proposed that the display function would have been reinforced by the horny sheath, which would have increased the visible surface, and such horn structures are often brightly colored. Some have suggested that plates in stegosaurs were used to allow individuals to identify members of their own species. The use of exaggerated structures in dinosaurs for species identification has been questioned, as no such function exists in modern species.
Thagomizer (tail spikes)
There has been debate about whether the tail spikes were used only for display, as posited by Gilmore in 1914, or used as a weapon. Robert Bakker noted the tail was likely to have been much more flexible than that of other dinosaurs, as it lacked ossified tendons, thus lending credence to the idea of the tail as a weapon. However, as Carpenter has noted, the plates overlap so many tail vertebrae that movement would have been limited. Bakker also observed that Stegosaurus could have maneuvered its rear easily by keeping its large hind limbs stationary and pushing off with its very powerfully muscled but short forelimbs, allowing it to swivel deftly to deal with attack.
More recently, a study of the tail spikes by McWhinney et al., which showed a high incidence of trauma-related damage, lends more weight to the position that the spikes were indeed used in combat. This study found that 9.8% of Stegosaurus specimens examined had injuries to their tail spikes. Additional support for this idea comes from a punctured tail vertebra of an Allosaurus into which a tail spike fits perfectly; the damage shows that the spike entered at an angle from below and displaced a piece of the process upward, and remodeled bone on the underside of the process shows that an infection developed. S. stenops had four dermal spikes, each about long. Discoveries of articulated stegosaur armor show that, at least in some species, these spikes protruded horizontally from the tail, not vertically as is often depicted. Initially, Marsh described S. ungulatus as having eight spikes in its tail, unlike S. stenops. However, recent research re-examined this and concluded that this species also had four.
Growth and metabolism
Juveniles of Stegosaurus have been preserved, probably illustrating the growth of the genus. The two juveniles are both relatively small, with the smaller individual being long, and the larger having a length of . The specimens can be identified as immature because the scapula and coracoid are unfused, as are the bones of the lower hind limbs. Also, the pelvic region of the specimens is similar to that of Kentrosaurus juveniles. One 2009 study of Stegosaurus specimens of various sizes found that the plates and spikes had slower histological growth than the skeleton, at least until the dinosaur reached its mature size.
A 2013 study concluded, based on the rapid deposition of highly vascularised fibrolamellar bone, that Kentrosaurus had a quicker growth rate than Stegosaurus, contradicting the general rule that larger dinosaurs grew faster than smaller ones.
A 2022 study by Wiemann and colleagues of various dinosaur genera including Stegosaurus suggests that it had an ectothermic (cold blooded) or gigantothermic metabolism, on par with that of modern reptiles. This was uncovered using the spectroscopy of lipoxidation signals, which are byproducts of oxidative phosphorylation and correlate with metabolic rates. They suggested that such metabolisms may have been common for ornithischian dinosaurs in general, with the group evolving towards ectothermy from an ancestor with an endothermic (warm blooded) metabolism.
Diet
Stegosaurus and related genera were herbivores. However, their teeth and jaws were very different from those of other herbivorous ornithischian dinosaurs, suggesting a different feeding strategy that is not yet well understood. The other ornithischians possessed teeth capable of grinding plant material and a jaw structure capable of movements in planes other than simply orthal (i.e. not only the simple up-down motion to which stegosaur jaws were likely limited). Unlike the sturdy jaws and grinding teeth common to its fellow ornithischians, Stegosaurus (and all stegosaurians) had small, peg-shaped teeth that have been observed with horizontal wear facets associated with tooth-food contact, and their unusual jaws were probably capable of only orthal (up-down) movements. Their teeth were "not tightly pressed together in a block for efficient grinding", and no evidence in the fossil record of stegosaurians indicates use of gastroliths—the stone(s) some dinosaurs (and some present-day bird species) ingested—to aid the grinding process, so how exactly Stegosaurus obtained and processed the amount of plant material required to sustain its size remains "poorly understood".
The stegosaurians were widely distributed geographically in the Late Jurassic. Palaeontologists believe Stegosaurus would have eaten plants such as mosses, ferns, horsetails, cycads, and conifers. One hypothesized feeding strategy considers them to have been low-level browsers, eating low-growing foliage of various nonflowering plants. This scenario has Stegosaurus foraging at most 1 m above the ground. Conversely, if Stegosaurus could have raised itself on two legs, as suggested by Bakker, then it could have browsed on vegetation quite high up, with adults being able to forage up to above the ground.
A detailed computer analysis of the biomechanics of Stegosaurus's feeding behavior was performed in 2010, using two different three-dimensional models of Stegosaurus teeth given realistic physics and properties. Bite force was also calculated using these models and the known skull proportions of the animal, as well as simulated tree branches of different size and hardness. The resultant bite forces calculated for Stegosaurus were 140.1 newtons (N), 183.7 N, and 275 N (for anterior, middle and posterior teeth, respectively), which means its bite force was less than half that of a Labrador retriever. Stegosaurus could have easily bitten through smaller green branches, but would have had difficulty with anything over 12 mm in diameter. Stegosaurus, therefore, probably browsed primarily among smaller twigs and foliage, and would have been unable to handle larger plant parts unless the animal was capable of biting much more efficiently than predicted in this study. However, a 2016 study indicates that Stegosaurus bite strength was stronger than previously believed. Comparisons were made between it (represented by a specimen known as "Sophie" from the United Kingdom's Natural History Museum) and two other herbivorous dinosaurs, Erlikosaurus and Plateosaurus, to determine if all three had similar bite forces and similar niches. The study revealed that the subadult Stegosaurus specimen had a bite similar in strength to that of modern herbivorous mammals, in particular cattle and sheep. Based on this data, it is likely Stegosaurus also ate woodier, tougher plants such as cycads, perhaps even acting as a means of spreading cycad seeds.
"Second brain"
At one time, stegosaurs were described as having a "second brain" in their hips. Soon after describing Stegosaurus, Marsh noted a large canal in the hip region of the spinal cord, which could have accommodated a structure up to 20 times larger than the famously small brain. This has led to the influential idea that dinosaurs like Stegosaurus had a "second brain" in the tail, which may have been responsible for controlling reflexes in the rear portion of the body. This "brain" was proposed to have given a Stegosaurus a temporary boost when it was under threat from predators.
This space, however, is more likely to have served other purposes. The sacro-lumbar expansion is not unique to stegosaurs, nor even ornithischians. It is also present in birds. In their case, it contains what is called the glycogen body, a structure whose function is not definitely known, but which is postulated to facilitate the supply of glycogen to the animal's nervous system. It also may function as a balance organ, or reservoir of compounds to support the nervous system.
Paleoecology
The Morrison Formation is interpreted as a semiarid environment with distinct wet and dry seasons, and flat floodplains. Vegetation varied from river-lining forests of conifers, tree ferns, and ferns (gallery forests), to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, ferns, cycads, ginkgoes, and several families of conifers. Animal fossils discovered include bivalves, snails, ray-finned fishes, frogs, salamanders, turtles like Glyptops, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs like Hoplosuchus, several species of pterosaurs such as Harpactognathus and Mesadactylus, numerous dinosaur species, and early mammals such as docodonts (like Docodon), multituberculates, symmetrodonts, and triconodonts.
Dinosaurs that lived alongside Stegosaurus included theropods Allosaurus, Saurophaganax, Torvosaurus, Ceratosaurus, Marshosaurus, Stokesosaurus, Ornitholestes, Coelurus and Tanycolagreus. Sauropods dominated the region, and included Brontosaurus, Brachiosaurus, Apatosaurus, Diplodocus, Camarasaurus, and Barosaurus. Other ornithischians included Camptosaurus, Gargoyleosaurus, Dryosaurus, and Nanosaurus. Stegosaurus is commonly found at the same sites as Allosaurus, Apatosaurus, Camarasaurus, and Diplodocus. Stegosaurus may have preferred drier settings than these other dinosaurs.
Cultural significance
One of the most recognizable of all dinosaurs, Stegosaurus has been depicted on film, in cartoons and comics and as children's toys. Due to the fragmentary nature of most early Stegosaurus fossil finds, it took many years before reasonably accurate restorations of this dinosaur could be produced. The earliest popular image of Stegosaurus was an engraving produced by the French science illustrator Auguste-Michel Jobin, which appeared in the November 1884 issue of Scientific American and elsewhere, and which depicted the dinosaur amid a speculative Morrison age Jurassic landscape. Jobin restored the Stegosaurus as bipedal and long-necked, with the plates arranged along the tail and the back covered in spikes. This covering of spikes might have been based on a misinterpretation of the teeth, which Marsh had noted were oddly shaped, cylindrical, and found scattered, such that he thought they might turn out to be small dermal spines.
Marsh published his more accurate skeletal reconstruction of Stegosaurus in 1891, and within a decade Stegosaurus had become among the most-illustrated types of dinosaur. Artist Charles R. Knight published his first illustration of Stegosaurus ungulatus based on Marsh's skeletal reconstruction in a November 1897 issue of The Century Magazine. This illustration would later go on to form the basis of the stop-motion puppet used in the 1933 film King Kong. Like Marsh's reconstruction, Knight's first restoration had a single row of large plates, though he next used a double row for his better-known 1901 painting, produced under the direction of Frederic Lucas. Again under Lucas, Knight revised his version of Stegosaurus two years later, producing a model with a staggered double row of plates. Knight would go on to paint a stegosaur with a staggered double plate row in 1927 for the Field Museum of Natural History, and was followed by Rudolph F. Zallinger, who painted Stegosaurus this way in his "Age of Reptiles" mural at the Peabody Museum in 1947.

Stegosaurus made its major public debut as a papier-mâché model commissioned by the U.S. National Museum of Natural History for the 1904 Louisiana Purchase Exposition. The model was based on Knight's latest miniature with the double row of staggered plates, and was exhibited in the United States Government Building at the exposition in St. Louis before being relocated to Portland, Oregon, for the Lewis and Clark Centennial Exposition in 1905. The model was moved to the Smithsonian National Museum of Natural History (now the Arts and Industries Building) in Washington, D.C. along with other prehistory displays, and to the current National Museum of Natural History building in 1911. Following renovations to the museum in the 2010s, the model was moved once again for display at the Museum of the Earth in Ithaca, New York.
On July 17, 2024, a large Stegosaurus skeleton, "Apex", fetched $44.6m (£34m) at a Sotheby's auction in New York City, the most ever paid for a fossil.
| Biology and health sciences | Dinosaurs and prehistoric reptiles | null |
192904 | https://en.wikipedia.org/wiki/Ultimate%20fate%20of%20the%20universe | Ultimate fate of the universe | The ultimate fate of the universe is a topic in physical cosmology, whose theoretical restrictions allow possible scenarios for the evolution and ultimate fate of the universe to be described and evaluated. Based on available observational evidence, deciding the fate and evolution of the universe has become a valid cosmological question, being beyond the mostly untestable constraints of mythological or theological beliefs. Several possible futures have been predicted by different scientific hypotheses, including that the universe might have existed for a finite or infinite duration, or towards explaining the manner and circumstances of its beginning.
Observations made by Edwin Hubble during the 1920s–1950s found that galaxies appeared to be moving away from each other, leading to the currently accepted Big Bang theory. This suggests that the universe began very dense about 13.787 billion years ago, and it has expanded and (on average) become less dense ever since. Confirmation of the Big Bang mostly depends on knowing the rate of expansion, average density of matter, and the physical properties of the mass–energy in the universe.
There is a strong consensus among cosmologists that the shape of the universe is "flat" (parallel lines stay parallel) and that it will continue to expand forever.
Factors that need to be considered in determining the universe's origin and ultimate fate include the average motions of galaxies, the shape and structure of the universe, and the amount of dark matter and dark energy that the universe contains.
Emerging scientific basis
Theory
The theoretical scientific exploration of the ultimate fate of the universe became possible with Albert Einstein's 1915 theory of general relativity. General relativity can be employed to describe the universe on the largest possible scale. There are several possible solutions to the equations of general relativity, and each solution implies a possible ultimate fate of the universe.
Alexander Friedmann proposed several solutions in 1922, as did Georges Lemaître in 1927. In some of these solutions, the universe has been expanding from an initial singularity which was, essentially, the Big Bang.
Observation
In 1929, Edwin Hubble published his conclusion, based on his observations of Cepheid variable stars in distant galaxies, that the universe was expanding. From then on, the beginning of the universe and its possible end have been the subjects of serious scientific investigation.
Big Bang and Steady State theories
In 1927, Georges Lemaître set out a theory that has since come to be called the Big Bang theory of the origin of the universe. In 1948, Fred Hoyle set out his opposing Steady State theory in which the universe continually expanded but remained statistically unchanged as new matter is constantly created. These two theories were active contenders until the 1965 discovery, by Arno Allan Penzias and Robert Woodrow Wilson, of the cosmic microwave background radiation, a fact that is a straightforward prediction of the Big Bang theory, and one that the original Steady State theory could not account for. As a result, the Big Bang theory quickly became the most widely held view of the origin of the universe.
Cosmological constant
Einstein and his contemporaries believed in a static universe. When Einstein found that his general relativity equations could easily be solved in such a way as to allow the universe to be expanding at the present and contracting in the far future, he added to those equations what he called a cosmological constant — essentially a constant energy density, unaffected by any expansion or contraction — whose role was to offset the effect of gravity on the universe as a whole in such a way that the universe would remain static. However, after Hubble announced his conclusion that the universe was expanding, Einstein would write that his cosmological constant was "the greatest blunder of my life."
Density parameter
An important parameter in fate of the universe theory is the density parameter, omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes. These three adjectives refer to the overall geometry of the universe, and not to the local curving of spacetime caused by smaller clumps of mass (for example, galaxies and stars). If the primary content of the universe is inert matter, as in the dust models popular for much of the 20th century, there is a particular fate corresponding to each geometry. Hence cosmologists aimed to determine the fate of the universe by measuring Ω, or equivalently the rate at which the expansion was decelerating.
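Written out explicitly (the standard definition, stated here for reference), the density parameter compares the mean density ρ to the critical density set by the expansion rate H and Newton's constant G:

```latex
\Omega = \frac{\rho}{\rho_c}, \qquad \rho_c = \frac{3H^2}{8\pi G}
```

Ω = 1 corresponds to the flat geometry, Ω < 1 to the open geometry, and Ω > 1 to the closed geometry.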
Repulsive force
Starting in 1998, observations of supernovas in distant galaxies have been interpreted as consistent with a universe whose expansion is accelerating. Subsequent cosmological theorizing has been designed so as to allow for this possible acceleration, nearly always by invoking dark energy, which in its simplest form is just a positive cosmological constant. In general, dark energy is a catch-all term for any hypothesized field with negative pressure, usually with a density that changes as the universe expands. Some cosmologists are studying whether dark energy which varies in time (due to a portion of it being caused by a scalar field in the early universe) can solve the crisis in cosmology. Upcoming galaxy surveys from the Euclid, Nancy Grace Roman and James Webb space telescopes (and data from next-generation ground-based telescopes) are expected to further develop our understanding of dark energy (specifically whether it is best understood as a constant energy intrinsic to space, as a time varying quantum field or as something else entirely).
Role of the shape of the universe
The current scientific consensus of most cosmologists is that the ultimate fate of the universe depends on its overall shape, how much dark energy it contains and on the equation of state which determines how the dark energy density responds to the expansion of the universe. Recent observations conclude, from 7.5 billion years after the Big Bang, that the expansion rate of the universe has probably been increasing, commensurate with the Open Universe theory. However, measurements made by the Wilkinson Microwave Anisotropy Probe suggest that the universe is either flat or very close to flat.
Closed universe
If Ω > 1, the geometry of space is closed like the surface of a sphere. The sum of the angles of a triangle exceeds 180 degrees and there are no parallel lines; all lines eventually meet. The geometry of the universe is, at least on a very large scale, elliptic.
In a closed universe, gravity eventually stops the expansion of the universe, after which it starts to contract until all matter in the universe collapses to a point, a final singularity termed the "Big Crunch", the opposite of the Big Bang. If, however, the universe contains dark energy, then the resulting repulsive force may be sufficient to cause the expansion of the universe to continue forever—even if Ω > 1. This is the case in the currently accepted Lambda-CDM model, where dark energy is found through observations to account for roughly 68% of the total energy content of the universe. According to the Lambda-CDM model, the universe would need to have an average matter density roughly seventeen times greater than its measured value today in order for the effects of dark energy to be overcome and the universe to eventually collapse. This is in spite of the fact that, according to the Lambda-CDM model, any increase in matter density would result in Ω > 1.
Open universe
If Ω < 1, the geometry of space is open, i.e., negatively curved like the surface of a saddle. The angles of a triangle sum to less than 180 degrees, and lines that do not meet are never equidistant; they have a point of least distance and otherwise grow apart. The geometry of such a universe is hyperbolic.
Even without dark energy, a negatively curved universe expands forever, with gravity negligibly slowing the rate of expansion. With dark energy, the expansion not only continues but accelerates. The ultimate fate of an open universe with dark energy is either universal heat death or a "Big Rip" where the acceleration caused by dark energy eventually becomes so strong that it completely overwhelms the effects of the gravitational, electromagnetic and strong binding forces. Conversely, a negative cosmological constant, which would correspond to a negative energy density and positive pressure, would cause even an open universe to re-collapse to a big crunch.
Flat universe
If the average density of the universe exactly equals the critical density so that Ω = 1, then the geometry of the universe is flat: as in Euclidean geometry, the sum of the angles of a triangle is 180 degrees and parallel lines continuously maintain the same distance. Measurements from the Wilkinson Microwave Anisotropy Probe have confirmed the universe is flat within a 0.4% margin of error.
In the absence of dark energy, a flat universe expands forever but at a continually decelerating rate, with expansion asymptotically approaching zero. With dark energy, the expansion rate of the universe initially slows, due to the effects of gravity, but eventually increases, and the ultimate fate of the universe becomes the same as that of an open universe.
Theories about the end of the universe
The fate of the universe may be determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below. However, observations are not conclusive, and alternative models are still possible.
Big Freeze or Heat Death
The heat death of the universe, also known as the Big Freeze (or Big Chill), is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature. Under this scenario, the universe eventually reaches a state of maximum entropy in which everything is evenly distributed and there are no energy gradients—which are needed to sustain information processing, one form of which is life. This scenario has gained ground as the most likely fate.
In this scenario, stars are expected to form normally for 10¹² to 10¹⁴ (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, but they will disappear over time as they emit Hawking radiation. Over infinite time, there could be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem.
The heat death scenario is compatible with any of the three spatial models, but it requires that the universe reaches an eventual temperature minimum. Without dark energy, it could occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe.
Big Rip
The current Hubble constant defines a rate of acceleration of the universe not large enough to destroy local structures like galaxies, which are held together by gravity, but large enough to increase the space between them. A steady increase in the Hubble constant to infinity would result in all material objects in the universe, starting with galaxies and eventually (in a finite time) all forms, no matter how small, disintegrating into unbound elementary particles, radiation and beyond. As the energy density, scale factor and expansion rate become infinite, the universe ends as what is effectively a singularity.
In the special case of phantom dark energy, which has supposed negative kinetic energy that would result in a higher rate of acceleration than other cosmological constants predict, a more sudden big rip could occur.
Big Crunch
The Big Crunch hypothesis is a symmetric view of the ultimate fate of the universe. Just as the theorized Big Bang started as a cosmological expansion, this theory assumes that the average density of the universe will be enough to stop its expansion and the universe will begin contracting. The result is unknown; a simple estimation would have all the matter and spacetime in the universe collapse into a dimensionless singularity back into how the universe started with the Big Bang, but at these scales unknown quantum effects need to be considered (see Quantum gravity). Recent evidence suggests that this scenario is unlikely but has not been ruled out, as measurements have been available only over a relatively short period of time and could reverse in the future.
This scenario allows the Big Bang to occur immediately after the Big Crunch of a preceding universe. If this happens repeatedly, it creates a cyclic model, which is also known as an oscillatory universe. The universe could then consist of an infinite sequence of finite universes, with each finite universe ending with a Big Crunch that is also the Big Bang of the next universe. A problem with the cyclic universe is that it does not reconcile with the second law of thermodynamics, as entropy would build up from oscillation to oscillation and cause the eventual heat death of the universe. Current evidence also indicates the universe is not closed. This has caused cosmologists to abandon the oscillating universe model. A somewhat similar idea is embraced by the cyclic model, but this idea evades heat death because of an expansion of the branes that dilutes entropy accumulated in the previous cycle.
Big Bounce
The Big Bounce is a theorized scientific model related to the beginning of the known universe. It derives from the oscillatory universe or cyclic repetition interpretation of the Big Bang where the first cosmological event was the result of the collapse of a previous universe.
According to one version of the Big Bang theory of cosmology, in the beginning the universe was infinitely dense. Such a description seems to be at odds with other more widely accepted theories, especially quantum mechanics and its uncertainty principle. Therefore, quantum mechanics has given rise to an alternative version of the Big Bang theory, specifically that the universe tunneled into existence and had a finite density consistent with quantum mechanics, before evolving in a manner governed by classical physics. Also, if the universe is closed, this theory would predict that once this universe collapses it will spawn another universe in an event similar to the Big Bang after a universal singularity is reached or a repulsive quantum force causes re-expansion.
In simple terms, this theory states that the universe will continuously repeat the cycle of a Big Bang, followed by a Big Crunch.
Cosmic uncertainty
Each possibility described so far is based on a simple form for the dark energy equation of state. However, as the name is meant to imply, little is now known about the physics of dark energy. If the theory of inflation is true, the universe went through an episode dominated by a different form of dark energy in the first moments of the Big Bang, but inflation ended, indicating an equation of state more complex than those assumed for present-day dark energy. It is possible that the dark energy equation of state could change again, resulting in an event that would have consequences which are difficult to predict or parameterize. As the nature of dark energy and dark matter remains enigmatic, even hypothetical, the possibilities surrounding their coming role in the universe are unknown.
Other possible fates of the universe
There are also some possible events, such as the Big Slurp, which would seriously harm the universe, although the universe as a whole would not be completely destroyed as a result.
Big Slurp
This theory posits that the universe currently exists in a false vacuum and that it could become a true vacuum at any moment.
In order to best understand the false vacuum collapse theory, one must first understand the Higgs field which permeates the universe. Much like an electromagnetic field, it varies in strength based upon its potential. A true vacuum exists so long as the universe exists in its lowest energy state, in which case the false vacuum theory is irrelevant. However, if the vacuum is not in its lowest energy state (a false vacuum), it could tunnel into a lower-energy state. This is called vacuum decay. This has the potential to fundamentally alter the universe: in some scenarios, even the various physical constants could have different values, severely affecting the foundations of matter, energy, and spacetime. It is also possible that all structures will be destroyed instantaneously, without any forewarning.
However, only a portion of the universe would be destroyed by the Big Slurp while most of the universe would still be unaffected, because galaxies located further than 4,200 megaparsecs (13 billion light-years) apart are moving away from each other faster than the speed of light, while the Big Slurp itself cannot expand faster than the speed of light. To place this in context, the size of the observable universe is currently about 46 billion light-years in all directions from Earth. The universe is thought to be that size or larger.
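The 4,200-megaparsec figure is roughly the Hubble distance, the separation at which the recession speed implied by Hubble's law formally reaches the speed of light. A quick sketch (the round H0 = 70 km/s/Mpc here is an assumed value; the article's slightly smaller figure corresponds to a different choice of H0):

```python
# Hubble distance: separation at which the Hubble's-law recession speed v = H0 * d
# equals the speed of light c, i.e. d = c / H0.
C_KM_S = 299_792.458      # speed of light in km/s
H0 = 70.0                 # Hubble constant in km/s/Mpc (assumed round value)
LY_PER_MPC = 3.2616e6     # light-years per megaparsec

d_mpc = C_KM_S / H0               # ~4,300 Mpc
d_gly = d_mpc * LY_PER_MPC / 1e9  # ~14 billion light-years
print(f"Hubble distance: {d_mpc:.0f} Mpc, about {d_gly:.0f} billion light-years")
```

With H0 closer to 72 km/s/Mpc the distance drops toward the quoted 4,200 Mpc, so the figures are consistent within the uncertainty on H0.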
Observational constraints on theories
Choosing among these rival scenarios is done by 'weighing' the universe, for example, measuring the relative contributions of matter, radiation, dark matter, and dark energy to the critical density. More concretely, competing scenarios are evaluated against data on galaxy clustering and distant supernovas, and on the anisotropies in the cosmic microwave background.
| Physical sciences | Physical cosmology | null |
192989 | https://en.wikipedia.org/wiki/Big%20Rip | Big Rip | In physical cosmology, the Big Rip is a hypothetical cosmological model concerning the ultimate fate of the universe, in which the matter of the universe, from stars and galaxies to atoms and subatomic particles, and even spacetime itself, is progressively torn apart by the expansion of the universe at a certain time in the future, until distances between particles will infinitely increase.
According to the standard model of cosmology, the scale factor of the universe is accelerating, and, in the future era of cosmological constant dominance, will increase exponentially. However, this expansion is similar for every moment of time (hence the exponential law – the expansion of a local volume is the same number of times over the same time interval), and is characterized by an unchanging, small Hubble constant, effectively ignored by any bound material structures. By contrast, in the Big Rip scenario the Hubble constant increases to infinity in a finite time. According to recent studies, the universe is currently set for a constant expansion and heat death, because the equation of state parameter w = −1.
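The contrast can be stated in formulas. With a constant Hubble parameter the scale factor grows exponentially, whereas in a phantom-dominated universe (w < −1) a standard solution of the Friedmann equations has the scale factor, and with it H, diverging at a finite time t_rip:

```latex
a(t) \propto e^{Ht}
\quad\text{vs.}\quad
a(t) \propto (t_{\mathrm{rip}} - t)^{\frac{2}{3(1+w)}},\qquad
H(t) = \frac{\dot a}{a} = \frac{2}{3\,|1+w|\,(t_{\mathrm{rip}} - t)} \to \infty .
```

Since 1 + w < 0, the exponent on (t_rip − t) is negative, so the scale factor itself blows up as t approaches t_rip.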
The possibility of a sudden rip singularity occurs only for hypothetical matter (phantom energy) with implausible physical properties.
Overview
The truth of the hypothesis relies on the type of dark energy present in our universe. The type that could prove this hypothesis is a constantly increasing form of dark energy, known as phantom energy. If the dark energy in the universe increases without limit, it could overcome all forces that hold the universe together. The key value is the equation of state parameter w, the ratio between the dark energy pressure and its energy density. If −1 < w < 0, the expansion of the universe tends to accelerate, but the dark energy tends to dissipate over time, and the Big Rip does not happen. Phantom energy has w < −1, which means that its density increases as the universe expands.
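The dependence on w can be made explicit: for a dark-energy component with constant equation of state w in a Friedmann universe, the energy density scales with the scale factor a as

```latex
\rho_{\mathrm{DE}} \propto a^{-3(1+w)}
```

so for −1 < w < 0 the exponent is negative and the density dilutes away, w = −1 gives a constant density (a cosmological constant), and w < −1 makes the density grow as the universe expands, which is the phantom-energy case.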
A universe dominated by phantom energy is an accelerating universe, expanding at an ever-increasing rate. However, this implies that the size of the observable universe and the cosmological event horizon is continually shrinking – the distance at which objects can influence an observer becomes ever closer, and the distance over which interactions can propagate becomes ever shorter. When the size of the horizon becomes smaller than any particular structure, no interaction by any of the fundamental forces can occur between the most remote parts of the structure, and the structure is "ripped apart". The progression of time itself will stop. The model implies that after a finite time there will be a final singularity, called the "Big Rip", in which the observable universe eventually reaches zero size and all distances diverge to infinite values.
The authors of this hypothesis, led by Robert R. Caldwell of Dartmouth College, calculate the time from the present to the Big Rip to be

t_rip − t_0 ≈ 2 / (3 |1 + w| H0 √(1 − Ωm))

where w is defined above, H0 is Hubble's constant and Ωm is the present value of the density of all the matter in the universe.
Observations of galaxy cluster speeds by the Chandra X-ray Observatory seem to suggest the value of w is between approximately −0.907 and −1.075, meaning the Big Rip cannot be definitively ruled out. Based on the above equation, if the observation determines that the value of w is less than −1, but greater than or equal to −1.075, the Big Rip would occur approximately 152 billion years into the future at the earliest. More recent data from Planck mission indicates the value of w to be −1.028 (±0.031), pushing the earliest possible time of Big Rip to be approximately 200 billion years into the future.
Authors' example
In their paper, the authors consider a hypothetical example with w = −1.5, H0 = 70 km/s/Mpc, and Ωm = 0.3, in which case the Big Rip would happen approximately 22 billion years from the present. In this scenario, galaxies would first be separated from each other about 200 million years before the Big Rip. About 60 million years before the Big Rip, galaxies would begin to disintegrate as gravity becomes too weak to hold them together. Planetary systems like the Solar System would become gravitationally unbound about three months before the Big Rip, and planets would fly off into the rapidly expanding universe. In the last minutes, stars and planets would be torn apart, and the now-dispersed atoms would be destroyed about 10⁻¹⁹ seconds before the end (the atoms will first be ionized as electrons fly off, followed by the dissociation of the atomic nuclei). At the time the Big Rip occurs, even spacetime itself would be ripped apart and the scale factor would be infinite.
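The rip-time formula, t_rip − t_0 ≈ 2 / (3 |1 + w| H0 √(1 − Ωm)), can be checked numerically against this example (a sketch; the unit-conversion constants are rounded):

```python
import math

def big_rip_time_gyr(w, h0_km_s_mpc, omega_m):
    """Time from the present to the Big Rip, in billions of years, using
    t_rip - t_0 = 2 / (3 |1 + w| H0 sqrt(1 - Omega_m))."""
    KM_PER_MPC = 3.0857e19     # kilometres per megaparsec
    SEC_PER_GYR = 3.156e16     # seconds per billion years
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC  # H0 converted to 1/s
    t_sec = 2.0 / (3.0 * abs(1.0 + w) * h0_per_sec * math.sqrt(1.0 - omega_m))
    return t_sec / SEC_PER_GYR

# The paper's hypothetical example: w = -1.5, H0 = 70 km/s/Mpc, Omega_m = 0.3
print(round(big_rip_time_gyr(-1.5, 70.0, 0.3)))  # ≈ 22 billion years
```

The same function shows why observations matter: moving w toward −1 (e.g. −1.075) pushes the rip well past 100 billion years, and at w = −1 exactly the denominator vanishes and no rip occurs.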
Observed universe
Evidence indicates w to be very close to −1 in our universe, which makes w the dominating term in the equation. The closer that w is to −1, the closer the denominator is to zero and the further the Big Rip is in the future. If w were exactly equal to −1, the Big Rip could not happen, regardless of the values of H0 or Ωm.
According to the latest cosmological data available, the uncertainties are still too large to discriminate among the three cases w < −1, w = −1, and w > −1.
Moreover, it is nearly impossible to measure w to be exactly −1 due to statistical fluctuations. This means that the measured value of w can be arbitrarily close to −1 but not exactly −1; hence the earliest possible date of the Big Rip can be pushed back further with more accurate measurements, but the Big Rip is very difficult to rule out completely.
| Physical sciences | Physical cosmology | Astronomy |
11250054 | https://en.wikipedia.org/wiki/Strongylocentrotus%20purpuratus | Strongylocentrotus purpuratus | Strongylocentrotus purpuratus is a species of sea urchin in the family Strongylocentrotidae commonly known as the purple sea urchin. It lives along the eastern edge of the Pacific Ocean extending from Ensenada, Mexico, to British Columbia, Canada. This sea urchin species is deep purple in color, and lives in lower inter-tidal and nearshore sub-tidal communities. Its eggs are orange when secreted in water. January, February, and March function as the typical active reproductive months for the species. Sexual maturity is reached around two years. It normally grows to a diameter of about 10 cm (4 inches) and may live as long as 70 years.
Strongylocentrotus purpuratus is used as a model organism and its genome was the first echinoderm genome to be sequenced.
Role in biomedical research
The initial discovery of three distinct eukaryotic DNA-dependent RNA polymerases was made using S. purpuratus as a model organism. While embryonic development is still a major part of the utilization of the sea urchin, studies on the urchin's position as an evolutionary marvel have become increasingly frequent. Orthologs to human diseases have led scientists to investigate potential therapeutic uses for the sequences found in Strongylocentrotus purpuratus. For instance, in 2012, scientists at the University of St Andrews began investigating the "2A" viral region in the S. purpuratus genome which may be useful for Alzheimer's disease and cancer research. The study identified a sequence that can return cells to a 'stem-cell' like state, allowing for better treatment options. The species has also been a candidate in longevity studies, particularly because of its ability to regenerate damaged or aging tissue. Another study comparing 'young' and 'old' specimens suggested that even in species with varying lifespans, the 'regenerative potential' was upheld in older specimens, as they suffered no significant disadvantages compared to younger ones.
Online model organism database
Echinobase is the model organism database for the purple sea urchin and a number of other echinoderms.
Genome
The genome of the purple sea urchin was completely sequenced and annotated in 2006 by teams of scientists from over 70 institutions including the Kerckhoff Marine Laboratory at the California Institute of Technology as well as the Human Genome Sequencing Center at the Baylor College of Medicine. A new improved version of the purple sea urchin genome, Strongylocentrotus purpuratus v5.0, is now available on Echinobase. S. purpuratus is one of several biomedical research model organisms in cell and developmental biology. The sea urchin is the first animal with a sequenced genome that (1) is a free-living, motile marine invertebrate; (2) has a bilaterally organized embryo but a radial adult body plan; (3) has the endoskeleton and water vascular system found only in echinoderms; and (4) has a nonadaptive immune system that is unique in the enormous complexity of its receptor repertoire.
The sea urchin genome is estimated to encode about 23,500 genes. S. purpuratus has 353 protein kinases, representing members of 97% of human kinase subfamilies. Many of these genes were previously thought to be vertebrate innovations or were known only from groups outside the deuterostomes. The team sequencing the species concluded that some genes are not vertebrate-specific as previously thought, while other genes were found in the urchin but not in chordates.
The genome is largely non-redundant, making it very comparable to vertebrate genomes but without their complexity. For example, 200 to 700 chemosensory genes were found that lack introns, a feature typical of vertebrates. The sea urchin genome thus provides a comparison to our own and to those of other deuterostomes, the larger group to which both echinoderms and humans belong. Sea urchins are also among the closest living relatives of chordates. Using the strictest measure, the purple sea urchin and humans share 7,700 genes. Many of these genes are involved in sensing the environment, a fact surprising for an animal lacking a head structure.
The sea urchin also has a chemical 'defensome' that reacts when stress is sensed, eliminating potentially toxic chemicals. The immune system of S. purpuratus contains innate pathogen receptors such as Toll-like receptors and genes that encode leucine-rich repeat (LRR) proteins. Genes identified for biomineralization are not counterparts of the SCPPs typical of vertebrates; instead they encode transmembrane proteins such as P16. Many orthologs exist for genes associated with human diseases, such as Reelin (mutated in Norman-Roberts lissencephaly syndrome) and many cytoskeletal proteins of the Usher syndrome network, such as usherin and VLGR1.
Increasing carbon dioxide concentrations affect the epigenome, gene expression, and phenotype of the purple sea urchin. Elevated carbon dioxide concentrations also reduce the size of its larvae, indicating that larval fitness could be negatively impacted.
Ecology
The purple sea urchin, along with sea otters and abalones, is a prominent member of the kelp forest community. The purple sea urchin also plays a key role in the disappearance of kelp forests that is currently occurring due to climate change; when urchins completely eliminate kelp from an area, an urchin barren results.
Use as food
Sea urchins like the purple sea urchin have been used for food by the indigenous peoples of California, who ate the yellow egg mass raw.
In California, the peak gonad growth season (and therefore the peak of edibility) is September–October. Early in the season the gonads are still growing and the yield is smaller. From November onwards the gonads are fully developed; however, harvesting stress can induce spawning, decreasing quality.
| Biology and health sciences | Echinoderms | Animals |
69933 | https://en.wikipedia.org/wiki/Vanguard-class%20submarine | Vanguard-class submarine | The Vanguard class is a class of nuclear-powered ballistic missile submarines (SSBNs) in service with the Royal Navy. The class was introduced in 1994 as part of the Trident nuclear programme, and comprises four vessels: , , and , built between 1986 and 1999 at Barrow-in-Furness by Vickers Shipbuilding and Engineering, now owned by BAE Systems. All four boats are based at HM Naval Base Clyde (HMS Neptune), west of Glasgow, Scotland.
Since the decommissioning of the Royal Air Force WE.177 free-fall thermonuclear weapons during March 1998, the four Vanguard submarines are the sole platforms for the United Kingdom's nuclear weapons. Each submarine is armed with up to 16 UGM-133 Trident II missiles. The class is scheduled to be replaced starting in the early 2030s with the Dreadnought-class submarine.
Development
Trident programme
Beginning in the late 1960s, the United Kingdom operated four Resolution-class submarines, each armed with sixteen US-built UGM-27 Polaris missiles. The Polaris missile was supplied to Britain following the terms of the 1963 Polaris Sales Agreement. This nuclear deterrent system was known as the UK Polaris programme. In the early 1980s the British government began studies examining options for replacing the Resolution-class submarines and their Polaris missiles, both of which would be approaching the end of their service lives within little over a decade. On 24 January 1980, the House of Commons backed government policy, by 308 votes to 52, to retain an independent nuclear deterrent. Options that were examined included:
A British-designed and -built ballistic missile: although Britain had had no capability in this field since the 1960s, this was considered "not to be impossible". However, it would be very expensive, would involve great uncertainty and would not be available within the required time period. The option was therefore considered "unattractive".
Retaining Polaris, but fitted to a new submarine class: this option would have a cheaper "initial capital cost", but would fall short of the required capability and reliability. It was also concluded that any initial capital savings would be lost beyond the 1990s, due to the high cost of sustaining a small stockpile of bespoke missiles kept only in British service.
A European solution and the US UGM-73 Poseidon were also briefly considered, but ultimately rejected, primarily on capability, cost and uncertainty grounds. The clear favourite was the UGM-96 Trident I, which as well as being a cost-effective solution – given the US would also operate the missile in vast numbers – delivered the best overall long-term capability insurance against Soviet advancements in ballistic missile defence. Subsequently, on 10 July 1980, the then Prime Minister Margaret Thatcher wrote to US President Jimmy Carter requesting the purchase of Trident I missiles on a similar basis to the 1963 Polaris Sales Agreement. However, following the acceleration of the US UGM-133 Trident II missile programme, Thatcher wrote to US President Ronald Reagan in 1982 requesting that the United Kingdom be allowed to procure the improved system instead. An agreement was made in March 1982 between the two countries, under which Britain made a 5% research and development contribution.
Design and construction
The Vanguard class was designed in the early 1980s by the Ministry of Defence, acting in one of its last Royal Navy warship design authority roles. The guidance drawings were then supplied for detailed design development to Vickers Shipbuilding and Engineering (VSEL), based at Barrow-in-Furness, now BAE Systems Maritime – Submarines. The boats were designed from the outset as nuclear-powered ballistic missile submarines, able to accommodate the UGM-133 Trident II missile. As such, the missile compartment is based on the same system used on the American Ohio class, which is also equipped with the UGM-133 Trident II. This requirement led to the Vanguard-class design being significantly larger than the previous Polaris-equipped Resolution class, and at nearly 16,000 tonnes they are the largest submarines ever built for the Royal Navy.
Due to the large size of the Vanguard-class, the Devonshire Dock Hall in Barrow-in-Furness was built between 1982 and 1986 specifically for the construction of the boats.
Beginning in 1985, both HMNB Clyde and the Royal Naval Armaments Depot Coulport at Faslane underwent extensive redevelopment in preparation for the Vanguard-class submarines and Trident II missiles. Rosyth dockyard also underwent significant redevelopment. The work included enhanced "handling, storage, armament processing, berthing, docking, engineering, training and refitting facilities" at an estimated cost of £550 million.
Prime Minister Thatcher laid the keel of the first boat, HMS Vanguard, on 3 September 1986 at the Devonshire Dock Hall. Vanguard was launched in 1992 and commissioned in 1993. The year 1992 saw a debate over whether the fourth vessel, Vengeance, should be cancelled; however, the Ministry of Defence ultimately ordered it in July 1992 and it was commissioned in 1999.
Replacement
The Vanguard class had an originally intended service life of 25 years, which would put the retirement dates for the four boats at 2018, 2020, 2021 and 2024 respectively.
On 4 December 2006, then Prime Minister Tony Blair revealed plans to spend up to £20 billion on a new generation of ballistic missile submarines to replace the Vanguard class. In order to reduce costs and show Britain's commitment to the Non-Proliferation Treaty, Blair suggested that submarine numbers could be cut from four to three, while the number of nuclear warheads would be cut by 20% to 160. On 23 September 2009, then Prime Minister Gordon Brown confirmed that this reduction to three submarines was still under consideration. In February 2011, the Defence Secretary Liam Fox stated that four submarines would be needed if the UK was to retain a credible nuclear deterrent. On 18 May 2011 the British government approved the initial assessment phase for the construction of a new class of four submarines, paving the way for the ordering of the first long-lead items and preparations for the main build to begin in the future. This new class of submarine, now known as the Dreadnought class, will retain the current Trident II missiles, and will incorporate a new 'PWR3' nuclear reactor as well as technology developed for the nuclear-powered fleet submarines of the Royal Navy.
A vote on the Trident renewal programme was held in the House of Commons on 18 July 2016, and determined that the UK should proceed with construction of the next generation of submarines. The motion passed with a significant majority of 472 MPs voting in favour and 117 against. The MoD put the cost of building, testing and commissioning the replacement vessels at £31 billion (plus a contingency fund of £10 billion) over 35 years, or about 0.2 per cent of government spending, or 6 per cent of defence spending, every year. It is expected the new fleet of submarines will come into operation starting 2028 at the earliest and certainly by the 2030s. The Dreadnought class will extend the life of the Trident programme until at least the 2060s.
Characteristics
Weapons and systems
The Vanguard-class submarines are equipped with 16 ballistic missile tubes. However, as of the 2010 Strategic Defence and Security Review, the Royal Navy loads only eight of the missile tubes with the Trident II submarine-launched ballistic missiles, each armed with up to eight nuclear warheads. In addition to the missile tubes, the submarines are fitted with four 21 inch (533 mm) torpedo tubes and carry the Spearfish heavyweight torpedo, allowing them to engage submerged or surface targets at ranges up to . Two SSE Mark 10 launchers are also fitted, allowing the boats to deploy Type 2066 and Type 2071 decoys, and a UAP Mark 3 electronic support measures (ESM) intercept system is carried.
The submarines carry the Thales Underwater Systems Type 2054 composite sonar. The Type 2054 is a multi-mode, multi-frequency system, which incorporates the 2046, 2043 and 2082 sonars. The Type 2043 is a hull-mounted active/passive search sonar, the Type 2082 a passive intercept and ranging sonar, and the Type 2046 a towed array sonar operating at very low frequency providing a passive search capability. The fleet is in the process of having the sonars refitted to include open-architecture processing using commercial off-the-shelf technology. Navigational search capability is provided by a Type 1007 I-band navigation radar. They will also be fitted with the new Common Combat System. Two periscopes are carried, a CK51 search model and a CH91 attack model. Both have TV and thermal imaging cameras in addition to conventional optics.
A specialised Submarine Command System (SMCS) was originally developed for the Vanguard boats and was later used on the .
Propulsion
A new pressurised water reactor, the Rolls-Royce PWR 2, was designed for the Vanguard class. The PWR 2 has double the service life of previous models, and it is estimated that a Vanguard-class submarine could circumnavigate the world 40 times without refuelling. Furthermore, during their long-overhaul refit periods, a 'Core H' reactor is fitted to each of the boats, ensuring that none of the submarines will require further refuelling for the rest of their service lives. The reactor drives two GEC steam turbines linked to a single-shaft pump-jet propulsor, giving the submarines a maximum submerged speed of over . Auxiliary power is provided by a pair of 6 MW steam-turbine generators supplied by W. H. Allen (later known as NEI Allen, Allen Power & Rolls-Royce), with backup power from two 905 kW Paxman diesel generators.
Nuclear warheads
British nuclear weapons are designed and developed by the UK's Atomic Weapons Establishment. The boats are capable of deploying with a maximum of 192 independently targetable warheads, or MIRVs, with immediate readiness to fire. However, as a result of a decision taken in the 1998 Strategic Defence Review, this was reduced to 48 warheads, with readiness to fire reduced 'to days rather than minutes'. Furthermore, the total number of warheads maintained by the United Kingdom was reduced to approximately 200, with a total of 58 Trident missiles. The 2010 Strategic Defence and Security Review reduced this number further, and the submarines will put to sea in future with a reduced total of 40 warheads and a reduced missile load of 8 (from a maximum possible 16). The number of operationally available nuclear warheads is to be reduced 'from fewer than 160 to no more than 120', and the total UK nuclear weapon stockpile will number no more than 180.
On 16 March 2021, Prime Minister Boris Johnson unveiled his government's 10-year plan to boost international trade and deploy soft power around the world, with the aspiration of creating a "Global Britain". Set out in a document called Global Britain in a Competitive Age, the plan raised the cap on the number of nuclear warheads aboard the Royal Navy's Trident submarines from 180 to 260. The document also vowed to maintain a fleet of four nuclear-armed submarines so that Britain would always have one at sea.
Boats of the class
In fiction
The 2021 BBC TV series Vigil is set on board a fictional Vanguard-class submarine named HMS Vigil. Further fictional boats of the class, HMS Virtue and HMS Vanquish, are mentioned, along with the real HMS Vanguard.
| Technology | Naval warfare | null |
69939 | https://en.wikipedia.org/wiki/Elliptic%20function | Elliptic function | In the mathematical field of complex analysis, elliptic functions are special kinds of meromorphic functions that satisfy two periodicity conditions. They are named elliptic functions because they come from elliptic integrals, which in turn were named elliptic because they were first encountered in calculating the arc length of an ellipse.
Important elliptic functions are Jacobi elliptic functions and the Weierstrass ℘-function.
Further development of this theory led to hyperelliptic functions and modular forms.
Definition
A meromorphic function is called an elliptic function if there are two ℝ-linearly independent complex numbers such that
and .
So elliptic functions have two periods and are therefore doubly periodic functions.
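In the standard notation, with the two periods written ω1 and ω2 (an assumption here, since the document displays the conditions only in words), the definition reads:

```latex
% Double periodicity of an elliptic function f (standard notation, assumed)
f(z + \omega_1) = f(z) \quad \text{and} \quad f(z + \omega_2) = f(z)
\qquad \text{for all } z \in \mathbb{C},
% where \omega_1, \omega_2 \in \mathbb{C} are linearly independent over \mathbb{R}.
```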
Period lattice and fundamental domain
If is an elliptic function with periods it also holds that
for every linear combination with .
The abelian group
is called the period lattice.
The parallelogram generated by and
is a fundamental domain of acting on .
Geometrically the complex plane is tiled with parallelograms. Everything that happens in one fundamental domain repeats in all the others. For that reason we can view elliptic functions as functions with the quotient group as their domain. This quotient group, called an elliptic curve, can be visualised as a parallelogram where opposite sides are identified, which topologically is a torus.
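In the usual notation (assumed here), the period lattice and a fundamental parallelogram generated by the periods ω1 and ω2 can be written as:

```latex
% Period lattice and fundamental domain (standard forms, assumed)
\Lambda = \mathbb{Z}\omega_1 + \mathbb{Z}\omega_2
        = \{\, m\omega_1 + n\omega_2 : m, n \in \mathbb{Z} \,\},
\qquad
P = \{\, \mu\omega_1 + \nu\omega_2 : 0 \le \mu, \nu < 1 \,\}.
```

The quotient group mentioned above is then ℂ/Λ.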
Liouville's theorems
The following three theorems are known as Liouville's theorems (1847).
1st theorem
A holomorphic elliptic function is constant.
This is the original form of Liouville's theorem, and it can also be derived from the modern version: a holomorphic elliptic function is bounded, since it takes on all of its values on the fundamental domain, which is compact. So it is constant by Liouville's theorem.
2nd theorem
Every elliptic function has finitely many poles in and the sum of its residues is zero.
This theorem implies that there is no elliptic function not equal to zero with exactly one pole of order one or exactly one zero of order one in the fundamental domain.
3rd theorem
A non-constant elliptic function takes on every value the same number of times in counted with multiplicity.
Weierstrass ℘-function
One of the most important elliptic functions is the Weierstrass ℘-function. For a given period lattice it is defined by
It is constructed in such a way that it has a pole of order two at every lattice point. The term is there to make the series convergent.
is an even elliptic function; that is, .
Its derivative
is an odd function, i.e.
One of the main results of the theory of elliptic functions is the following: Every elliptic function with respect to a given period lattice can be expressed as a rational function in terms of ℘ and ℘′.
The ℘-function satisfies the differential equation
where and are constants that depend on . More precisely, and , where and are so-called Eisenstein series.
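The standard formulas for ℘, its derivative, and the differential equation (restated here in the usual notation, as an aid) are:

```latex
% Weierstrass \wp-function for a period lattice \Lambda (standard formulas)
\wp(z) = \frac{1}{z^{2}}
  + \sum_{\omega \in \Lambda \setminus \{0\}}
    \left( \frac{1}{(z-\omega)^{2}} - \frac{1}{\omega^{2}} \right),
\qquad
\wp'(z) = -2 \sum_{\omega \in \Lambda} \frac{1}{(z-\omega)^{3}}.
% Differential equation, with the Eisenstein series G_4, G_6 of the lattice:
\bigl(\wp'(z)\bigr)^{2} = 4\,\wp(z)^{3} - g_2\,\wp(z) - g_3,
\qquad g_2 = 60\,G_4, \quad g_3 = 140\,G_6.
```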
In algebraic language, the field of elliptic functions is isomorphic to the field
,
where the isomorphism maps to and to .
Relation to elliptic integrals
The relation to elliptic integrals has mainly a historical background. Elliptic integrals had been studied by Legendre, whose work was taken on by Niels Henrik Abel and Carl Gustav Jacobi.
Abel discovered elliptic functions by taking the inverse function of the elliptic integral function
with .
Additionally he defined the functions
and
.
After continuation to the complex plane they turned out to be doubly periodic and are known as Abel elliptic functions.
Jacobi elliptic functions are similarly obtained as inverse functions of elliptic integrals.
Jacobi considered the integral function
and inverted it: . stands for sinus amplitudinis and is the name of the new function. He then introduced the functions cosinus amplitudinis and delta amplitudinis, which are defined as follows:
.
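Jacobi's inversion can be illustrated numerically. The sketch below is an illustration only: it assumes the standard Jacobi integral u(x; k) = ∫₀ˣ dt/√((1 − t²)(1 − k²t²)), evaluates it with a simple midpoint rule, and inverts it by bisection to recover sn; cn and dn are then obtained from the identities cn² = 1 − sn² and dn² = 1 − k²·sn².

```python
import math

def u_of_x(x, k, n=10000):
    """u(x; k) = integral from 0 to x of dt / sqrt((1 - t^2)(1 - k^2 t^2)),
    the standard Jacobi form (assumed), via the midpoint rule."""
    h = x / n
    return sum(h / math.sqrt((1 - t * t) * (1 - k * k * t * t))
               for t in ((i + 0.5) * h for i in range(n)))

def sn(u, k):
    """Sinus amplitudinis: invert u_of_x by bisection on x in [0, 1)."""
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if u_of_x(mid, k) < u:    # u_of_x is increasing in x
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k, x = 0.5, 0.6
u = u_of_x(x, k)                   # forward map: x -> u
s = sn(u, k)                       # inverse map recovers x, so s is close to 0.6
c = math.sqrt(1 - s * s)           # cosinus amplitudinis
d = math.sqrt(1 - k * k * s * s)   # delta amplitudinis
```

Because the same quadrature is used in both directions, the round trip x → u → sn(u, k) recovers x to high accuracy regardless of the quadrature error.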
Only by taking this step could Jacobi prove his general transformation formula for elliptic integrals in 1827.
History
Shortly after the development of infinitesimal calculus, the theory of elliptic functions was started by the Italian mathematician Giulio di Fagnano and the Swiss mathematician Leonhard Euler. When they tried to calculate the arc length of a lemniscate, they encountered integrals that contained the square root of polynomials of degree 3 and 4. It was clear that these so-called elliptic integrals could not be solved using elementary functions. Fagnano observed an algebraic relation between elliptic integrals, which he published in 1750. Euler immediately generalized Fagnano's results and posed his algebraic addition theorem for elliptic integrals.
Except for a comment by Landen, his ideas were not pursued until 1786, when Legendre published his paper Mémoires sur les intégrations par arcs d'ellipse. Legendre subsequently studied elliptic integrals and called them elliptic functions. He introduced a three-fold classification (three kinds), which was a crucial simplification of the rather complicated theory of the time. Other important works of Legendre are: Mémoire sur les transcendantes elliptiques (1792), Exercices de calcul intégral (1811–1817) and Traité des fonctions elliptiques (1825–1832). Legendre's work was mostly left untouched by other mathematicians until 1826.
Subsequently, Niels Henrik Abel and Carl Gustav Jacobi resumed the investigations and quickly discovered new results. At first they inverted the elliptic integral function. Following a suggestion of Jacobi in 1829, these inverse functions are now called elliptic functions. One of Jacobi's most important works is Fundamenta nova theoriae functionum ellipticarum, published in 1829. The addition theorem Euler had found was stated and proved in its general form by Abel in 1829. In those days the theory of elliptic functions and the theory of doubly periodic functions were considered to be different theories; they were brought together by Briot and Bouquet in 1856. Gauss had discovered many of the properties of elliptic functions 30 years earlier but never published anything on the subject.
| Mathematics | Functions: General | null |
69955 | https://en.wikipedia.org/wiki/Hydrazine | Hydrazine | Hydrazine is an inorganic compound with the chemical formula . It is a simple pnictogen hydride, and is a colourless flammable liquid with an ammonia-like odour. Hydrazine is highly hazardous unless handled in solution as, for example, hydrazine hydrate ().
Hydrazine is mainly used as a foaming agent in preparing polymer foams, but applications also include its uses as a precursor to pharmaceuticals and agrochemicals, as well as a long-term storable propellant for in-space spacecraft propulsion. Additionally, hydrazine is used in various rocket fuels and to prepare the gas precursors used in air bags. Hydrazine is used within both nuclear and conventional electrical power plant steam cycles as an oxygen scavenger to control concentrations of dissolved oxygen in an effort to reduce corrosion.
Approximately 120,000 tons of hydrazine hydrate (corresponding to a 64% solution of hydrazine in water by weight) were manufactured worldwide per year.
Hydrazines are a class of organic substances derived by replacing one or more hydrogen atoms in hydrazine by an organic group.
Etymology and history
The name "hydrazine" was coined by Emil Fischer in 1875; he was trying to produce organic compounds that consisted of mono-substituted hydrazine. By 1887, Theodor Curtius had produced hydrazine sulfate by treating organic diazides with dilute sulfuric acid; however, he was unable to obtain pure hydrazine, despite repeated efforts. Pure anhydrous hydrazine was first prepared by the Dutch chemist Lobry de Bruyn in 1895.
The nomenclature is a bi-valent form, with prefix hydr- used to indicate the presence of hydrogen atoms and suffix beginning with -az-, from azote, the French word for nitrogen.
Applications
Gas producers and propellants
The largest use of hydrazine is as a precursor to blowing agents. Specific compounds include azodicarbonamide and azobisisobutyronitrile, which produce of gas per gram of precursor. In a related application, sodium azide, the gas-forming agent in air bags, is produced from hydrazine by reaction with sodium nitrite.
Hydrazine is also used as a long-term storable propellant on board space vehicles, such as the Dawn mission to Ceres and Vesta, and to both reduce the concentration of dissolved oxygen in and control pH of water used in large industrial boilers. The F-16 fighter jet, Eurofighter Typhoon, Space Shuttle, and U-2 spy plane use hydrazine to fuel their Emergency Start System in the event of an engine stall.
Precursor to pesticides and pharmaceuticals
Hydrazine is a precursor to several pharmaceuticals and pesticides. Often these applications involve conversion of hydrazine to heterocyclic rings such as pyrazoles and pyridazines. Examples of commercialized bioactive hydrazine derivatives include cefazolin, rizatriptan, anastrozole, fluconazole, metazachlor, metamitron, metribuzin, paclobutrazol, diclobutrazole, propiconazole, hydrazine sulfate, diimide, triadimefon, and the diacylhydrazine insecticides.
Hydrazine compounds can be effective as active ingredients in insecticides, miticides, nematicides, fungicides, antiviral agents, attractants, herbicides, or plant growth regulators.
Small-scale, niche, and research
The Italian catalyst manufacturer Acta has proposed using hydrazine as an alternative to hydrogen in fuel cells. The chief benefit of using hydrazine is that it can produce over 200 mW/cm2 more than a similar hydrogen cell without requiring (expensive) platinum catalysts. Because the fuel is liquid at room temperature, it can be handled and stored more easily than hydrogen. By storing the hydrazine in a tank full of a double-bonded carbon-oxygen carbonyl, the fuel reacts and forms a safe solid called hydrazone. By then flushing the tank with warm water, the liquid hydrazine hydrate is released. Hydrazine has a higher electromotive force of 1.56 V compared to 1.23 V for hydrogen. Hydrazine breaks down in the cell to form nitrogen and hydrogen which bonds with oxygen, releasing water. Hydrazine was used in fuel cells manufactured by Allis-Chalmers Corp., including some that provided electric power in space satellites in the 1960s.
A mixture of 63% hydrazine, 32% hydrazine nitrate and 5% water is a standard propellant for experimental bulk-loaded liquid propellant artillery. This propellant mixture is one of the most predictable and stable, with a flat pressure profile during firing. Misfires are usually caused by inadequate ignition. The movement of the shell after a mis-ignition causes a large bubble with a larger ignition surface area, and the greater rate of gas production causes very high pressure, sometimes including catastrophic tube failures (i.e. explosions). From January to June 1991, the U.S. Army Research Laboratory conducted a review of early bulk-loaded liquid propellant gun programs for possible relevance to the electrothermal chemical propulsion program.
The United States Air Force (USAF) regularly uses H-70, a 70% hydrazine 30% water mixture, in operations employing the General Dynamics F-16 Fighting Falcon fighter aircraft and the Lockheed U-2 "Dragon Lady" reconnaissance aircraft. The single jet engine F-16 utilizes hydrazine to power its Emergency Power Unit (EPU), which provides emergency electrical and hydraulic power in the event of an engine flame out. The EPU activates automatically, or manually by pilot control, in the event of loss of hydraulic pressure or electrical power in order to provide emergency flight controls. The single jet engine U-2 utilizes hydrazine to power its Emergency Starting System (ESS), which provides a highly reliable method to restart the engine in flight in the event of a stall.
Rocket fuel
Hydrazine was first used as a component in rocket fuels during World War II. A 30% mix by weight with 57% methanol (named M-Stoff in the German Luftwaffe) and 13% water was called C-Stoff by the Germans. The mixture was used to power the Messerschmitt Me 163B rocket-powered fighter plane, in which the German high test peroxide T-Stoff was used as an oxidizer. Unmixed hydrazine was referred to as B-Stoff by the Germans, a designation also used later for the ethanol/water fuel for the V-2 missile.
Hydrazine is used as a low-power monopropellant for the maneuvering (RCS/Reaction control system) thrusters of spacecraft, and was used to power the Space Shuttle's auxiliary power units (APUs). In addition, mono-propellant hydrazine-fueled rocket engines are often used in terminal descent of spacecraft. Such engines were used on the Viking program landers in the 1970s as well as the Mars landers Phoenix (May 2008), Curiosity (August 2012), and Perseverance (February 2021).
During the Soviet space program, unsymmetrical dimethylhydrazine (also discovered by Fischer in 1875) was used instead of hydrazine. Together with nitric oxidizers it became known as "devil's venom" due to its highly dangerous nature.
In all hydrazine mono-propellant engines, the hydrazine is passed over a catalyst such as iridium metal supported by high-surface-area alumina (aluminium oxide), which causes it to decompose into ammonia (), nitrogen gas (), and hydrogen () gas according to the three following reactions:
Reaction 1: N2H4 → N2 + 2 H2
Reaction 2: 3 N2H4 → 4 NH3 + N2
Reaction 3: 4 NH3 → 2 N2 + 6 H2
The first two reactions are extremely exothermic (the catalyst chamber can reach 800 °C in a matter of milliseconds), and they produce large volumes of hot gas from a small volume of liquid, making hydrazine a fairly efficient thruster propellant with a vacuum specific impulse of about 220 seconds. Reaction 2 is the most exothermic, but produces a smaller number of molecules per mole of hydrazine than reaction 1. Reaction 3 is endothermic and reverts the effect of reaction 2 back to the same effect as reaction 1 alone (a lower temperature, but a greater number of molecules). The catalyst structure affects the proportion of the ammonia that is dissociated in reaction 3; a higher temperature is desirable for rocket thrusters, while a greater number of molecules is desirable when the reactions are intended to produce greater quantities of gas.
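As a sketch of this stoichiometry, the decomposition steps, written out with explicit coefficients (the commonly cited ones, assumed here), can be balance-checked programmatically:

```python
from collections import Counter

# Commonly cited catalytic decomposition steps (assumed coefficients):
#   Reaction 1:  N2H4   -> N2 + 2 H2
#   Reaction 2:  3 N2H4 -> 4 NH3 + N2
#   Reaction 3:  4 NH3  -> 2 N2 + 6 H2   (endothermic)
SPECIES = {"N2H4": {"N": 2, "H": 4}, "NH3": {"N": 1, "H": 3},
           "N2": {"N": 2}, "H2": {"H": 2}}

def atoms(side):
    """Total atom counts for one side of a reaction, given as {species: coefficient}."""
    total = Counter()
    for species, coeff in side.items():
        for element, count in SPECIES[species].items():
            total[element] += coeff * count
    return total

reactions = [
    ({"N2H4": 1}, {"N2": 1, "H2": 2}),   # reaction 1
    ({"N2H4": 3}, {"NH3": 4, "N2": 1}),  # reaction 2
    ({"NH3": 4}, {"N2": 2, "H2": 6}),    # reaction 3
]

for left, right in reactions:
    assert atoms(left) == atoms(right)   # every reaction is mass-balanced

# Gas molecules produced per mole of hydrazine consumed:
mol_per_n2h4_r1 = (1 + 2) / 1   # 3.0
mol_per_n2h4_r2 = (4 + 1) / 3   # about 1.67: fewer than reaction 1, as noted above
```

Note that running reaction 2 and then dissociating all the ammonia via reaction 3 gives 3 N2H4 → 3 N2 + 6 H2, i.e. the same overall products as reaction 1, which matches the "reverting" behaviour described in the text.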
Since hydrazine is a solid below 2 °C, it is not suitable as a general purpose rocket propellant for military applications. Other variants of hydrazine that are used as rocket fuel are monomethylhydrazine, , also known as MMH (melting point −52 °C), and unsymmetrical dimethylhydrazine, , also known as UDMH (melting point −57 °C). These derivatives are used in two-component rocket fuels, often together with dinitrogen tetroxide, . A 50:50 mixture by weight of hydrazine and UDMH was used in the engine of the service propulsion system of the Apollo command and service module, both the ascent and descent engines of the Apollo Lunar Module and Titan II ICBMs and is known as Aerozine 50. These reactions are extremely exothermic, and the burning is also hypergolic (it starts burning without any external ignition).
There are ongoing efforts in the aerospace industry to find a replacement for hydrazine, given its potential ban across the European Union. Promising alternatives include nitrous oxide-based propellant combinations, with development being led by commercial companies Dawn Aerospace, Impulse Space, and Launcher. The first nitrous oxide-based system ever flown in space was by D-Orbit onboard their ION Satellite Carrier in 2021, using six Dawn Aerospace B20 thrusters.
Occupational hazards
Health effects
Potential routes of hydrazine exposure include dermal, ocular, inhalation and ingestion.
Hydrazine exposure can cause skin irritation/contact dermatitis and burning, irritation to the eyes/nose/throat, nausea/vomiting, shortness of breath, pulmonary edema, headache, dizziness, central nervous system depression, lethargy, temporary blindness, seizures and coma. Exposure can also cause organ damage to the liver, kidneys and central nervous system. Hydrazine is documented as a strong skin sensitizer with potential for cross-sensitization to hydrazine derivatives following initial exposure. In addition to occupational uses reviewed above, exposure to hydrazine is also possible in small amounts from tobacco smoke.
The official U.S. guidance on hydrazine as a carcinogen is mixed but generally there is recognition of potential cancer-causing effects. The National Institute for Occupational Safety and Health (NIOSH) lists it as a "potential occupational carcinogen". The National Toxicology Program (NTP) finds it is "reasonably anticipated to be a human carcinogen". The American Conference of Governmental Industrial Hygienists (ACGIH) grades hydrazine as "A3—confirmed animal carcinogen with unknown relevance to humans". The U.S. Environmental Protection Agency (EPA) grades it as "B2—a probable human carcinogen based on animal study evidence".
The International Agency for Research on Cancer (IARC) rates hydrazine as "2A—probably carcinogenic to humans" with a positive association observed between hydrazine exposure and lung cancer. Based on cohort and cross-sectional studies of occupational hydrazine exposure, a committee from the National Academies of Sciences, Engineering and Medicine concluded that there is suggestive evidence of an association between hydrazine exposure and lung cancer, with insufficient evidence of association with cancer at other sites. The European Commission's Scientific Committee on Occupational Exposure Limits (SCOEL) places hydrazine in carcinogen "group B—a genotoxic carcinogen". The genotoxic mechanism the committee cited references hydrazine's reaction with endogenous formaldehyde and formation of a DNA-methylating agent.
In the event of a hydrazine exposure-related emergency, NIOSH recommends removing contaminated clothing immediately, washing skin with soap and water, and for eye exposure removing contact lenses and flushing eyes with water for at least 15 minutes. NIOSH also recommends anyone with potential hydrazine exposure to seek medical attention as soon as possible. There are no specific post-exposure laboratory or medical imaging recommendations, and the medical work-up may depend on the type and severity of symptoms. The World Health Organization (WHO) recommends potential exposures be treated symptomatically with special attention given to potential lung and liver damage. Past cases of hydrazine exposure have documented success with pyridoxine (vitamin B6) treatment.
Occupational exposure limits
NIOSH Recommended Exposure Limit (REL): 0.03 ppm (0.04 mg/m3) 2-hour ceiling
OSHA Permissible Exposure Limit (PEL): 1 ppm (1.3 mg/m3) 8-hour Time Weighted Average
ACGIH Threshold Limit Value (TLV): 0.01 ppm (0.013 mg/m3) 8-hour Time Weighted Average
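The ppm values above relate to the mg/m3 values through the standard conversion mg/m3 = ppm × (molar mass / molar volume), with a molar volume of 24.45 L/mol at 25 °C and 1 atm. A minimal sketch of this conversion (the constant and function names are illustrative, not taken from any regulatory source):

```python
# Converting hydrazine exposure limits from ppm to mg/m3.
# mg/m3 = ppm * molar_mass / molar_volume (25 degrees C, 1 atm).
MOLAR_MASS_HYDRAZINE = 32.05   # g/mol for N2H4
MOLAR_VOLUME = 24.45           # L/mol at 25 C and 1 atm

def ppm_to_mg_m3(ppm, molar_mass=MOLAR_MASS_HYDRAZINE):
    return ppm * molar_mass / MOLAR_VOLUME

print(round(ppm_to_mg_m3(1.0), 1))    # OSHA PEL: 1 ppm -> 1.3 mg/m3
print(round(ppm_to_mg_m3(0.03), 2))   # NIOSH REL: 0.03 ppm -> 0.04 mg/m3
print(round(ppm_to_mg_m3(0.01), 3))   # ACGIH TLV: 0.01 ppm -> 0.013 mg/m3
```

The three quoted limits round-trip consistently under this formula.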
The odor threshold for hydrazine is 3.7 ppm; thus, if a worker is able to smell an ammonia-like odor, they are likely over the exposure limit. However, this odor threshold varies greatly and should not be used to determine potentially hazardous exposures.
For aerospace personnel, the United States Air Force uses an emergency exposure guideline, developed by the National Academy of Sciences Committee on Toxicology, which is utilized for non-routine exposures of the general public and is called the Short-Term Public Emergency Exposure Guideline (SPEGL). The SPEGL, which does not apply to occupational exposures, is defined as the acceptable peak concentration for unpredicted, single, short-term emergency exposures of the general public and represents rare exposures in a worker's lifetime. For hydrazine the 1-hour SPEGL is 2 ppm, with a 24-hour SPEGL of 0.08 ppm.
Handling and medical surveillance
A complete surveillance programme for hydrazine should include systematic analysis of biologic monitoring, medical screening and morbidity/mortality information. The CDC recommends surveillance summaries and education be provided for supervisors and workers. Pre-placement and periodic medical screening should be conducted with specific focus on potential effects of hydrazine upon functioning of the eyes, skin, liver, kidneys, hematopoietic, nervous and respiratory systems.
Common controls used for hydrazine include process enclosure, local exhaust ventilation and personal protective equipment (PPE). Guidelines for hydrazine PPE include non-permeable gloves and clothing, indirect-vent splash resistant goggles, face shield and in some cases a respirator. The use of respirators for the handling of hydrazine should be the last resort as a method of controlling worker exposure. In cases where respirators are needed, proper respirator selection and a complete respiratory protection program consistent with OSHA guidelines should be implemented.
For USAF personnel, Air Force Occupational Safety and Health (AFOSH) Standard 48-8, Attachment 8 reviews the considerations for occupational exposure to hydrazine in missile, aircraft and spacecraft systems. Specific guidance for exposure response includes mandatory emergency shower and eyewash stations and a process for decontaminating protective clothing. The guidance also assigns responsibilities and requirements for proper PPE, employee training, medical surveillance and emergency response. USAF bases requiring the use of hydrazine generally have specific base regulations governing local requirements for safe hydrazine use and emergency response.
Molecular structure
Hydrazine, , contains two amine groups connected by a single bond between the two nitrogen atoms. Each subunit is pyramidal. The structure of the free molecules was determined by gas electron diffraction and microwave spectroscopy. The N–N single bond length is 1.447(2) Å (144.7(2) pm), the N-H distance is 1.015(2) Å, the N-N-H angles are 106(2)° and 112(2)°, the H-N-H angle is 107°. The molecule adopts a gauche conformation with a torsion angle of 91(2)° (dihedral angle between the planes containing the N-N bond and the bisectors of the H-N-H angles). The rotational barrier is twice that of ethane. These structural properties resemble those of gaseous hydrogen peroxide, which adopts a "skewed" anticlinal conformation, and also experiences a strong rotational barrier.
The structure of solid hydrazine was determined by X-ray diffraction. In this phase the N-N bond has a length of 1.46 Å and the nearest non-bonded distances are 3.19, 3.25 and 3.30 Å.
Synthesis and production
Diverse synthetic pathways for hydrazine production have been developed. The key step is the creation of the N–N single bond. The many routes can be divided into those that use chlorine oxidants (and generate salt) and those that do not.
Oxidation of ammonia via oxaziridines from peroxide
Hydrazine can be synthesized from ammonia and hydrogen peroxide with a ketone catalyst, in a procedure called the peroxide process (sometimes called the Pechiney–Ugine–Kuhlmann process, the Atofina–PCUK cycle, or the ketazine process). The net reaction is:
2 NH3 + H2O2 → N2H4 + 2 H2O
In this route, the ketone and ammonia first condense to give the imine, which is oxidised by hydrogen peroxide to the oxaziridine, a three-membered ring containing carbon, oxygen, and nitrogen. Next, the oxaziridine gives the hydrazone by treatment with ammonia, which process creates the nitrogen-nitrogen single bond. This hydrazone condenses with one more equivalent of ketone.
The resulting azine is hydrolyzed to give hydrazine and regenerate the ketone, methyl ethyl ketone:
Me(Et)C=NN=C(Et)Me + 2 H2O → 2 Me(Et)C=O + N2H4
Unlike most other processes, this approach does not produce a salt as a by-product.
Chlorine-based oxidations
The Olin Raschig process, first announced in 1907, produces hydrazine from sodium hypochlorite (the active ingredient in many bleaches) and ammonia without the use of a ketone catalyst. This method relies on the reaction of monochloramine with ammonia to create the N–N single bond as well as a hydrogen chloride byproduct:
NH2Cl + NH3 → N2H4 + HCl
Related to the Raschig process, urea can be oxidized instead of ammonia. Again sodium hypochlorite serves as the oxidant. The net reaction is:
OC(NH2)2 + NaOCl + 2 NaOH → N2H4 + H2O + NaCl + Na2CO3
The process generates significant by-products and is mainly practised in Asia.
The Bayer Ketazine Process is the predecessor to the peroxide process. It employs sodium hypochlorite as oxidant instead of hydrogen peroxide. Like all hypochlorite-based routes, this method produces an equivalent of salt for each equivalent of hydrazine.
Reactions
Acid-base behavior
Hydrazine forms a monohydrate that is denser (1.032 g/cm3) than the anhydrous form (1.021 g/cm3). Hydrazine has basic (alkaline) chemical properties comparable to those of ammonia:
N2H4 + H2O ⇌ [N2H5]+ + OH−, Kb = 1.3 × 10−6, pKb = 5.9
(for ammonia Kb = 1.78 × 10−5)
It is difficult to diprotonate:
[N2H5]+ + H2O ⇌ [N2H6]2+ + OH−, Kb = 8.4 × 10−16, pKb = 15
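The pKb values quoted above follow directly from the definition pKb = −log10 Kb. A quick illustrative check (the function name is an assumption for this sketch, not a library API):

```python
# Deriving pKb from the basicity constant Kb.
import math

def pKb(Kb):
    return -math.log10(Kb)

print(round(pKb(1.3e-6), 1))    # hydrazine, first protonation -> 5.9
print(round(pKb(8.4e-16), 1))   # second protonation -> ~15
print(round(pKb(1.78e-5), 2))   # ammonia, for comparison -> 4.75
```

As the larger pKb indicates, hydrazine is roughly an order of magnitude weaker a base than ammonia, and the second protonation is weaker still by about ten orders of magnitude.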
Exposure to extremely strong bases or alkali metals generates deprotonated hydrazide salts. Most explode on exposure to air or moisture.
Redox reactions
Ideally, the combustion of hydrazine in oxygen produces nitrogen and water:
N2H4 + O2 → N2 + 2 H2O
An excess of oxygen gives oxides of nitrogen, including nitrogen monoxide and nitrogen dioxide.
The heat of combustion of hydrazine in oxygen (air) is 19.41 MJ/kg (8345 BTU/lb).
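As a consistency check on the two quoted figures, 1 MJ/kg corresponds to about 430 BTU/lb, taking 1 BTU = 1055.06 J and 1 lb = 0.45359237 kg. A minimal sketch with assumed constant names:

```python
# Converting specific heat of combustion from MJ/kg to BTU/lb.
BTU_IN_J = 1055.06        # one British thermal unit in joules
LB_IN_KG = 0.45359237     # one pound in kilograms

def mj_per_kg_to_btu_per_lb(x):
    # MJ/kg -> J/kg -> BTU/kg -> BTU/lb
    return x * 1e6 / BTU_IN_J * LB_IN_KG

print(round(mj_per_kg_to_btu_per_lb(19.41)))  # -> 8345, matching the quoted value
```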
Hydrazine is a convenient reductant because the by-products are typically nitrogen gas and water. This property makes it useful as an antioxidant, an oxygen scavenger, and a corrosion inhibitor in water boilers and heating systems. It also directly reduces salts of less active metals (e.g., bismuth, arsenic, copper, mercury, silver, lead, platinum, and palladium) to the element. That property has commercial application in electroless nickel plating and plutonium extraction from nuclear reactor waste. Some colour photographic processes also use a weak solution of hydrazine as a stabilising wash, as it scavenges dye coupler and unreacted silver halides. Hydrazine is the most common and effective reducing agent used to convert graphene oxide (GO) to reduced graphene oxide (rGO) via hydrothermal treatment.
Hydrazinium salts
Hydrazine can be protonated to form various solid salts of the hydrazinium cation, [N2H5]+, by treatment with mineral acids. A common salt is hydrazinium hydrogensulfate, [N2H5][HSO4]. Hydrazinium hydrogensulfate was investigated as a treatment of cancer-induced cachexia, but proved ineffective.
Double protonation gives the hydrazinium dication or hydrazinediium, [N2H6]2+, of which various salts are known.
Organic chemistry
Hydrazines are part of many organic syntheses, often those of practical significance in pharmaceuticals (see applications section), as well as in textile dyes and in photography.
Hydrazine is used in the Wolff–Kishner reduction, a reaction that transforms the carbonyl group of a ketone into a methylene bridge (or an aldehyde into a methyl group) via a hydrazone intermediate. The production of the highly stable dinitrogen from the hydrazine derivative helps to drive the reaction.
Being bifunctional, with two amines, hydrazine is a key building block for the preparation of many heterocyclic compounds via condensation with a range of difunctional electrophiles. With 2,4-pentanedione, it condenses to give the 3,5-dimethylpyrazole. In the Einhorn-Brunner reaction hydrazines react with imides to give triazoles.
Being a good nucleophile, hydrazine can attack sulfonyl halides and acyl halides. The resulting tosylhydrazide also forms hydrazones upon treatment with carbonyls.
Hydrazine is used to cleave N-alkylated phthalimide derivatives. This scission reaction allows phthalimide anion to be used as amine precursor in the Gabriel synthesis.
Hydrazone formation
Illustrative of the condensation of hydrazine with a simple carbonyl is its reaction with acetone to give the acetone azine. The latter reacts further with hydrazine to yield acetone hydrazone:
(CH3)2C=N−N=C(CH3)2 + N2H4 → 2 (CH3)2C=N−NH2
The propanone azine is an intermediate in the Atofina–PCUK process. Direct alkylation of hydrazine with alkyl halides in the presence of base yields alkyl-substituted hydrazines, but the reaction is typically inefficient due to poor control over the degree of substitution (as with ordinary amines). The reduction of hydrazones to hydrazines presents a clean way to produce 1,1-dialkylated hydrazines.
In a related reaction, 2-cyanopyridines react with hydrazine to form amide hydrazides, which can be converted using 1,2-diketones into triazines.
Biochemistry
Hydrazine is a key intermediate in the anaerobic oxidation of ammonia (anammox) process. It is produced by some yeasts and by the open-ocean anammox bacterium Brocadia anammoxidans.
The false morel produces the poison gyromitrin which is an organic derivative of hydrazine that is converted to monomethylhydrazine by metabolic processes. Even the most popular edible "button" mushroom Agaricus bisporus produces organic hydrazine derivatives, including agaritine, a hydrazine derivative of an amino acid, and gyromitrin.
In popular culture
In the novel The Martian (also adapted into a feature film), the titular character uses an iridium catalyst to liberate hydrogen gas from surplus hydrazine fuel, which he then burns to generate water for survival.
Hornbill

Hornbills are birds found in tropical and subtropical Africa, Asia and Melanesia of the family Bucerotidae. They are characterized by a long, down-curved bill which is frequently brightly coloured and sometimes has a horny casque on the upper mandible. Hornbills have a two-lobed kidney. They are the only birds in which the first and second neck vertebrae (the atlas and axis respectively) are fused together; this probably provides a more stable platform for carrying the bill. The family is omnivorous, feeding on fruit and small animals. They are monogamous breeders nesting in natural cavities in trees and sometimes cliffs. A number of mainly insular species of hornbill with small ranges are threatened with extinction, mainly in Southeast Asia.
In the Neotropical realm, toucans occupy the hornbills' ecological niche, an example of convergent evolution. Despite their close appearances, the two groups are not very closely related, with toucans being allied with the woodpeckers, honeyguides and several families of barbet, while hornbills (and their close relatives the ground hornbills) are allied with the hoopoes and wood-hoopoes.
Description
Hornbills show considerable variation in size and colours. The smallest species is the black dwarf hornbill (Tockus hartlaubi). The largest and most massive species appears to be the southern ground hornbill. Other species rival the southern ground species in length, including the Abyssinian ground hornbill (Bucorvus abyssinicus), the great hornbill (Buceros bicornis) and, probably the longest of all thanks in part to its extended tail feathers, the helmeted hornbill (Rhinoplax vigil). Males are always bigger than the females, though the extent to which this is true varies according to species. The extent of sexual dimorphism also varies with body parts. For example, the difference in body mass between males and females is 1–17%, but the variation is 8–30% for bill length and 1–21% in wing length.
The most distinctive feature of the hornbills is the heavy bill, supported by powerful neck muscles as well as by the fused vertebrae. The large bill assists in fighting, preening, constructing the nest, and catching prey. A feature unique to the hornbills is the casque, a hollow structure that runs along the upper mandible. In some species it is barely perceptible and appears to serve no function beyond reinforcing the bill. In other species it is quite large, is reinforced with bone, and has openings between the hollow centre, allowing it to serve as a resonator for calls. In the helmeted hornbill the casque is not hollow but is filled with hornbill ivory and is used as a battering ram in dramatic aerial jousts. Aerial casque-butting has also been reported in the great hornbill.
The plumage of hornbills is typically black, grey, white, or brown, and is frequently offset by bright colours on the bill, or by patches of bare coloured skin on the face or wattles. Some species exhibit sexual dichromatism, where the colouration of soft parts varies by sex.
Hornbills possess binocular vision, although unlike most birds with this type of vision, the bill intrudes on their visual field. This allows them to see their own bill tip and aids in precision handling of food objects with their bill. The eyes are also protected by large eyelashes which act as a sunshade.
Distribution and habitat
The Bucerotidae include about 55 living species, though a number of cryptic species may yet be split, as has been suggested for the red-billed hornbill. Their distribution includes Sub-Saharan Africa and the Indian subcontinent to the Philippines and the Solomon Islands, but no genus is found in both Africa and Asia. Most are arboreal birds, but the large ground hornbills (Bucorvus), as their name implies, are terrestrial birds of open savanna. Of the 24 species found in Africa, 13 are birds of the more open woodlands and savanna, and some occur even in highly arid environments; the remaining species are found in dense forests. This contrasts with Asia, where a single species occurs in open savanna and the remainder are forest species. The Indian subcontinent has 10 species of hornbills, of which 9 are found in India and adjoining countries, while the Sri Lanka grey hornbill is restricted to the island. The most common widespread species in the Indian subcontinent is the Indian grey hornbill.
According to the International Union for Conservation of Nature (IUCN), Indonesia has 13 hornbill species: 9 of them exist in Sumatra, and the rest exist in Sumba, Sulawesi, Papua and Kalimantan. Kalimantan has the same hornbill species as Sumatra, except that the great hornbill is not found there. Meanwhile, the neighboring archipelago of the Philippines has 11 hornbill species, all of which are endemic to certain small islands of the country, making them one of the most endangered hornbills in the world. In the Neogene (at least in the late Miocene), hornbills inhabited North Africa and South Europe. Their remains have been found in Morocco and Bulgaria. The oldest known hornbill, similar to modern Tockus, is from the Early Miocene of Uganda, around 19 million years ago.
Behaviour and ecology
Hornbills are diurnal, generally travelling in pairs or small family groups. Larger flocks sometimes form outside the breeding season. The largest assemblies of hornbills form at some roosting sites, where as many as 2400 individual birds may be found.
Diet
Hornbills are omnivorous birds, eating fruit, insects and small animals. They cannot swallow food caught at the tip of the beak as their tongues are too short to manipulate it, so they toss it back to the throat with a jerk of the head. While both open country and forest species are omnivorous, species that specialise in feeding on fruit are generally found in forests, while the more carnivorous species are found in open country. Forest-dwelling species of hornbills are considered to be important seed dispersers. Some hornbill species (e.g., Malabar pied-hornbill) even have a great preference for the fruits of the strychnine tree (Strychnos nux-vomica), which contain the potent poison strychnine.
Some hornbills defend a fixed territory. Territoriality is related to diet; fruit sources are often patchily distributed and require long-distance travel to find. Thus, species that specialise in fruit are less territorial.
Breeding
Hornbills generally form monogamous pairs, although some species engage in cooperative breeding. The female lays up to six white eggs in existing holes or crevices, either in trees or rocks. The cavities are usually natural, but some species may nest in the abandoned nests of woodpeckers and barbets. Nesting sites may be used in consecutive breeding seasons by the same pair. Before incubation, the females of all Bucerotinae—sometimes assisted by the male—begin to close the entrance to the nest cavity with a wall made of mud, droppings and fruit pulp. When the female is ready to lay her eggs, the entrance is just large enough for her to enter the nest, and after she has done so, the remaining opening is also all but sealed shut. There is only one narrow aperture, big enough for the male to transfer food to the mother and eventually the chicks. The function of this behaviour is apparently related to protecting the nesting site from rival hornbills. The sealing can be done in just a few hours; at most it takes a few days. After the nest is sealed, the hornbill takes another five days to lay the first egg. Clutch size varies from one or two eggs in the larger species to up to eight eggs for the smaller species. During the incubation period the female undergoes a complete and simultaneous moult. It has been suggested that the darkness of the cavity triggers a hormone involved in moulting. Non-breeding females and males go through a sequential moult. When the chicks and the female are too big to fit in the nest, the mother breaks out the nest and both parents feed the chicks. In some species the mother rebuilds the wall, whereas in others the chicks rebuild the wall unaided. The ground hornbills do not adopt this behaviour, but are conventional cavity-nesters.
Associations with other species
A number of hornbills have associations with other animal species. For example, some species of hornbills in Africa have a mutualistic relationship with dwarf mongooses, foraging together and warning each other of nearby birds of prey and other predators. Other relationships are commensal, for example following monkeys or other animals and eating the insects flushed up by them.
Taxonomy
The family Bucerotidae was introduced (as Buceronia) by the French polymath Constantine Samuel Rafinesque in 1815; it comes from the genus name Buceros given by Carl Linnaeus in 1758 from the Greek word bōukeros which means "cow horn".
There are two subfamilies: the Bucorvinae contain the two ground hornbills in a single genus, and the Bucerotinae contain all other taxa. Traditionally they are included in the order Coraciiformes (which includes also kingfishers, rollers, hoopoes and bee-eaters). In the Sibley-Ahlquist taxonomy, however, hornbills are separated from the Coraciiformes into an order of their own, Bucerotiformes, with the subfamilies elevated to family level. Given that they are almost as distant from the rollers, kingfishers and allies as are the trogons, the arrangement chosen is more a matter of personal taste than any well-established taxonomic practice. All that can be said with reasonable certainty is that placing the hornbills outside the Coraciiformes and the trogons inside would be incorrect.
Genetic data suggests that ground hornbills and Bycanistes form a clade outside the rest of the hornbill lineage. They are thought to represent an early African lineage, while the rest of Bucerotiformes evolved in Asia. However, another study claims that the ground hornbills diverged first, followed by Tockus. Within Tockus, two clades have been identified based on genetics and vocal types—'whistlers' and 'cluckers'. The 'cluckers' have been placed in a separate genus, Lophoceros.
Bycanistes belongs to a clade of mostly African species that also includes Ceratogymna and Tropicranus. Another member of this clade is the black dwarf hornbill. The black dwarf hornbill is typically classified in the genus Tockus, but in this study it is a sister species to the white-crested hornbill. If these two species are classified as congeneric, Tropicranus becomes a junior synonym of Horizocerus, as that was one of the old names used for the black dwarf hornbill. This clade also includes one Southeast Asian species, the white-crowned hornbill.
As for the other Asian hornbill species, Buceros and Rhinoplax are each other's closest relatives, Anorrhinus is part of a clade that has Ocyceros and Anthracoceros as sister taxa, and Aceros, Rhyticeros, and Penelopides form another clade. However, according to this study, Aceros is polyphyletic; the rufous-headed hornbill, writhed hornbill, and wrinkled hornbill form a clade with the Sulawesi hornbill, and are in turn more closely related to Penelopides. These four species have been classified in a separate genus, Rhabdotorrhinus. Similarly, the knobbed hornbill is more closely related to Rhyticeros, leaving the rufous-necked hornbill the only member of the genus Aceros.
The following cladogram showing the relationships between the genera is based on a molecular phylogenetic study by Juan-Carlos Gonzalez and collaborators that was published in 2013. The number of species in each genus is taken from the list of world birds maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithologists' Union.
Fossil record
Bucorvus brailloni – Late Miocene (Morocco)
Euroceros bulgaricus – Late Miocene (Bulgaria)
Tockus sp. – Early Miocene (Uganda)
Some scientists believe the hornbill evolutionary tree spread from the Indian microcontinent after the breakup of Gondwana, before India merged with Asia.
Cultural significance
Most species' casques are very light, containing much airspace. However, the helmeted hornbill has a solid casque made of a material called hornbill ivory, which is greatly valued as a carving material in China and Japan. It was used as a medium for the art of netsuke, and has also been used for hunting purposes in places such as India. The Iban people of Borneo regard the rhinoceros hornbill (known as Kenyalang) as the king of the worldly birds, which acts as the intermediary between man and God.
The Wreathed hornbill (Undan) is believed by the Iban people to be the guide of dead souls to the lower world.
Status and conservation
None of the African species of hornbills are seriously threatened, but many Asian hornbills are threatened by hunting and habitat loss, as they tend to require primary forest. Among these threatened species, only the plain-pouched hornbill and rufous-necked hornbill are found on the Asian mainland; all others are insular in their distribution. In the Philippines alone, the Palawan hornbill is vulnerable, and the Mindoro hornbill is endangered. The Visayan hornbill is classified as endangered by the IUCN, but considered critically endangered by the National List of Threatened Terrestrial Fauna of the Philippines. A subspecies of the Visayan hornbill, the Ticao hornbill, was declared extinct in 2013. Two of the three critically endangered hornbills, the rufous-headed hornbill and the Sulu hornbill, are also restricted to the Philippines. The latter species is one of the world's rarest birds, with only 20 breeding pairs or 40 mature individuals, and faces imminent extinction. The other critically endangered species, the helmeted hornbill, is threatened by uncontrolled hunting and the trade in hornbill ivory.
In popular culture
A hornbill named Zazu is the king's adviser and one of the characters in The Lion King franchise, voiced by Rowan Atkinson in the original 1994 version and by John Oliver in the 2019 remake.
The hornbill was used as the official mascot of one of Malaysia's political parties, the Democratic Action Party.
The Rhinoceros hornbill is the official state animal of Sarawak, a Malaysian state located in Borneo.
The great hornbill, a member of the hornbill family, is the official state bird of Kerala, an Indian state. The species is rated vulnerable.
Maine Coon

The Maine Coon is a large domesticated cat breed. One of the oldest natural breeds in North America, the breed originated in the U.S. state of Maine, where it is the official state cat.
The Maine Coon is a large and social cat, commonly referred to as "the gentle giant." The Maine Coon is predominantly known for its size and dense coat of fur which helps it survive in the harsh climate of Maine. The Maine Coon is often cited as having "dog-like" characteristics.
History
The Maine Coon is one of the largest domesticated cats. It has a distinctive physical appearance and valuable hunting skills. The breed was popular in cat shows in the late 19th century, but its existence became threatened when long-haired breeds from overseas were introduced in the early 20th century. The Maine Coon has since made a comeback: in 2023 it overtook the Exotic to become the second most popular pedigree cat breed in the world.
Origin
Myths
Maine Coon cats originated in Maine. However, their lineage is surrounded by mystery, folk tales, and myths. One myth claims the Maine Coon cat is a hybrid with another animal species, such as the raccoon or bobcat. The second myth states the cats are descendants of Viking ship's cats, known today as the Norwegian Forest cats. A third story involves Marie Antoinette, the Queen of France who was executed in 1793. The story goes that before her death, Antoinette attempted to escape France with the help of Captain Samuel Clough. She loaded Clough's ship with her most prized possessions, including six of her favorite Turkish Angora or possibly Siberian cats. Although she did not make it to the United States, all of her pets managed to reach the shore of Wiscasset, Maine, safely, where they bred with other short-haired breeds and developed into the modern breed of the Maine Coon.
Science
These myths and theories have long speculated that the long-haired Maine Coon must be related to other long-haired breeds, owing to their similar phenotypes; for the Maine Coon in particular, the claim is that it descends from the Norwegian or Siberian Forest cat, brought to New England by settlers or Vikings. Phylogenetic studies showed that the Maine Coon belongs to the Western European monophyletic cat branch, but forms the closest relationship with the random-bred cat population in the Northeastern US (New York region). This Western European branch contains the Norwegian and Siberian Forest cat, but they fall under a different sub-branch.
Maine Coons are descendants of cats brought to New England by Puritan settlers in the 1600s–1700s, and of the European cats they are genetically closest to those found in the United Kingdom. It is not relatedness that makes them look similar to the Norwegian and Siberian Forest cats, but convergent evolution: these breeds all formed in harsh climates, which select for similar qualities. Thick, long coats, toe and ear tufts, big bodies, and snowshoe-like big feet are useful traits in all the harsh climates where these breeds originated.
Cat shows and popularity
The first mention of Maine Coon cats in a literary work was in 1861. In Frances Simpson's The Book of the Cat (1903), F.R. Pierce, who owned several Maine Coons, wrote a chapter about the breed. During the late 1860s, farmers located in Maine told stories about their cats and held the "Maine State Champion Coon Cat" contest at the local Skowhegan Fair.
In 1895, a dozen Maine Coons were entered into a show in Boston. On 8 May 1895, the first North American cat show was hosted at Madison Square Garden in New York City. A female Maine Coon brown tabby, named Cosey, was entered into the show. Owned by Mrs. Fred Brown, Cosey won the silver collar and medal and was named Best in Show. The silver collar was purchased by the Cat Fanciers' Association (CFA) Foundation with the help of a donation from the National Capital Cat Show. The collar is housed at the CFA Central Office in the Jean Baker Rose Memorial Library.
In the early 20th century, the Maine Coon's popularity began to decline with the introduction of other long-haired breeds, such as the Persian, which originated in the Middle East. The last recorded win by a Maine Coon in a national cat show for over 40 years was in 1911 at a show in Portland, Oregon. The breed was rarely seen after that. The decline was so severe that the breed was declared extinct in the 1950s, although this declaration was considered to be exaggerated and reported prematurely at the time. The Central Maine Cat Club (CMCC) was created in the early 1950s by Ethylin Whittemore, Alta Smith, and Ruby Dyer in an attempt to increase the popularity of the Maine Coon. For 11 years, the CMCC held cat shows and hosted exhibitions of photographs of the breed and is noted for creating the first written breed standards for the Maine Coon.
The Maine Coon was denied provisional breed status—one of the three steps required for a breed not yet recognized by the CFA to be able to compete in championship competitions—by the CFA three times, which led to the formation of the Maine Coon Cat Club in 1973. The breed was accepted by the CFA under provisional status in May 1975, and was approved for championship status in May 1976. The next couple of decades saw a rise in the popularity of the Maine Coon, with championship victories and an increase in national rankings. In 1985, the state of Maine announced that the breed would be named the official state cat.
Description
Fur coat
The Maine Coon is a long- or medium-haired cat. The coat is soft and silky, although texture may vary with coat color. The length is shorter on the head and shoulders and longer on the stomach and flanks, with some cats having a leonine ruff around their neck. Minimal grooming is required for the breed compared to other long-haired breeds, as their double coat is mostly self-maintaining owing to a light-density undercoat. The coat is subject to seasonal variation, with the fur being thicker in the winter and thinner during the summer.
Maine Coons have several physical adaptations for survival in harsh winter climates. Their dense water-resistant fur is longer and shaggier on their underside and rear for extra protection when they are walking or sitting on top of wet surfaces of snow or ice. Their long and bushy raccoon-like tail is resistant to sinking in snow, and can be curled around their face and shoulders for warmth and protection from wind and blowing snow. It can even be curled around their backside like an insulated seat cushion when sitting down on a frozen surface.
Large paws facilitate walking on snow and are often compared to snowshoes. Long tufts of fur growing between their toes help keep the toes warm and further aid walking on snow by giving the paws additional structure without significant extra weight. Heavily furred ears with extra long tufts of fur growing from inside can keep warm more easily.
Coat colors
Maine Coons can have any colors that other cats have. Colors indicating crossbreeding, such as chocolate, lavender, the Siamese pointed patterns or the "ticked" patterns, are not accepted by some breed standards. This is not universal; the ticked pattern, for example, is accepted by TICA and CFA. The most common pattern seen in the breed is brown tabby. All eye colors are accepted under breed standards, with the exception of blue eyes or odd eyes, i.e. heterochromia iridum (two eyes of different colors), in cats possessing coat colors other than white.
Size
The Maine Coon was considered the largest breed of domestic cat until the introduction of the Savannah cat in the mid-1980s, yet it is still the largest non-hybrid breed. On average, males weigh from , with females weighing from . The height of adults can vary between and they can reach a length of up to , including the tail, which can reach a length of and is long, tapering, and heavily furred, almost resembling a raccoon's tail. The body is solid and muscular, which is necessary for supporting their weight, and the chest is broad. Maine Coons possess a rectangular body shape and are slow to physically mature; their full size is normally not reached until they are three to five years old, while other cats take about one year.
In 2010, the Guinness World Records accepted a male purebred Maine Coon named "Stewie" as the "Longest Cat", measuring from the tip of his nose to the tip of his tail. Stewie died on February 4, 2013, from cancer at his home in Reno, Nevada, at age 8. As of 2015 the living record-holder for "Longest Cat" is "Ludo", measuring . He lives in Wakefield, England, in the United Kingdom.
Large Maine Coons can overlap in length with Eurasian lynxes, although with a much lighter build and lower height.
Polydactylism
Many of the original Maine Coon cats that inhabited the New England area possessed a trait known as polydactylism (having one or more extra toes on a paw). With the 1970s revival of interest in the breed, Maine Coon cats were noted to show an increased incidence of polydactylism compared to other breeds. Subsequently, breeders of show-standard cats were advised to regard this variation as undesirable and to offer affected kittens as household pets. The trait later became separately certified by some organizations, such as The International Cat Association (TICA). Meanwhile, in a growing number of cat fancy competitions, the trait is no longer marked down.
Polydactylism is rarely, if ever, seen in Maine Coons in the show ring, since it is not allowed by competition standards. The gene for polydactylism is a simple autosomal dominant gene; because the trait stems from a genetic anomaly, breeding for it is discouraged. Polydactyly in Maine Coon cats is characterised by broad phenotypic diversity, affecting not only digit number and conformation but also carpus and tarsus conformation. The trait was almost eradicated from the breed because it was an automatic disqualifier in show rings. Some private organizations and breeders work to preserve polydactylism in Maine Coon cats.
Health
Life expectancy
Pet insurance data from a Swedish study covering the years 2003–2006 put the median lifespan of the Maine Coon at more than 12.5 years: 74% lived to 10 years or more and 54% lived to 12.5 years or more. A UK study found a life expectancy of 9.71 years for the breed, compared to 11.74 years for cats overall.
Heart
Hypertrophic cardiomyopathy (HCM) has been observed in Maine Coon populations. A mutation in the MYBPC3 gene found in Maine Coons has been associated with HCM.
Of all the Maine Coons tested for the MyBPC mutation at the Veterinary Cardiac Genetics Lab at the College of Veterinary Medicine at Washington State University, approximately one-third tested positive. Not all cats that test positive will have clinical signs of the disease, and some Maine Coon cats with clinical evidence of hypertrophic cardiomyopathy test negative for this mutation, strongly suggesting that a second mutation exists in the breed. The HCM prevalence was found to be 10.1% (95% CI 5.8–14.3%) in this study. Early growth and nutrition, larger body size, and obesity may be environmental modifiers of genetic predisposition to HCM.
Kidney
Polycystic kidney disease (PKD) is an inherited condition in cats that causes multiple cysts (pockets of fluid) to form in the kidneys. These cysts are present from birth. Initially, they are very small, but they grow larger over time and may eventually disrupt kidney function, resulting in kidney failure. While renal cysts are observed with a low incidence in Maine Coons, PKD appears to be a misnomer in this particular breed. In a 2013 study spanning 8 years, renal cysts were documented by ultrasound in 7 of 187 healthy Maine Coons enrolled in a pre-breeding screening programme. The cysts were mostly single and unilateral (6/7, 85.7%), small (mean 3.6 mm in diameter), and located at the corticomedullary junction (4/6, 66.7%), thus differing in size, number, and location from those observed in Persian-related breeds. In the same study, all six Maine Coon cats with renal cysts tested negative for the PKD1 mutation, showing the disease in these cats to be unrelated to the PKD observed in Persians and related breeds. Gene sequencing of these cats failed to demonstrate any common genetic sequences. Gendron et al. found that 'Maine Coon PKD' represents a form of juvenile nephropathy other than PKD.
Skeletal, joint and muscle
Hip dysplasia is an abnormality of the hip joint which can cause crippling lameness and arthritis. The cats most commonly affected with hip dysplasia tend to be males of the larger, big-boned breeds such as Persians and Maine Coons. The relatively smaller size and weight of cats frequently results in symptoms that are less pronounced. X-rays submitted to the Orthopedic Foundation for Animals (OFA) between 1974 and 2011 indicate that 24.3% of Maine Coons in the database were dysplastic. Dysplasia was more severe in bilateral than unilateral cases and with increasing age.
The Maine Coon is one of the breeds more commonly affected by spinal muscular atrophy. An autosomal recessive mutation affecting both the LIX1 and LNPEP genes is responsible for the condition in the breed.
Other
Maine Coons also seem to be predisposed to develop entropion, mainly on the lateral aspect of the eyelids, which can lead to corneal irritation and ulceration, and may require surgery.
70048 | https://en.wikipedia.org/wiki/Rectangle | Rectangle | In Euclidean plane geometry, a rectangle is a rectilinear convex polygon or a quadrilateral with four right angles. It can also be defined as: an equiangular quadrilateral, since equiangular means that all of its angles are equal (360°/4 = 90°); or a parallelogram containing a right angle. A rectangle with four sides of equal length is a square. The term "oblong" is used to refer to a non-square rectangle. A rectangle with vertices ABCD would be denoted as .
The word rectangle comes from the Latin rectangulus, which is a combination of rectus (as an adjective, right, proper) and angulus (angle).
A crossed rectangle is a crossed (self-intersecting) quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals (therefore only two sides are parallel). It is a special case of an antiparallelogram, and its angles are not right angles and not all equal, though opposite angles are equal. Other geometries, such as spherical, elliptic, and hyperbolic, have so-called rectangles with opposite sides equal in length and equal angles that are not right angles.
Rectangles are involved in many tiling problems, such as tiling the plane by rectangles or tiling a rectangle by polygons.
Characterizations
A convex quadrilateral is a rectangle if and only if it is any one of the following:
a parallelogram with at least one right angle
a parallelogram with diagonals of equal length
a parallelogram ABCD where triangles ABD and DCA are congruent
an equiangular quadrilateral
a quadrilateral with four right angles
a quadrilateral where the two diagonals are equal in length and bisect each other
a convex quadrilateral with successive sides a, b, c, d whose area is ¼(a + c)(b + d).
a convex quadrilateral with successive sides a, b, c, d whose area is ½√((a² + c²)(b² + d²))
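One characterization above, a quadrilateral whose two diagonals are equal in length and bisect each other, translates directly into a numeric test. A minimal Python sketch (the function name and tolerance are illustrative, not from the article):

```python
import math

def is_rectangle(p, q, r, s, tol=1e-9):
    """Test whether the quadrilateral with vertices p, q, r, s (taken in
    order) is a rectangle, using the characterization that the two
    diagonals are equal in length and bisect each other."""
    # The diagonals are p-r and q-s; their midpoints must coincide.
    mid_pr = ((p[0] + r[0]) / 2, (p[1] + r[1]) / 2)
    mid_qs = ((q[0] + s[0]) / 2, (q[1] + s[1]) / 2)
    diagonals_bisect = math.dist(mid_pr, mid_qs) < tol
    # The diagonals must also be equal in length.
    diagonals_equal = abs(math.dist(p, r) - math.dist(q, s)) < tol
    return diagonals_bisect and diagonals_equal
```

For the axis-aligned rectangle (0,0), (4,0), (4,3), (0,3) this returns True; for the non-rectangular parallelogram (0,0), (4,0), (5,3), (1,3) the diagonals bisect each other but differ in length, so it returns False.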
Classification
Traditional hierarchy
A rectangle is a special case of a parallelogram in which each pair of adjacent sides is perpendicular.
A parallelogram is a special case of a trapezium (known as a trapezoid in North America) in which both pairs of opposite sides are parallel and equal in length.
A trapezium is a convex quadrilateral which has at least one pair of parallel opposite sides.
A convex quadrilateral is:
Simple: The boundary does not cross itself.
Star-shaped: The whole interior is visible from a single point, without crossing any edge.
Alternative hierarchy
De Villiers defines a rectangle more generally as any quadrilateral with axes of symmetry through each pair of opposite sides. This definition includes both right-angled rectangles and crossed rectangles. Each has an axis of symmetry parallel to and equidistant from a pair of opposite sides, and another which is the perpendicular bisector of those sides, but, in the case of the crossed rectangle, the first axis is not an axis of symmetry for either side that it bisects.
Quadrilaterals with two axes of symmetry, each through a pair of opposite sides, belong to the larger class of quadrilaterals with at least one axis of symmetry through a pair of opposite sides. These quadrilaterals comprise isosceles trapezia and crossed isosceles trapezia (crossed quadrilaterals with the same vertex arrangement as isosceles trapezia).
Properties
Symmetry
A rectangle is cyclic: all corners lie on a single circle.
It is equiangular: all its corner angles are equal (each of 90 degrees).
It is isogonal or vertex-transitive: all corners lie within the same symmetry orbit.
It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).
Rectangle-rhombus duality
The dual polygon of a rectangle is a rhombus, as shown in the table below.
The figure formed by joining, in order, the midpoints of the sides of a rectangle is a rhombus and vice versa.
Miscellaneous
A rectangle is a rectilinear polygon: its sides meet at right angles.
A rectangle in the plane can be defined by five independent degrees of freedom consisting, for example, of three for position (comprising two of translation and one of rotation), one for shape (aspect ratio), and one for overall size (area).
Two rectangles, neither of which will fit inside the other, are said to be incomparable.
Formulae
If a rectangle has length ℓ and width w, then:
it has area A = ℓw;
it has perimeter P = 2ℓ + 2w = 2(ℓ + w);
each diagonal has length d = √(ℓ² + w²); and
when ℓ = w, the rectangle is a square.
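The standard formulae for a rectangle's area, perimeter, and diagonal can be sketched in Python (names are illustrative):

```python
import math

def rectangle_properties(length, width):
    """Return the area, perimeter, and diagonal length of a rectangle
    with the given side lengths."""
    area = length * width
    perimeter = 2 * (length + width)
    diagonal = math.hypot(length, width)  # sqrt(length**2 + width**2)
    return area, perimeter, diagonal

# A 3 x 4 rectangle has area 12, perimeter 14, and diagonal 5.
```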
Theorems
The isoperimetric theorem for rectangles states that among all rectangles of a given perimeter, the square has the largest area.
The midpoints of the sides of any quadrilateral with perpendicular diagonals form a rectangle.
A parallelogram with equal diagonals is a rectangle.
The Japanese theorem for cyclic quadrilaterals states that the incentres of the four triangles determined by the vertices of a cyclic quadrilateral taken three at a time form a rectangle.
The British flag theorem states that, for a rectangle with vertices denoted A, B, C, and D and any point P in the same plane as the rectangle, AP² + CP² = BP² + DP².
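The British flag theorem (AP² + CP² = BP² + DP² for any point P in the plane of rectangle ABCD) can be checked numerically; a small Python sketch with arbitrarily chosen coordinates:

```python
def british_flag_gap(P, A, B, C, D):
    """Return |AP^2 + CP^2 - (BP^2 + DP^2)| for point P and the
    rectangle with vertices A, B, C, D taken in order."""
    def sq_dist(u, v):
        return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    return abs(sq_dist(A, P) + sq_dist(C, P) - sq_dist(B, P) - sq_dist(D, P))

# Rectangle with corners A(0,0), B(5,0), C(5,3), D(0,3); the gap is
# (numerically) zero for any point P, inside or outside the rectangle.
A, B, C, D = (0, 0), (5, 0), (5, 3), (0, 3)
assert british_flag_gap((2.5, 1.5), A, B, C, D) < 1e-9
assert british_flag_gap((-7.0, 11.0), A, B, C, D) < 1e-9
```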
For every convex body C in the plane, we can inscribe a rectangle r in C such that a homothetic copy R of r is circumscribed about C, the positive homothety ratio is at most 2, and 0.5 × Area(R) ≤ Area(C) ≤ 2 × Area(r).
There exists a unique rectangle with sides a and b, where a is less than b, with two ways of being folded along a line through its center such that the area of overlap is minimized and each fold yields a different shape: a triangle and a pentagon. The unique ratio of side lengths is .
Crossed rectangles
A crossed quadrilateral (self-intersecting) consists of two opposite sides of a non-self-intersecting quadrilateral along with the two diagonals. Similarly, a crossed rectangle is a crossed quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals. It has the same vertex arrangement as the rectangle. It appears as two identical triangles with a common vertex, but the geometric intersection is not considered a vertex.
A crossed quadrilateral is sometimes likened to a bow tie or butterfly, sometimes called an "angular eight". A three-dimensional rectangular wire frame that is twisted can take the shape of a bow tie.
The interior of a crossed rectangle can have a polygon density of ±1 in each triangle, dependent upon the winding orientation as clockwise or counterclockwise.
A crossed rectangle may be considered equiangular if right and left turns are allowed. As with any crossed quadrilateral, the sum of its interior angles is 720°, allowing for internal angles to appear on the outside and exceed 180°.
A rectangle and a crossed rectangle are quadrilaterals with the following properties in common:
Opposite sides are equal in length.
The two diagonals are equal in length.
It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).
Other rectangles
In spherical geometry, a spherical rectangle is a figure whose four edges are great circle arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length. The surface of a sphere in Euclidean solid geometry is a non-Euclidean surface in the sense of elliptic geometry. Spherical geometry is the simplest form of elliptic geometry.
In elliptic geometry, an elliptic rectangle is a figure in the elliptic plane whose four edges are elliptic arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length.
In hyperbolic geometry, a hyperbolic rectangle is a figure in the hyperbolic plane whose four edges are hyperbolic arcs which meet at equal angles less than 90°. Opposite arcs are equal in length.
Tessellations
The rectangle is used in many periodic tessellation patterns, in brickwork, for example, these tilings:
Squared, perfect, and other tiled rectangles
A rectangle tiled by squares, rectangles, or triangles is said to be a "squared", "rectangled", or "triangulated" (or "triangled") rectangle respectively. The tiled rectangle is perfect if the tiles are similar and finite in number and no two tiles are the same size. If two such tiles are the same size, the tiling is imperfect. In a perfect (or imperfect) triangled rectangle the triangles must be right triangles. A database of all known perfect rectangles, perfect squares and related shapes can be found at squaring.net. The lowest number of squares needed for a perfect tiling of a rectangle is 9, and the lowest number needed for a perfect tiling of a square is 21, found in 1978 by computer search.
A rectangle has commensurable sides if and only if it is tileable by a finite number of unequal squares. The same is true if the tiles are unequal isosceles right triangles.
The tilings of rectangles by other tiles which have attracted the most attention are those by congruent non-rectangular polyominoes, allowing all rotations and reflections. There are also tilings by congruent polyaboloes.
Unicode
The following Unicode code points depict rectangles:
U+25AC ▬ BLACK RECTANGLE
U+25AD ▭ WHITE RECTANGLE
U+25AE ▮ BLACK VERTICAL RECTANGLE
U+25AF ▯ WHITE VERTICAL RECTANGLE
70117 | https://en.wikipedia.org/wiki/Floodplain | Floodplain | A floodplain or flood plain or bottomlands is an area of land adjacent to a river. Floodplains stretch from the banks of a river channel to the base of the enclosing valley, and experience flooding during periods of high discharge. The soils usually consist of clays, silts, sands, and gravels deposited during floods.
Because of regular flooding, floodplains frequently have high soil fertility since nutrients are deposited with the flood waters. This can encourage farming; some important agricultural regions, such as the Nile and Mississippi river basins, heavily exploit floodplains. Agricultural and urban regions have developed near or on floodplains to take advantage of the rich soil and freshwater. However, the risk of inundation has led to increasing efforts to control flooding.
Formation
Most floodplains are formed by deposition on the inside of river meanders and by overbank flow.
Wherever the river meanders, the flowing water erodes the river bank on the outside of the meander. At the same time, sediments are simultaneously deposited in a bar on the inside of the meander. This is described as lateral accretion since the deposition builds the point bar laterally into the river channel. Erosion on the outside of the meander usually closely balances deposition on the inside so that the channel shifts in the direction of the meander without changing significantly in width. The point bar is built up to a level very close to that of the river banks. Significant net erosion of sediments occurs only when the meander cuts into higher ground. The overall effect is that, as the river meanders, it creates a level flood plain composed mostly of point bar deposits. The rate at which the channel shifts varies greatly, with reported rates ranging from too slow to measure to as much as per year for the Kosi River of India.
Overbank flow takes place when the river carries more water than can be accommodated by the river channel. Flow over the banks of the river deposits a thin veneer of sediments that is coarsest and thickest close to the channel. This is described as vertical accretion, since the deposits build upwards. In undisturbed river systems, overbank flow is frequent, typically occurring every one to two years, regardless of climate or topography. Measurements from a three-day flood of the Meuse and Rhine Rivers in 1993 found average sedimentation rates in the floodplain of between 0.57 and 1.0 kg/m². Higher rates were found on the levees (4 kg/m² or more) and in low-lying areas (1.6 kg/m²).
Sedimentation from the overbank flow is concentrated on natural levees, crevasse splays, and in wetlands and shallow lakes of flood basins. Natural levees are ridges along river banks that form from rapid deposition from the overbank flow. Most of the suspended sand is deposited on the levees, leaving the silt and clay sediments to be deposited as floodplain mud further from the river. Levees are typically built up enough to be relatively well-drained compared with nearby wetlands, and levees in non-arid climates are often heavily vegetated.
Crevasses are formed by breakout events from the main river channel. The river bank fails, and floodwaters scour a channel. Sediments from the crevasse spread out as delta-shaped deposits with numerous distributary channels. Crevasse formation is most common in sections of rivers where the river bed is accumulating sediments (aggrading).
Repeated flooding eventually builds up an alluvial ridge, whose natural levees and abandoned meander loops may stand well above most of the floodplain. The alluvial ridge is topped by a channel belt formed by successive generations of channel migration and meander cutoff. At much longer intervals, the river may abandon the channel belt and build a new one at another position on the floodplain. This process is called avulsion and occurs at intervals of 10–1000 years. Historical avulsions leading to catastrophic flooding include the 1855 Yellow River flood and the 2008 Kosi River flood.
Floodplains can form around rivers of any kind or size. Even relatively straight stretches of river are capable of producing floodplains. Mid-channel bars in braided rivers migrate downstream through processes resembling those in point bars of meandering rivers and can build up a floodplain.
The quantity of sediments in a floodplain greatly exceeds the river load of sediments. Thus, floodplains are an important storage site for sediments during their transport from where they are generated to their ultimate depositional environment.
When the rate at which the river is cutting downwards becomes great enough that overbank flows become infrequent, the river is said to have abandoned its floodplain. Portions of the abandoned floodplain may be preserved as fluvial terraces.
Ecology
Floodplains support diverse and productive ecosystems. They are characterized by considerable variability in space and time, which in turn produces some of the most species-rich of ecosystems. From the ecological perspective, the most distinctive aspect of floodplains is the flood pulse associated with annual floods, and so the floodplain ecosystem is defined as the part of the river valley that is regularly flooded and dried.
Floods bring in detrital material rich in nutrients and release nutrients from dry soil as it is flooded. The decomposition of terrestrial plants submerged by the floodwaters adds to the nutrient supply. The flooded littoral zone of the river (the zone closest to the river bank) provides an ideal environment for many aquatic species, so the spawning season for fish often coincides with the onset of flooding. Fish must grow quickly during the flood to survive the subsequent drop in water level. As the floodwaters recede, the littoral experiences blooms of microorganisms, while the banks of the river dry out and terrestrial plants germinate to stabilize the bank.
The biota of floodplains has high annual growth and mortality rates, which is advantageous for the rapid colonization of large areas of the floodplain. This allows them to take advantage of shifting floodplain geometry. For example, floodplain trees are fast-growing and tolerant of root disturbance. Opportunists (such as birds) are attracted to the rich food supply provided by the flood pulse.
Floodplain ecosystems have distinct biozones. In Europe, as one moves away from the river, the successive plant communities are bank vegetation (usually annuals); sedge and reeds; willow shrubs; willow-poplar forest; oak-ash forest; and broadleaf forest. Human disturbance creates wet meadows that replace much of the original ecosystem. The biozones reflect a soil moisture and oxygen gradient that in turn corresponds to a flooding frequency gradient. The primeval floodplain forests of Europe were dominated by oak (60%) elm (20%) and hornbeam (13%), but human disturbance has shifted the makeup towards ash (49%) with maple increasing to 14% and oak decreasing to 25%.
Semiarid floodplains have a much lower species diversity. Species are adapted to alternating drought and flood. Extreme drying can destroy the ability of the floodplain ecosystem to shift to a healthy wet phase when flooded.
Floodplain forests constituted 1% of the landscape of Europe in the 1800s. Much of this has been cleared by human activity, though floodplain forests have been impacted less than other kinds of forests. This makes them important refugia for biodiversity. Human destruction of floodplain ecosystems is largely a result of flood control, hydroelectric development (such as reservoirs), and conversion of floodplains to agriculture use. Transportation and waste disposal also have detrimental effects. The result is the fragmentation of these ecosystems, resulting in loss of populations and diversity and endangering the remaining fragments of the ecosystem. Flood control creates a sharper boundary between water and land than in undisturbed floodplains, reducing physical diversity. Floodplain forests protect waterways from erosion and pollution and reduce the impact of floodwaters.
The disturbance by humans of temperate floodplain ecosystems frustrates attempts to understand their natural behavior. Tropical rivers are less impacted by humans and provide models for temperate floodplain ecosystems, which are thought to share many of their ecological attributes.
Flood control
Excluding famines and epidemics, some of the worst natural disasters in history (measured by fatalities) have been river floods, particularly in the Yellow River in China – see list of deadliest floods. The worst of these, and the worst natural disaster (excluding famine and epidemics), was the 1931 China floods, estimated to have killed millions. This had been preceded by the 1887 Yellow River flood, which killed around one million people and is the second-worst natural disaster in history.
The extent of floodplain inundation depends partly on flood magnitude, defined by the return period.
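A flood's return period T corresponds to an annual exceedance probability of 1/T, so the chance of seeing at least one such flood over a longer horizon follows from the complement rule, assuming independent years. A short Python sketch (the function name is illustrative):

```python
def prob_at_least_one_flood(return_period_years, horizon_years):
    """Probability of at least one flood of the given return period
    occurring within the horizon, assuming each year is independent
    with annual exceedance probability 1 / return_period_years."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

# Over a 30-year horizon, a "100-year" flood has roughly a 26% chance
# of occurring at least once: 1 - 0.99**30 ≈ 0.26.
```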
In the United States, the Federal Emergency Management Agency (FEMA) manages the National Flood Insurance Program (NFIP). The NFIP offers insurance to properties located within a flood-prone area, as defined by the Flood Insurance Rate Map (FIRM), which depicts various flood risks for a community. The FIRM typically focuses on the delineation of the 100-year flood inundation area, also known within the NFIP as the Special Flood Hazard Area.
Where a detailed study of a waterway has been done, the 100-year floodplain will also include the floodway, the critical portion of the floodplain which includes the stream channel and any adjacent areas that must be kept free of encroachments that might block flood flows or restrict storage of flood waters. Another commonly encountered term is the Special Flood Hazard Area, which is any area subject to inundation by a 100-year flood. A problem is that any alteration of the watershed upstream of the point in question can potentially affect the ability of the watershed to handle water, and thus potentially affects the levels of the periodic floods. A large shopping center and parking lot, for example, may raise the levels of 5-year, 100-year, and other floods, but the maps are rarely adjusted and are frequently rendered obsolete by subsequent development.
In order for a flood-prone property to qualify for government-subsidized insurance, a local community must adopt an ordinance that protects the floodway and requires that new residential structures built in Special Flood Hazard Areas be elevated to at least the level of the 100-year flood. Commercial structures can be elevated or floodproofed to or above this level. In some areas without detailed study information, structures may be required to be elevated to at least two feet above the surrounding grade. Many State and local governments have, in addition, adopted floodplain construction regulations which are more restrictive than those mandated by the NFIP. The US government also sponsors flood hazard mitigation efforts to reduce flood impacts. California's Hazard Mitigation Program is one funding source for mitigation projects. A number of whole towns such as English, Indiana, have been completely relocated to remove them from the floodplain. Other smaller-scale mitigation efforts include acquiring and demolishing flood-prone buildings or flood-proofing them.
In some floodplains, such as the Inner Niger Delta of Mali, annual flooding events are a natural part of the local ecology and rural economy, allowing for the raising of crops through recessional agriculture. However, in Bangladesh, which occupies the Ganges Delta, the advantages provided by the richness of the alluvial soil of the floodplain are severely offset by frequent floods brought on by cyclones and annual monsoon rains. These extreme weather events cause severe economic disruption and loss of human life in the densely-populated region.
Floodplain soils
Oxygen in floodplain soils
Floodplain soil composition is unique and varies widely based on microtopography. Floodplain forests have high topographic heterogeneity which creates variation in localized hydrologic conditions. Soil moisture within the upper 30 cm of the soil profile also varies widely based on microtopography, which affects oxygen availability. Floodplain soil stays aerated for long periods in between flooding events, but during flooding, saturated soil can become oxygen-depleted if it stands stagnant for long enough. More soil oxygen is available at higher elevations farther from the river. Floodplain forests generally experience alternating periods of aerobic and anaerobic soil microbe activity, affecting fine root development and desiccation.
Phosphorus cycling in floodplain soils
Floodplains have high buffering capacity for phosphorus to prevent nutrient loss to river outputs. Phosphorus nutrient loading is a problem in freshwater systems. Much of the phosphorus in freshwater systems comes from municipal wastewater treatment plants and agricultural runoff. Stream connectivity controls whether phosphorus cycling is mediated by floodplain sediments or by external processes. Under conditions of stream connectivity, phosphorus is better able to be cycled, and sediments and nutrients are more readily retained. Water in freshwater streams ends up in either short-term storage in plants or algae or long-term in sediments. Wet/dry cycling within the floodplain greatly impacts phosphorus availability because it alters water level, redox state, pH, and physical properties of minerals. Dry soils that were previously inundated have reduced availability of phosphorus and increased affinity for obtaining phosphorus. Human floodplain alterations also impact the phosphorus cycle. Particulate phosphorus and soluble reactive phosphorus (SRP) can contribute to algal blooms and toxicity in waterways when the nitrogen-to-phosphorus ratios are altered farther upstream. In areas where the phosphorus load is primarily particulate phosphorus, like the Mississippi River, the most effective ways of removing phosphorus upstream are sedimentation, soil accretion, and burial. In basins where SRP is the primary form of phosphorus, biological uptake in floodplain forests is the best way of removing nutrients. Phosphorus can transform between SRP and particulate phosphorus depending on ambient conditions or processes like decomposition, biological uptake, redoximorphic release, and sedimentation and accretion. In either phosphorus form, floodplain forests are beneficial as phosphorus sinks, and the human-caused disconnect between floodplains and rivers exacerbates the phosphorus overload.
Environmental pollutants in floodplain soils
Floodplain soils tend to be high in eco-pollutants, especially persistent organic pollutant (POP) deposition. Proper understanding of the distribution of soil contaminants is complex because of high variation in microtopography and soil texture within floodplains.
| Physical sciences | Fluvial landforms | null |
70144 | https://en.wikipedia.org/wiki/Cypriniformes | Cypriniformes | Cypriniformes is an order of ray-finned fish, which includes many families and genera of cyprinid (carps and their kin) fish, such as barbs, gobies, loaches, botias, and minnows, among others. Cypriniformes is an "order within an order", placed under the superorder Ostariophysi alongside the other ostariophysan orders. The order contains 11–12 families (though some authorities have listed as many as 23), over 400 genera, and more than 4,250 named species; new species are regularly described, and new genera are recognized frequently. Cypriniforms are most diverse in South and Southeast Asia and are entirely absent from Australia and South America. At 112 years old, the longest-lived cypriniform fish documented is the bigmouth buffalo.
Their closest living relatives are the Characiformes (characins, tetras and their kin), the Gymnotiformes (electric eel and American knifefishes), and the Siluriformes (catfishes).
Description
Like other orders of the Ostariophysi, fishes of Cypriniformes possess a Weberian apparatus. They differ from most of their relatives in having only a dorsal fin on their backs; most other fishes of Ostariophysi have a small, fleshy adipose fin behind the dorsal fin. Other differences are the Cypriniformes' unique kinethmoid, a small median bone in the snout, and the lack of teeth in the mouth. Instead, they have convergent structures called pharyngeal teeth in the throat. While other groups of fish, such as cichlids, also possess pharyngeal teeth, the cypriniformes' teeth grind against a chewing pad on the base of the skull, rather than an upper pharyngeal jaw.
The most notable family placed here is the Cyprinidae (carps and minnows), which make up two-thirds of the order's diversity. This is one of the largest families of fish, and is widely distributed across Africa, Eurasia, and North America. Most species are strictly freshwater inhabitants, but some are found in brackish water, such as roach and bream. At least one species is found in saltwater, the Pacific redfin, Tribolodon brandtii. Brackish water and marine cyprinids are invariably anadromous, swimming upstream into rivers to spawn. The mountain carps, sometimes separated as the family Psilorhynchidae, seem to be specially adapted members of the Cyprinidae.
The Balitoridae and Gyrinocheilidae are families of mountain-stream fishes feeding on algae and small invertebrates. They are found only in tropical and subtropical Asia. While the former are a speciose group, the latter contain only a handful of species. The suckers (Catostomidae) are found in temperate North America and eastern Asia. These large fishes are similar to carps in appearance and ecology. Members of the Cobitidae are common across Eurasia and parts of North Africa. A midsized group like the suckers, they are rather similar to catfish in appearance and behaviour, feeding primarily off the substrate and equipped with barbels to help them locate food at night or in murky conditions. Fishes in the families Cobitidae, Balitoridae, Botiidae, and Gyrinocheilidae are called loaches, although the last do not seem to belong to the lineage of "true" loaches, but are related to the suckers.
Systematics
Historically, the Cypriniformes included all the forms now placed in the superorder Ostariophysi except the catfishes, which were placed in the order Siluriformes. By this definition the Cypriniformes were paraphyletic, so the orders Gonorhynchiformes, Characiformes (characins and allies), and Gymnotiformes (knifefishes and electric eels) have since been separated out to form their own monophyletic orders.
The families of Cypriniformes are traditionally divided into two superfamilies. The superfamily Cyprinioidea contains the carps and minnows (Cyprinidae) and also the mountain carps (family Psilorhynchidae). In 2012, Maurice Kottelat reviewed the superfamily Cobitoidea; under his revision it consists of the following families: hillstream loaches (Balitoridae), Barbuccidae, Botiidae, suckers (Catostomidae), true loaches (Cobitidae), Ellopostomatidae, Gastromyzontidae, sucking loaches (Gyrinocheilidae), stone loaches (Nemacheilidae), Serpenticobitidae, and long-finned loaches (Vaillantellidae).
The Catostomoidea is usually treated as a junior synonym of the Cobitoidea, but the Catostomidae and Gyrinocheilidae could be split off into a distinct superfamily; the Catostomoidea might be closer relatives of the carps and minnows than of the "true" loaches. While the Cyprinioidea seem more "primitive" than the loach-like forms, they were apparently successful enough never to shift from the original ecological niche of the basal Ostariophysi. Yet at least two major radiations apparently branched off from this ecomorphologically conservative main lineage, diversifying from the lowlands into torrential river habitats and acquiring similar habitus and adaptations in the process.
The mountain carps are highly apomorphic Cyprinidae, perhaps close to the true carps (Cyprininae), or maybe to the danionins. While some details about the phylogenetic structure of this massively diverse family are known – e.g., that the Cultrinae and Leuciscinae are rather close relatives and stand apart from the Cyprininae – no good consensus yet exists on how the main lineages are interrelated. A systematic list, from the most ancient to the most modern lineages, can thus be given as:
Family †Jianghanichthyidae Liu, Chang, Wilson & Murray, 2015 (Paleocene to Eocene of China)
Suborder Gyrinocheiloidei Betancur-R, et al., 2017
Family Gyrinocheilidae Gill, 1907 (algae eaters)
Suborder Catostomoidei Betancur-R, et al., 2017
Family Catostomidae Agassiz, 1850 (suckers)
Suborder Cobitoidei Fitzinger, 1832
Family Botiidae Berg, 1940 (pointface loaches)
Family Vaillantellidae Nalbant & Bănărescu, 1977 (longfin loaches)
Family Cobitidae Swainson, 1838 (spined loaches)
Family Barbuccidae Kottelat, 2012 (scooter loaches)
Family Gastromyzontidae Fowler, 1905 (hillstream loaches)
Family Serpenticobitidae Kottelat, 2012 (snake loaches)
Family Balitoridae Swainson, 1839 (river loaches)
Family Ellopostomatidae Bohlen & Šlechtová, 2009 (square-head loaches)
Family Nemacheilidae Regan, 1911 (brook loaches)
Suborder Cyprinoidei Fitzinger, 1832
Family Paedocyprididae Mayden & W.J. Chen, 2010 (tiny carps)
Family Psilorhynchidae Hora, 1926 (mountain carps)
Family Cyprinidae Rafinesque, 1815 (carps)
Family Sundadanionidae Mayden & Chen, 2010 (tiny danios)
Family Danionidae Bleeker, 1863 (danionids)
Family Leptobarbidae Bleeker, 1864 (cigar barbs)
Family Xenocyprididae Günther, 1868 (East Asian minnows or sharpbellies)
Family Tincidae D. S. Jordan, 1878 (tenches)
Family Acheilognathidae Bleeker, 1863 (bitterlings)
Family Gobionidae Bleeker, 1863 (freshwater gudgeons)
Family Tanichthyidae Mayden & Chen, 2010 (mountain minnows)
Family Leuciscidae Bonaparte, 1835 (minnows)
Phylogeny
Phylogeny based on the following works:
Evolution
Cypriniformes include the most primitive of the Ostariophysi in the narrow sense (i.e. excluding the Gonorynchiformes). This is evidenced not only by physiological details, but also by their great distribution, which indicates they had the longest time to spread. The earliest that Cypriniformes might have diverged from the Characiphysi (Characiformes and relatives) is thought to be the Early Triassic, about 250 million years ago (Mya). However, their divergence probably occurred only with the breakup of Pangaea in the Jurassic, maybe 160 Mya. By 110 Mya, plate-tectonic evidence indicates that the Laurasian Cypriniformes must have been distinct from their Gondwanan relatives.
The Cypriniformes are thought to have originated in Southeast Asia, where the greatest diversity of this group is found today. The alternative hypothesis is that they began in South America, like the other otophysans. If this were the case, they would have spread to Asia through Africa or North America before the continents split up, for these are purely freshwater fishes. As the Characiformes began to diversify and spread, they may have outcompeted the South American basal cypriniforms; in Africa, more advanced cypriniforms survive and coexist with characiforms.
The earliest cypriniform fossils are already assignable to the living family Catostomidae; from the Paleocene of Alberta, they are roughly 60 million years old. During the Eocene (55–35 Mya), catostomids and cyprinids spread throughout Asia; the earliest members of the cyprinid subfamilies Barbinae and Danioninae are known from the Eocene Sangkarewang Formation of Indonesia, in addition to possibly Smilogastrinae and Labeoninae. The extinct family Jianghanichthyidae is known from the Eocene of China. In the Oligocene, around 30 Mya, advanced cyprinids began to outcompete catostomids wherever they were sympatric, causing a decline of the suckers. Cyprinids reached North America and Europe about the same time, and Africa in the early Miocene (some 23–20 Mya). The cypriniforms spread to North America through the Bering land bridge, which formed and disappeared again several times during the many millions of years of cypriniform evolution.
Relationship with humans
The Cyprinidae in particular are important in a variety of ways. Many species are important food fish, particularly in Europe and Asia. Some are also important as aquarium fish, of which the goldfish and koi are perhaps the most celebrated. The other families are of less commercial importance. The Catostomidae have some importance in angling, and some "loaches" are bred for the international aquarium fish trade.
Accidentally or deliberately introduced populations of common carp (Cyprinus carpio) and grass carp (Ctenopharyngodon idella) are found on all continents except Antarctica. In some cases, these exotic species have a negative impact on the environment. Carp in particular stir up the riverbed, reducing the clarity of the water and making plant growth difficult.
In science, one of the most famous members of the Cypriniformes is the zebrafish (Danio rerio). The zebrafish is one of the most important vertebrate model organisms in biological and biochemical sciences, being used in many kinds of experiments. During early development, the zebrafish has a nearly transparent body, so it is ideal for studying developmental biology. It is also used for the elucidation of biochemical signaling pathways. They are also good pets, but can be shy in bright light and crowded tanks.
Threats and extinction
Habitat destruction, damming of upland rivers, pollution, and in some cases overfishing for food or the pet trade have driven some Cypriniformes to the brink of extinction or even beyond. In particular, Cyprinidae of southwestern North America have been severely affected; a considerable number went entirely extinct after settlement by Europeans. For example, in 1900 the thicktail chub (Gila crassicauda) was the most common freshwater fish found in California; 70 years later, not a single living individual existed.
The well-known red-tailed black shark (Epalzeorhynchos bicolor) from the Mae Klong River of The Bridge on the River Kwai fame possibly only survives in captivity. Ironically, while pollution and other forms of overuse by humans have driven it from its native home, it is bred for the aquarium fish trade by the thousands. The Yarqon bleak (Acanthobrama telavivensis) from the Yarqon River had to be rescued into captivity from imminent extinction; new populations have apparently been established again successfully from captive stock. The Balitoridae and Cobitidae, meanwhile, contain a very large number of species about which essentially nothing is known except how they look and where they were first found.
Globally extinct Cypriniformes species are:
Acanthobrama hulensis
Gökçe balığı, Alburnus akili
Barbus microbarbis
Snake River sucker, Chasmistes muriei
Chondrostoma scodrense
Cyprinus yilongensis
Mexican dace, Evarra bustamantei
Plateau chub, Evarra eigenmanni
Endorheic chub, Evarra tlahuacensis
Thicktail chub, Gila crassicauda
Pahranagat spinedace, Lepidomeda altivelis
Harelip sucker, Moxostoma lacerum
Durango shiner, Notropis aulidion
Phantom shiner, Notropis orca
Salado shiner, Notropis saladonis
Clear Lake splittail, Pogonichthys ciscoides
Las Vegas dace, Rhinichthys deaconi
Stumptooth minnow, Stypodon signifer
Telestes ukliva
| Biology and health sciences | Cypriniformes | null |
70157 | https://en.wikipedia.org/wiki/Recycling | Recycling | Recycling is the process of converting waste materials into new materials and objects. This concept often includes the recovery of energy from waste materials. The recyclability of a material depends on its ability to reacquire the properties it had in its original state. It is an alternative to "conventional" waste disposal that can save material and help lower greenhouse gas emissions. It can also prevent the waste of potentially useful materials and reduce the consumption of fresh raw materials, reducing energy use, air pollution (from incineration) and water pollution (from landfilling).
Recycling is a key component of modern waste reduction and is the third component of the "Reduce, Reuse, and Recycle" waste hierarchy. It promotes environmental sustainability by removing raw material input and redirecting waste output in the economic system. There are some ISO standards related to recycling, such as ISO 15270:2008 for plastics waste and ISO 14001:2015 for environmental management control of recycling practice.
Recyclable materials include many kinds of glass, paper, cardboard, metal, plastic, tires, textiles, batteries, and electronics. The composting and other reuse of biodegradable waste—such as food and garden waste—is also a form of recycling. Materials for recycling are either delivered to a household recycling center or picked up from curbside bins, then sorted, cleaned, and reprocessed into new materials for manufacturing new products.
In ideal implementations, recycling a material produces a fresh supply of the same material—for example, used office paper would be converted into new office paper, and used polystyrene foam into new polystyrene. Some types of materials, such as metal cans, can be remanufactured repeatedly without losing their purity. With other materials, this is often difficult or too expensive (compared with producing the same product from raw materials or other sources), so "recycling" of many products and materials involves their reuse in producing different materials (for example, paperboard). Another form of recycling is the salvage of constituent materials from complex products, due to either their intrinsic value (such as lead from car batteries and gold from printed circuit boards), or their hazardous nature (e.g. removal and reuse of mercury from thermometers and thermostats).
History
Origins
Reusing materials has been a common practice for most of human history with recorded advocates as far back as Plato in the fourth century BC. During periods when resources were scarce, archaeological studies of ancient waste dumps show less household waste (such as ash, broken tools, and pottery), implying that more waste was recycled in place of new material. However, archaeological artefacts made from recyclable material, such as glass or metal, may neither be the original object nor resemble it, with the consequence that a successful ancient recycling economy can become invisible when recycling is synonymous with re-melting rather than reuse.
In pre-industrial times, there is evidence of scrap bronze and other metals being collected in Europe and melted down for continuous reuse. Paper recycling was first recorded in 1031 when Japanese shops sold repulped paper. In Britain dust and ash from wood and coal fires was collected by "dustmen" and downcycled as a base material for brick making. These forms of recycling were driven by the economic advantage of obtaining recycled materials instead of virgin material, and the need for waste removal in ever-more-densely populated areas. In 1813, Benjamin Law developed the process of turning rags into "shoddy" and "mungo" wool in Batley, Yorkshire, which combined recycled fibers with virgin wool. The West Yorkshire shoddy industry in towns such as Batley and Dewsbury lasted from the early 19th century to at least 1914.
Industrialization spurred demand for affordable materials. In addition to rags, ferrous scrap metals were coveted as they were cheaper to acquire than virgin ore. Railroads purchased and sold scrap metal in the 19th century, and the growing steel and automobile industries purchased scrap in the early 20th century. Many secondary goods were collected, processed and sold by peddlers who scoured dumps and city streets for discarded machinery, pots, pans, and other sources of metal. By World War I, thousands of such peddlers roamed the streets of American cities, taking advantage of market forces to recycle post-consumer materials into industrial production.
Manufacturers of beverage bottles, including Schweppes, began offering refundable recycling deposits in Great Britain and Ireland around 1800. An official recycling system with refundable deposits for bottles was established in Sweden in 1884, and for aluminum beverage cans in 1982; it led to recycling rates of 84–99%, depending on type (glass bottles can be refilled around 20 times).
Wartime
New chemical industries created in the late 19th century both invented new materials (e.g. Bakelite in 1907) and promised to transform valueless into valuable materials. Proverbially, you could not make a silk purse of a sow's ear—until the US firm Arthur D. Little published in 1921 "On the Making of Silk Purses from Sows' Ears", its research proving that when "chemistry puts on overalls and gets down to business [...] new values appear. New and better paths are opened to reach the goals desired."
Recycling—or "salvage", as it was then usually known—was a major issue for governments during World War II, where financial constraints and significant material shortages made it necessary to reuse goods and recycle materials. These resource shortages caused by the world wars, and other such world-changing events, greatly encouraged recycling. It became necessary for most homes to recycle their waste, allowing people to make the most of what was available. Recycling household materials also meant more resources were left available for war efforts. Massive government campaigns, such as the National Salvage Campaign in Britain and the Salvage for Victory campaign in the United States, occurred in every fighting nation, urging citizens to donate metal, paper, rags, and rubber as a patriotic duty.
Post-World War II
A considerable investment in recycling occurred in the 1970s due to rising energy costs. Recycling aluminium uses only 5% of the energy of virgin production. Glass, paper and other metals have less dramatic but significant energy savings when recycled.
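The energy arithmetic behind the 5% figure can be made concrete with a short calculation. The 5% ratio for aluminium comes from the text above; the absolute per-kilogram energy value below is a hypothetical round number used only for illustration, not a measured figure.

```python
# Illustrative energy-savings calculation for recycled vs. virgin production.
# The 5% ratio for aluminium is from the text; the 200 MJ/kg baseline is a
# hypothetical round figure chosen to make the arithmetic concrete.

def recycling_energy(virgin_energy_mj_per_kg, recycled_fraction):
    """Return (energy per kg for recycled production, energy saved per kg)."""
    recycled = virgin_energy_mj_per_kg * recycled_fraction
    saved = virgin_energy_mj_per_kg - recycled
    return recycled, saved

virgin = 200.0  # MJ/kg, hypothetical baseline for primary aluminium
recycled, saved = recycling_energy(virgin, 0.05)
print(recycled)  # 10.0  -> MJ/kg for recycled aluminium
print(saved)     # 190.0 -> MJ/kg saved, i.e. a 95% reduction
```

The same function applies to the "less dramatic but significant" savings for glass, paper, and other metals by changing `recycled_fraction`.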
Although consumer electronics have been popular since the 1920s, recycling them was almost unheard of until early 1991. The first electronic waste recycling scheme was implemented in Switzerland, beginning with collection of old refrigerators, then expanding to cover all devices. When these programs were created, many countries could not deal with the sheer quantity of e-waste, or its hazardous nature, and began to export the problem to developing countries without enforced environmental legislation. (For example, recycling computer monitors in the United States costs 10 times more than in China.) Demand for electronic waste in Asia began to grow when scrapyards found they could extract valuable substances such as copper, silver, iron, silicon, nickel, and gold during the recycling process. The 2000s saw a boom in both the sales of electronic devices and their growth as a waste stream: In 2002, e-waste grew faster than any other type of waste in the EU. This spurred investment in modern automated facilities to cope with the influx, especially after strict laws were implemented in 2003.
As of 2014, the European Union had about 50% of the world share of the waste and recycling industries, with over companies employing people and a turnover of €24 billion. EU countries are mandated to reach recycling rates of at least 50%; leading countries are already at around 65%. The overall EU average was 39% in 2013 and rose steadily thereafter, reaching 45% in 2015.
In 2015, the United Nations General Assembly set 17 Sustainable Development Goals. Goal 12, Responsible Consumption and Production, specifies 11 targets "to ensure sustainable consumption and production patterns". The fifth target, Target 12.5, is defined as substantially reducing waste generation by 2030, indicated by the National Recycling Rate.
In 2018, changes in the recycling industry have sparked a global "crisis". On 31 December 2017, China announced its "National Sword" policy, setting new standards for imports of recyclable material and banning materials deemed too "dirty" or "hazardous". The new policy caused drastic disruptions in the global recycling market, and reduced the prices of scrap plastic and low-grade paper. Exports of recyclable materials from G7 countries to China dropped dramatically, with many shifting to countries in southeast Asia. This generated significant concern about the recycling industry's practices and environmental sustainability. The abrupt shift caused countries to accept more materials than they could process, and raised fundamental questions about shipping waste from developed countries to countries with few environmental regulations—a practice that predated the crisis.
Health and environmental impact
Health impact
E-waste
According to the WHO (2023), “Every year millions of electrical and electronic devices are discarded ... a threat to the environment and to human health if they are not treated, disposed of, and recycled appropriately. Common items ... include computers ... e-waste are recycled using environmentally unsound techniques and are likely stored in homes and warehouses, dumped, exported or recycled under inferior conditions. When e-waste is treated using inferior activities, it can release as many as 1000 different chemical substances ... including harmful neurotoxicants such as lead.” A paper in the journal Sustainable Materials & Technologies remarks upon the difficulty of managing e-waste, particularly from home automation products, which, due to their becoming obsolete at a high rate, are putting increasing strain on recycling systems, which have not adapted to meet the recycling needs posed by this type of product.
Slag recycling
Copper slag is a by-product of recovering copper and nickel from their ores by pyrometallurgical processes; these ores usually contain other elements, including iron, cobalt, silica, and alumina. An estimated 2.2–3 tons of copper slag is generated per ton of copper produced, amounting to around 24.6 million tons of slag per year, which is regarded as waste.
Environmental impacts of slag include copper poisoning, which can lead to death from gastric hemorrhage if ingested by humans, and acute dermatitis upon skin exposure. Toxic elements may also be taken up by crops through the soil, subsequently spreading to animals and food sources and increasing the risk of cardiovascular diseases, cancer, cognitive impairment, chronic anemia, and damage to the kidneys, bones, nervous system, brain, and skin.
Gravel and grit from quarries have been more cost-effective substitutes because their sources lie closer to consumer markets. Trade between countries and the establishment of blast furnaces are helping to increase slag utilization, reducing waste and pollution.
Concrete recycling
Environmental impact
Economist Steven Landsburg, author of a paper entitled "Why I Am Not an Environmentalist", claimed that paper recycling actually reduces tree populations. He argues that because paper companies have incentives to replenish their forests, large demands for paper lead to large forests while reduced demand for paper leads to fewer "farmed" forests.
When foresting companies cut down trees, more are planted in their place; however, such farmed forests are inferior to natural forests in several ways. Farmed forests are not able to fix the soil as quickly as natural forests, which can cause widespread soil erosion, and they often require large amounts of fertilizer to maintain the soil while containing little tree and wildlife biodiversity compared to virgin forests. Also, the newly planted trees are not as big as the trees that were cut down, and the argument that there would be "more trees" is not compelling to forestry advocates when they are counting saplings.
In particular, wood from tropical rainforests is rarely harvested for paper because of its heterogeneity. According to the United Nations Framework Convention on Climate Change secretariat, the overwhelming direct causes of deforestation are subsistence farming (48% of deforestation) and commercial agriculture (32%), which are linked to food production, not paper.
Other non-conventional methods of material recycling, such as waste-to-energy (WtE) systems, have garnered increased attention in recent years because of the polarizing nature of their emissions. While many view them as a sustainable way to capture energy from material waste feedstocks, others cite numerous reasons why the technology has not been scaled globally.
Legislation
Supply
For a recycling program to work, a large, stable supply of recyclable material is crucial. Three legislative options have been used to create such supplies: mandatory recycling collection, container deposit legislation, and refuse bans. Mandatory collection laws set recycling targets for cities, usually in the form that a certain percentage of a material must be diverted from the city's waste stream by a target date. The city is responsible for working to meet this target.
Container deposit legislation mandates refunds for the return of certain containers—typically glass, plastic and metal. When a product in such a container is purchased, a small surcharge is added that the consumer can reclaim when the container is returned to a collection point. These programs have succeeded in creating an average 80% recycling rate. Despite such good results, the shift in collection costs from local government to industry and consumers has created strong opposition in some areas—for example, where manufacturers bear the responsibility for recycling their products. In the European Union, the WEEE Directive requires producers of consumer electronics to reimburse the recyclers' costs.
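How a deposit-refund scheme moves money can be sketched with a toy model; the 80% return rate echoes the average recycling rate cited above, while the deposit amount and container count are hypothetical.

```python
# Toy model of container-deposit legislation: a small surcharge (deposit) is
# added at purchase and refunded when the container is returned. The 80%
# return rate mirrors the average recycling rate cited in the text; the
# deposit amount and sales volume are hypothetical.

def deposit_flow(containers_sold, deposit, return_rate):
    """Return (deposits collected, refunds paid, unredeemed deposits)."""
    collected = containers_sold * deposit
    refunded = collected * return_rate
    return collected, refunded, collected - refunded

collected, refunded, unredeemed = deposit_flow(1_000_000, 0.10, 0.80)
print(collected)   # 100000.0 -> total deposits collected at purchase
print(refunded)    # 80000.0  -> returned to consumers at collection points
print(unredeemed)  # 20000.0  -> retained by the scheme
```

The unredeemed balance illustrates why collection costs shift from local government to industry and consumers under such legislation.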
An alternative way to increase the supply of recyclates is to ban the disposal of certain materials as waste, often including used oil, old batteries, tires, and garden waste. This can create a viable economy for the proper disposal of the products. Care must be taken that enough recycling services exist to meet the supply, or such bans can create increased illegal dumping.
Government-mandated demand
Four forms of legislation have also been used to increase and maintain the demand for recycled materials: minimum recycled content mandates, utilization rates, procurement policies, and recycled product labeling.
Both minimum recycled content mandates and utilization rates increase demand by forcing manufacturers to include recycling in their operations. Content mandates specify that a certain percentage of a new product must consist of recycled material. Utilization rates are a more flexible option: industries can meet their recycling targets at any point of their operations, or even contract out recycling in exchange for tradable credits. Opponents of these methods cite the large increase in reporting requirements they impose, and claim that they rob industry of flexibility.
Governments have used their own purchasing power to increase recycling demand through "procurement policies". These policies are either "set-asides", which reserve a certain amount of spending for recycled products; or "price preference" programs that provide larger budgets when recycled items are purchased. Additional regulations can target specific cases: in the United States, for example, the Environmental Protection Agency mandates the purchase of oil, paper, tires and building insulation from recycled or re-refined sources whenever possible.
The final government regulation toward increased demand is recycled product labeling. When producers are required to label their packaging with the amount of recycled material it contains (including the packaging), consumers can make more educated choices. Consumers with sufficient buying power can choose more environmentally conscious options, prompting producers to increase the recycled material in their products and increase demand. Standardized recycling labeling can also have a positive effect on the supply of recyclates when it specifies how and where the product can be recycled.
Recyclates
"Recyclate" is a raw material sent to and processed in a waste recycling plant or materials-recovery facility so it can be used in the production of new materials and products. For example, plastic bottles can be made into plastic pellets and synthetic fabrics.
Quality of recyclate
The quality of recyclates is one of the principal challenges for the long-term success of a green economy and for achieving zero waste. Recyclate quality generally refers to how much of the stream is composed of target material, versus non-target material and other non-recyclable material. Steel and other metals have intrinsically high recyclate quality; it is estimated that two-thirds of all new steel comes from recycled steel. Only target material is likely to be recycled, so a higher proportion of non-target and non-recyclable material reduces the quantity of recycled product. A high proportion of non-target and non-recyclable material can also make it more difficult to achieve "high-quality" recycling; if recyclate is of poor quality, it is more likely to be down-cycled or, in more extreme cases, sent to other recovery options or landfilled. For example, to facilitate the remanufacturing of clear glass products, tight restrictions apply to colored glass entering the re-melt process. Another example is the downcycling of plastic, where products such as plastic food packaging are often downcycled into lower-quality products rather than recycled into the same plastic food packaging.
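The idea of recyclate quality as the share of target material in a collected stream can be sketched as follows; the stream composition and the quality threshold here are hypothetical illustrations, not industry standards.

```python
# Sketch of recyclate quality as the proportion of target material in a
# collected stream. The composition figures and the 0.9 "high-quality"
# threshold are hypothetical, chosen only to illustrate the concept.

def recyclate_quality(target_kg, non_target_kg, non_recyclable_kg):
    """Fraction of the stream that is the material actually being recycled."""
    total = target_kg + non_target_kg + non_recyclable_kg
    return target_kg / total

q = recyclate_quality(target_kg=850, non_target_kg=100, non_recyclable_kg=50)
print(round(q, 2))  # 0.85
print(q >= 0.9)     # False -> below the illustrative high-quality threshold,
                    # so the batch is more likely to be down-cycled
```

Raising the proportion of non-target or non-recyclable input lowers this fraction, which is why contamination at the collection stage propagates through the whole supply chain.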
The quality of recyclate not only supports high-quality recycling; it can also deliver significant environmental benefits by reducing, reusing, and keeping products out of landfills. High-quality recycling can support economic growth by maximizing the value of waste material. Higher income from the sale of quality recyclates can return significant value to local governments, households, and businesses. Pursuing high-quality recycling can also promote consumer and business confidence in the waste and resource management sector, and may encourage investment in it.
There are many actions along the recycling supply chain, each of which can affect recyclate quality. Waste producers who place non-target and non-recyclable wastes in recycling collections affect the quality of final recyclate streams and require extra effort to discard those materials at later stages in the recycling process. Different collection systems induce different levels of contamination: when multiple materials are collected together, extra effort is required to sort them into separate streams, which can significantly reduce the quality of the final products. Transportation and the compaction of materials can also make separation more difficult. Despite improvements in technology and recyclate quality, sorting facilities are still not 100% effective in separating materials. Storing materials outside, where they can become wet, can also cause problems for re-processors. Further sorting steps may be required to satisfactorily reduce the amount of non-target and non-recyclable material.
Recycling consumer waste
Collection
A number of systems have been implemented to collect recyclates from the general waste stream, occupying different places on the spectrum of trade-off between public convenience and government ease and expense. The three main categories of collection are drop-off centers, buy-back centers and curbside collection. About two-thirds of the cost of recycling is incurred in the collection phase.
Curbside collection
Curbside collection encompasses many subtly different systems, which differ mostly on where in the process the recyclates are sorted and cleaned. The main categories are mixed waste collection, commingled recyclables, and source separation. A waste collection vehicle generally picks up the waste.
In mixed waste collection, recyclates are collected mixed with the rest of the waste, and the desired materials are sorted out and cleaned at a central sorting facility. This results in a large amount of recyclable waste (especially paper) being too soiled to reprocess, but has advantages as well: The city need not pay for the separate collection of recyclates, no public education is needed, and any changes to the recyclability of certain materials are implemented where sorting occurs.
In a commingled or single-stream system, recyclables are mixed but kept separate from non-recyclable waste. This greatly reduces the need for post-collection cleaning, but requires public education on what materials are recyclable.
Source separation
Source separation is the other extreme, where each material is cleaned and sorted prior to collection. It requires the least post-collection sorting and produces the purest recyclates. However, it incurs additional operating costs for collecting each material, and requires extensive public education to avoid recyclate contamination. In a survey of multi-family property managers by the Oregon Department of Environmental Quality (USA), about half reported problems, including contamination of recyclables by trespassers, such as transients, gaining access to collection areas.
Source separation used to be the preferred method due to the high cost of sorting commingled collections. However, advances in sorting technology have substantially lowered this overhead, and many areas that had developed source-separation programs have since switched to commingled collection.
Buy-back centers
At buy-back centers, separated, cleaned recyclates are purchased, providing a clear incentive for use and creating a stable supply. The post-processed material can then be sold. If profitable, this avoids greenhouse gas emissions; if unprofitable, it increases them. Buy-back centers generally need government subsidies to be viable. According to a 1993 report by the U.S. National Waste & Recycling Association, it costs an average of $50 to process a ton of material that can be resold for $30.
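The 1993 figures quoted above can be expressed as a simple per-ton margin, which shows why buy-back centers typically need subsidies to stay viable (a minimal sketch; the subsidy variable is an illustrative assumption, not from the source):

```python
# Per-ton economics of a buy-back center, using the 1993 NWRA figures
# quoted above: processing costs exceed resale revenue.
processing_cost = 50.0   # $/ton to process (1993 figure from the text)
resale_price = 30.0      # $/ton resale value (1993 figure from the text)

net_per_ton = resale_price - processing_cost
print(net_per_ton)       # -20.0: a $20/ton shortfall without subsidy

# Hypothetical subsidy needed to break even on each ton processed
subsidy_needed = max(0.0, -net_per_ton)
print(subsidy_needed)    # 20.0
```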
Drop-off centers
Drop-off centers require the waste producer to carry recyclates to a central location—either an installed or mobile collection station or the reprocessing plant itself. They are the easiest type of collection to establish but suffer from low and unpredictable throughput.
Distributed recycling
For some waste materials such as plastic, recent technical devices called recyclebots enable a form of distributed recycling called DRAM (distributed recycling additive manufacturing). Preliminary life-cycle analysis (LCA) indicates that such distributed recycling of HDPE to make filament for 3D printers in rural regions consumes less energy than using virgin resin, or using conventional recycling processes with their associated transportation.
Another form of distributed recycling mixes waste plastic with sand to make bricks in Africa. Several studies have looked at the properties of recycled waste plastic and sand bricks. The composite pavers can be sold at 100% profit while employing workers at 1.5× the minimum wage in the West African region, where distributed recycling has the potential to produce 19 million pavement tiles from 28,000 tons of plastic water sachets annually in Ghana, Nigeria, and Liberia. This has also been done with COVID-19 masks.
Sorting
Once commingled recyclates are collected and delivered to a materials recovery facility, the materials must be sorted. This is done in a series of stages, many of which involve automated processes, enabling a truckload of material to be fully sorted in less than an hour. Some plants can now sort materials automatically; this is known as single-stream recycling. Automatic sorting may be aided by robotics and machine learning. In these plants, a variety of materials are sorted, including paper, different types of plastics, glass, metals, food scraps, and most types of batteries. A 30% increase in recycling rates has been seen in areas with these plants. In the US, there are over 300 materials recovery facilities.
Initially, commingled recyclates are removed from the collection vehicle and placed on a conveyor belt spread out in a single layer. Large pieces of corrugated fiberboard and plastic bags are removed by hand at this stage, as they can cause later machinery to jam.
Next, automated machinery such as disk screens and air classifiers separate the recyclates by weight, splitting lighter paper and plastic from heavier glass and metal. Cardboard is removed from mixed paper, and the most common types of plastic—PET (#1) and HDPE (#2)—are collected, so these materials can be diverted into the proper collection channels. This is usually done by hand, but in some sorting centers, spectroscopic scanners are used to differentiate between types of paper and plastic based on their absorbed wavelengths. Plastics tend to be incompatible with each other due to differences in chemical composition; their polymer molecules repel each other, similar to oil and water.
Strong magnets are used to separate out ferrous metals such as iron, steel and tin cans. Non-ferrous metals are ejected by eddy currents: a rotating magnetic field induces an electric current around aluminum cans, creating an eddy current inside the cans that is repelled by a large magnetic field, ejecting the cans from the stream.
Finally, glass is sorted according to its color: brown, amber, green, or clear. It may be sorted either by hand, or by a machine that uses colored filters to detect colors. Glass fragments below a certain size cannot be sorted automatically, and are mixed together as "glass fines".
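The staged sorting sequence described above—hand-picking, screening, magnetic separation, eddy-current ejection, then color sorting—can be sketched as a simple routing function. This is a minimal illustration with hypothetical material names and stage labels, not a model of any actual facility:

```python
def sort_stream(materials):
    """Route each item in a mixed recyclate stream to the sorting
    stage that would capture it, in the order described above."""
    stage_for = {
        "cardboard": "hand-pick",     # removed manually from the belt
        "paper": "screen",            # light fraction via disk screens
        "PET": "screen",
        "HDPE": "screen",
        "steel": "magnet",            # ferrous metals
        "tin": "magnet",
        "aluminum": "eddy-current",   # non-ferrous metals
        "glass": "color-sort",        # sorted last, by color
    }
    bins = {}
    for m in materials:
        stage = stage_for.get(m, "residue")  # non-target -> residue
        bins.setdefault(stage, []).append(m)
    return bins

bins = sort_stream(["PET", "steel", "aluminum", "glass", "foam"])
print(bins["magnet"])       # ['steel']
print(bins["residue"])      # ['foam'] -- non-target material
```

The dictionary lookup mirrors how each physical stage pulls one material class off the stream; anything unmatched falls through to residue, just as non-target material does in a real facility.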
In 2003, San Francisco's Department of the Environment set a citywide goal of zero waste by 2020. San Francisco's refuse hauler, Recology, operates an effective recyclables sorting facility that has helped the city reach a record-breaking landfill diversion rate of 80% as of 2021. Other American cities, including Los Angeles, have achieved similar rates.
Recycling industrial waste
Although many government programs concentrate on recycling at home, 64% of waste in the United Kingdom is generated by industry. The focus of many recycling programs in industry is their cost-effectiveness. The ubiquitous nature of cardboard packaging makes cardboard a common waste product recycled by companies that deal heavily in packaged goods, such as retail stores, warehouses, and goods distributors. Other industries deal in niche and specialized products, depending on the waste materials they handle.
Glass, lumber, wood pulp and paper manufacturers all deal directly in commonly recycled materials; however, independent tire dealers may collect and recycle rubber tires for a profit.
The waste produced from burning coal in a coal-fired power station is often called fuel ash or fly ash in the United States. It is a very useful material in concrete construction because it exhibits pozzolanic activity.
Levels of metals recycling are generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme (UNEP), published reports on metal stocks and their recycling rates. It reported that the increase in the use of metals during the 20th and into the 21st century has led to a substantial shift in metal stocks from below-ground deposits to above-ground applications within society. For example, in the US, in-use copper grew from 73 to 238 kg per capita between 1932 and 1999.
The report's authors observed that, as metals are inherently recyclable, metal stocks in society can serve as huge above-ground mines (the term "urban mining" has thus been coined). However, they found that the recycling rates of many metals are low. They warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars and fuel cells, are so low that unless future end-of-life recycling rates are dramatically increased, these critical metals will become unavailable for use in modern technology.
The military recycles some metals. The U.S. Navy's Ship Disposal Program uses ship breaking to reclaim the steel of old vessels. Ships may also be sunk to create artificial reefs. Uranium is a dense metal that has qualities superior to lead and titanium for many military and industrial uses. Uranium left over from processing it into nuclear weapons and fuel for nuclear reactors is called depleted uranium, and is used by all branches of the U.S. military for the development of such things as armor-piercing shells and shielding.
The construction industry may recycle concrete and old road surface pavement, selling these materials for profit.
Some rapidly growing industries, particularly the renewable energy and solar photovoltaic technology industries, are proactively creating recycling policies even before their waste streams have considerable volume, anticipating future demand.
Recycling of plastics is more difficult, as most programs are not able to reach the necessary level of quality. Recycling of PVC often results in downcycling of the material, which means only products of lower quality standard can be made with the recycled material.
E-waste is a growing problem, accounting for 20–50 million metric tons of global waste per year according to the EPA. It is also the fastest growing waste stream in the EU. Many recyclers do not recycle e-waste responsibly. After the cargo barge Khian Sea dumped 14,000 metric tons of toxic ash in Haiti, the Basel Convention was formed to stem the flow of hazardous substances into poorer countries. They created the e-Stewards certification to ensure that recyclers are held to the highest standards for environmental responsibility and to help consumers identify responsible recyclers. It operates alongside other prominent legislation, such as the Waste Electrical and Electronic Equipment Directive of the EU and the United States National Computer Recycling Act, to prevent poisonous chemicals from entering waterways and the atmosphere.
In the recycling process, television sets, monitors, cell phones, and computers are typically tested for reuse and repaired. If broken, they may be disassembled for parts still having high value if labor is cheap enough. Other e-waste is shredded into small pieces and manually checked to separate out toxic batteries and capacitors, which contain poisonous metals. The remaining pieces are further shredded into fine particles and passed under a magnet to remove ferrous metals. An eddy current ejects non-ferrous metals, which are sorted by density either by a centrifuge or vibrating plates. Precious metals can be dissolved in acid, sorted, and smelted into ingots. The remaining glass and plastic fractions are separated by density and sold to re-processors. Television sets and monitors must be manually disassembled to remove lead from CRTs and the mercury backlight from LCDs.
Vehicles, solar panels and wind turbines can also be recycled. They often contain rare-earth elements (REEs) and/or other critical raw materials. Electric car production typically requires large amounts of REEs.
Whereas many critical raw elements and REEs can be recovered, environmental engineer Philippe Bihouix reports that recycling of indium, gallium, germanium, selenium, and tantalum remains very difficult, and their recycling rates are very low.
Plastic recycling
Plastic recycling is the process of recovering scrap or waste plastic and reprocessing the material into useful products, sometimes completely different in form from their original state. For instance, this could mean melting down soft drink bottles and then casting them as plastic chairs and tables. For some types of plastic, the same piece of plastic can only be recycled about 2–3 times before its quality decreases to the point where it can no longer be used.
Physical recycling
Some plastics are remelted to form new plastic objects; for example, PET water bottles can be converted into polyester destined for clothing. A disadvantage of this type of recycling is that the molecular weight of the polymer can change further and the levels of unwanted substances in the plastic can increase with each remelt.
A commercial-built recycling facility was sent to the International Space Station in late 2019. The facility takes in plastic waste and unneeded plastic parts and physically converts them into spools of feedstock for the space station additive manufacturing facility used for in-space 3D printing.
Chemical recycling
For some polymers, it is possible to convert them back into monomers, for example, PET can be treated with an alcohol and a catalyst to form a dialkyl terephthalate. The terephthalate diester can be used with ethylene glycol to form a new polyester polymer, thus making it possible to use the pure polymer again. In 2019, Eastman Chemical Company announced initiatives of methanolysis and syngas designed to handle a greater variety of used material.
Waste plastic pyrolysis to fuel oil
Another process involves the conversion of assorted polymers into petroleum by a much less precise thermal depolymerization process. Such a process can accept almost any polymer or mix of polymers, including thermoset materials such as vulcanized rubber tires and the biopolymers in feathers and other agricultural waste. Like natural petroleum, the chemicals produced can be used as fuels or as feedstock. A RESEM Technology plant of this type in Carthage, Missouri, US, uses turkey waste as input material. Gasification is a similar process but is not technically recycling, since the products are not polymers.
Plastic pyrolysis can convert petroleum-based waste streams such as plastics into fuels and carbon char. Suitable plastic raw materials for pyrolysis include:
Mixed plastic (HDPE, LDPE, PE, PP, Nylon, Teflon, PS, ABS, FRP, PET etc.)
Mixed waste plastic from waste paper mill
Multi-layered plastic
Recycling codes
In order to meet recyclers' needs while providing manufacturers a consistent, uniform system, a coding system was developed. The recycling code for plastics was introduced in 1988 by the plastics industry through the Society of the Plastics Industry. Because municipal recycling programs traditionally have targeted packaging—primarily bottles and containers—the resin coding system offered a means of identifying the resin content of bottles and containers commonly found in the residential waste stream.
In the United States, plastic products are printed with numbers 1–7 depending on the type of resin. Type 1 (polyethylene terephthalate) is commonly found in soft drink and water bottles. Type 2 (high-density polyethylene) is found in most hard plastics such as milk jugs, laundry detergent bottles, and some dishware. Type 3 (polyvinyl chloride) includes items such as shampoo bottles, shower curtains, hula hoops, credit cards, wire jacketing, medical equipment, siding, and piping. Type 4 (low-density polyethylene) is found in shopping bags, squeezable bottles, tote bags, clothing, furniture, and carpet. Type 5 is polypropylene and makes up syrup bottles, straws, Tupperware, and some automotive parts. Type 6 is polystyrene and makes up meat trays, egg cartons, clamshell containers, and compact disc cases. Type 7 includes all other plastics such as bulletproof materials, 3- and 5-gallon water bottles, cell phone and tablet frames, safety goggles and sunglasses. Having a recycling code or the chasing arrows logo on a material is not an automatic indicator that a material is recyclable but rather an explanation of what the material is. Types 1 and 2 are the most commonly recycled.
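The resin identification codes described above can be represented as a simple lookup table. This is an illustrative sketch; as the text notes, the code identifies the resin but does not guarantee that a material is recyclable locally:

```python
# Resin identification codes 1-7, as described in the text.
RESIN_CODES = {
    1: ("PET", "polyethylene terephthalate"),
    2: ("HDPE", "high-density polyethylene"),
    3: ("PVC", "polyvinyl chloride"),
    4: ("LDPE", "low-density polyethylene"),
    5: ("PP", "polypropylene"),
    6: ("PS", "polystyrene"),
    7: ("OTHER", "all other plastics"),
}

def describe(code):
    """Return a label for a resin code and whether it is among the
    most commonly recycled types (1 and 2, per the text)."""
    abbrev, name = RESIN_CODES[code]
    commonly_recycled = code in (1, 2)
    return f"#{code} {abbrev} ({name})", commonly_recycled

label, common = describe(1)
print(label)    # #1 PET (polyethylene terephthalate)
print(common)   # True
```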
Cost–benefit analysis
In addition to environmental impact, there is debate over whether recycling is economically efficient. According to a Natural Resources Defense Council study, waste collection and landfill disposal creates less than one job per 1,000 tons of waste material managed; in contrast, the collection, processing, and manufacturing of recycled materials creates 6–13 or more jobs per 1,000 tons. According to the U.S. Recycling Economic Informational Study, there are over 50,000 recycling establishments that have created over a million jobs in the US. The National Waste & Recycling Association (NWRA) reported in May 2015 that recycling and waste made a $6.7 billion economic impact in Ohio, U.S., and employed 14,000 people. Economists would classify this extra labor as a cost rather than a benefit, since these workers could have been employed elsewhere; the cost-effectiveness of creating these additional jobs remains unclear.
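The NRDC job-intensity figures quoted above can be compared directly per 1,000 tons of material managed (a small worked comparison; the figures are from the text, and the ratio calculation is purely illustrative):

```python
# Jobs per 1,000 tons of material managed (NRDC figures from the text).
landfill_jobs = 1          # "less than one job": 1 is the upper bound
recycling_jobs = (6, 13)   # "6-13 or more jobs"

low_ratio = recycling_jobs[0] / landfill_jobs
high_ratio = recycling_jobs[1] / landfill_jobs
print(low_ratio, high_ratio)   # 6.0 13.0

# Recycling supports at least ~6x and up to ~13x the jobs of landfill
# disposal per ton managed -- though, as the text notes, economists
# count this extra labor as a cost, not a benefit.
```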
Sometimes cities have found that recycling saves resources compared to other methods of waste disposal. Two years after New York City declared that implementing recycling programs would be "a drain on the city", its leaders realized that an efficient recycling system could save the city over $20 million. Municipalities often see fiscal benefits from implementing recycling programs, largely due to reduced landfill costs. According to The Economist, a study conducted by the Technical University of Denmark found that in 83 percent of cases, recycling is the most efficient method to dispose of household waste. However, a 2004 assessment by the Danish Environmental Assessment Institute concluded that incineration was the most effective method for disposing of drink containers, even aluminium ones.
Fiscal efficiency is separate from economic efficiency. Economic analysis of recycling does not include what economists call externalities: unpriced costs and benefits that accrue to individuals outside of private transactions. Examples include less air pollution and greenhouse gases from incineration and less waste leaching from landfills. Without mechanisms such as taxes or subsidies, businesses and consumers following their private benefit would ignore externalities despite the costs imposed on society. If landfill and incinerator pollution is inadequately regulated, these methods of waste disposal appear cheaper than they really are, because part of their cost is the pollution imposed on people nearby. Thus, advocates have pushed for legislation to increase demand for recycled materials. The United States Environmental Protection Agency (EPA) has concluded in favor of recycling, saying that recycling efforts reduced the country's carbon emissions by a net 49 million metric tonnes in 2005. In the United Kingdom, the Waste and Resources Action Programme stated that Great Britain's recycling efforts reduce CO2 emissions by 10–15 million tonnes a year. The question for economic efficiency is whether this reduction is worth the extra cost of recycling, and thus whether the artificial demand created by legislation is worthwhile.
Certain requirements must be met for recycling to be economically feasible and environmentally effective. These include an adequate source of recyclates, a system to extract those recyclates from the waste stream, a nearby factory capable of reprocessing the recyclates, and a potential demand for the recycled products. These last two requirements are often overlooked—without both an industrial market for production using the collected materials and a consumer market for the manufactured goods, recycling is incomplete and in fact only "collection".
Free-market economist Julian Simon remarked "There are three ways society can organize waste disposal: (a) commanding, (b) guiding by tax and subsidy, and (c) leaving it to the individual and the market". These principles appear to divide economic thinkers today.
Frank Ackerman favours a high level of government intervention to provide recycling services. He believes that recycling's benefit cannot be effectively quantified by traditional laissez-faire economics. Allen Hershkowitz supports intervention, saying that it is a public service equal to education and policing. He argues that manufacturers should shoulder more of the burden of waste disposal.
Paul Calcott and Margaret Walls advocate the second option. A deposit refund scheme and a small refuse charge would encourage recycling but not at the expense of illegal dumping. Thomas C. Kinnaman concludes that a landfill tax would force consumers, companies and councils to recycle more.
Most free-market thinkers detest subsidy and intervention, arguing that they waste resources. The general argument is that if cities charge the full cost of garbage collection, private companies can profitably recycle any materials for which the benefit of recycling exceeds the cost (e.g. aluminum) and will not recycle other materials for which the benefit is less than the cost (e.g. glass). Cities, on the other hand, often recycle even when they not only fail to receive enough for the paper or plastic to pay for its collection, but must actually pay private recycling companies to take it off their hands. Terry Anderson and Donald Leal think that all recycling programmes should be privately operated, and therefore would only operate if the money saved by recycling exceeds its costs. Daniel K. Benjamin argues that it wastes people's resources and lowers the wealth of a population. He notes that recycling can cost a city more than twice as much as landfills, that in the United States landfills are so heavily regulated that their pollution effects are negligible, and that the recycling process also generates pollution and uses energy, which may or may not be less than from virgin production.
Trade in recyclates
Certain countries trade in unprocessed recyclates. Some have complained that the ultimate fate of recyclates sold to another country is unknown and they may end up in landfills instead of being reprocessed. According to one report, in America, 50–80 percent of computers destined for recycling are actually not recycled. There are reports of illegal-waste imports to China being dismantled and recycled solely for monetary gain, without consideration for workers' health or environmental damage. Although the Chinese government has banned these practices, it has not been able to eradicate them. In 2008, the prices of recyclable waste plummeted before rebounding in 2009. Cardboard averaged about £53/tonne from 2004 to 2008, dropped to £19/tonne, and then went up to £59/tonne in May 2009. PET plastic averaged about £156/tonne, dropped to £75/tonne and then moved up to £195/tonne in May 2009.
Certain regions have difficulty using or exporting as much of a material as they recycle. This problem is most prevalent with glass: both Britain and the U.S. import large quantities of wine bottled in green glass. Though much of this glass is sent to be recycled, outside the American Midwest there is not enough wine production to use all of the reprocessed material. The extra must be downcycled into building materials or re-inserted into the regular waste stream.
Similarly, the northwestern United States has difficulty finding markets for recycled newspaper, given the large number of pulp mills in the region as well as the proximity to Asian markets. In other areas of the U.S., however, demand for used newsprint has seen wide fluctuation.
In some U.S. states, a program called RecycleBank pays people to recycle, receiving money from local municipalities for the reduction in landfill space that must be purchased. It uses a single stream process in which all material is automatically sorted.
Criticisms and responses
Critics dispute the net economic and environmental benefits of recycling over its costs, and suggest that proponents of recycling often make matters worse and suffer from confirmation bias. Specifically, critics argue that the costs and energy used in collection and transportation detract from (and outweigh) the costs and energy saved in the production process; also that the jobs produced by the recycling industry can be a poor trade for the jobs lost in logging, mining, and other industries associated with production; and that materials such as paper pulp can only be recycled a few times before material degradation prevents further recycling.
Journalist John Tierney notes that it is generally more expensive for municipalities to recycle waste from households than to send it to a landfill and that "recycling may be the most wasteful activity in modern America."
Much of the difficulty inherent in recycling comes from the fact that most products are not designed with recycling in mind. The concept of sustainable design aims to solve this problem, and was laid out in the 2002 book Cradle to Cradle: Remaking the Way We Make Things by architect William McDonough and chemist Michael Braungart. They suggest that every product (and all packaging it requires) should have a complete "closed-loop" cycle mapped out for each component—a way in which every component either returns to the natural ecosystem through biodegradation or is recycled indefinitely.
While recycling diverts waste from entering directly into landfill sites, current recycling misses the dispersive components. Critics believe that complete recycling is impracticable as highly dispersed wastes become so diluted that the energy needed for their recovery becomes increasingly excessive.
As with environmental economics, care must be taken to ensure a complete view of the costs and benefits involved. For example, paperboard packaging for food products is more easily recycled than most plastic, but is heavier to ship and may result in more waste from spoilage.
Energy and material flows
The amount of energy saved through recycling depends upon the material being recycled and the type of energy accounting that is used. Correct accounting for this saved energy can be accomplished with life-cycle analysis using real energy values, and in addition, exergy, which is a measure of how much useful energy can be used. In general, it takes far less energy to produce a unit mass of recycled materials than it does to make the same mass of virgin materials.
Some scholars use emergy (spelled with an m) analysis, which budgets the amount of energy of one kind (exergy) that is required to make or transform things into another kind of product or service. Emergy calculations take into account economic factors that can alter pure physics-based results. Using emergy life-cycle analysis, researchers have concluded that materials with large refining costs have the greatest potential for high recycle benefits. Moreover, the highest emergy efficiency accrues from systems geared toward material recycling, where materials are engineered to recycle back into their original form and purpose, followed by adaptive reuse systems, where materials are recycled into a different kind of product, and then by-product reuse systems, where parts of the products are used to make an entirely different product.
The Energy Information Administration (EIA) states on its website that "a paper mill uses 40 percent less energy to make paper from recycled paper than it does to make paper from fresh lumber." Some critics argue that it takes more energy to produce recycled products than it does to dispose of them in traditional landfill methods, since the curbside collection of recyclables often requires a second waste truck. However, recycling proponents point out that a second timber or logging truck is eliminated when paper is collected for recycling, so the net energy consumption is the same. An emergy life-cycle analysis on recycling revealed that fly ash, aluminum, recycled concrete aggregate, recycled plastic, and steel yield higher efficiency ratios, whereas the recycling of lumber generates the lowest recycle benefit ratio. Hence, the specific nature of the recycling process, the methods used to analyse the process, and the products involved affect the energy savings budgets.
It is difficult to determine the amount of energy consumed or produced in waste disposal processes in broader ecological terms, where causal relations dissipate into complex networks of material and energy flow.
How much energy is used in recycling also depends on the type of material being recycled and the process used to do so. Aluminium is generally agreed to use far less energy when recycled rather than being produced from scratch. The EPA states that "recycling aluminum cans, for example, saves 95 percent of the energy required to make the same amount of aluminum from its virgin source, bauxite." In 2009, more than half of all aluminium cans produced came from recycled aluminium. Similarly, it has been estimated that new steel produced with recycled cans reduces greenhouse gas emissions by 75%.
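The EPA's 95-percent figure for aluminum quoted above translates into a simple per-tonne savings calculation. The primary-production energy value used below is an illustrative assumption for the sketch, not a figure from the source:

```python
# Worked example of the EPA figure quoted above: recycling aluminum
# saves ~95% of the energy of primary production from bauxite.
PRIMARY_MJ_PER_KG = 170.0   # assumed energy for virgin aluminum (illustrative)
SAVINGS_FRACTION = 0.95     # EPA figure from the text

recycled_mj_per_kg = round(PRIMARY_MJ_PER_KG * (1 - SAVINGS_FRACTION), 2)
saved_per_tonne = round((PRIMARY_MJ_PER_KG - recycled_mj_per_kg) * 1000)

print(recycled_mj_per_kg)   # 8.5  (MJ per kg of recycled aluminum)
print(saved_per_tonne)      # 161500  (MJ saved per tonne recycled)
```

Under these assumed inputs, each tonne of recycled aluminum avoids on the order of 160 GJ of primary energy, which is why aluminum recovery is among the clearest recycling wins.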
Economist Steven Landsburg has suggested that the sole benefit of reducing landfill space is trumped by the energy needed and resulting pollution from the recycling process. Others, however, have calculated through life-cycle assessment that producing recycled paper uses less energy and water than harvesting, pulping, processing, and transporting virgin trees. When less recycled paper is used, additional energy is needed to create and maintain farmed forests until these forests are as self-sustainable as virgin forests.
Other studies have shown that recycling in itself is insufficient to achieve the "decoupling" of economic development from the depletion of non-renewable raw materials that is necessary for sustainable development. International transportation of recyclate means that material flows through "... different trade networks of the three countries result in different flows, decay rates, and potential recycling returns". As global consumption of natural resources grows, their depletion is inevitable. The best recycling can do is delay it; complete closure of material loops to achieve 100 percent recycling of nonrenewables is impossible, as micro-trace materials dissipate into the environment, causing severe damage to the planet's ecosystems. Historically, this was identified as the metabolic rift by Karl Marx, who noted the unequal exchange of energy and nutrients flowing from rural areas to feed urban cities, creating effluent wastes that degrade the planet's ecological capital, such as loss of soil nutrient production. Energy conservation also leads to what is known as the Jevons paradox, where improvements in energy efficiency lower the cost of production, leading to a rebound effect in which rates of consumption and economic growth increase.
Costs
The amount of money actually saved through recycling depends on the efficiency of the recycling program used to do it. The Institute for Local Self-Reliance argues that the cost of recycling depends on various factors, such as landfill fees and the amount of disposal that the community recycles. It states that communities begin to save money when they treat recycling as a replacement for their traditional waste system rather than an add-on to it and by "redesigning their collection schedules and/or trucks".
In some cases, the cost of recyclable materials also exceeds the cost of raw materials. Virgin plastic resin costs 40 percent less than recycled resin. Additionally, a United States Environmental Protection Agency (EPA) study that tracked the price of clear glass from 15 July to 2 August 1991, found that the average cost per ton ranged from $40 to $60 while a USGS report shows that the cost per ton of raw silica sand from years 1993 to 1997 fell between $17.33 and $18.10.
Comparing the market cost of recyclable material with the cost of new raw materials ignores economic externalities—the costs that are currently not counted by the market. Creating a new piece of plastic, for instance, may cause more pollution and be less sustainable than recycling a similar piece of plastic, but these factors are not counted in market cost. A life cycle assessment can be used to determine the levels of externalities and decide whether the recycling may be worthwhile despite unfavorable market costs. Alternatively, legal means (such as a carbon tax) can be used to bring externalities into the market, so that the market cost of the material becomes close to the true cost.
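The externality argument above is, at bottom, a simple piece of arithmetic: add the unpriced costs to each material's market price before comparing. The sketch below illustrates this with purely hypothetical figures (the prices and externality costs are invented for illustration, not real market data; only the "recycled resin costs more at market price" relationship echoes the text):

```python
# Toy comparison of market cost vs. externality-adjusted ("true") cost
# for virgin and recycled plastic resin. All figures are hypothetical.

def true_cost(market_cost, externality_cost):
    """Add unpriced externalities (e.g. pollution) to the market cost."""
    return market_cost + externality_cost

virgin_market = 100.0        # hypothetical price per tonne
recycled_market = 140.0      # recycled resin dearer at market price
virgin_externality = 60.0    # hypothetical pollution cost of virgin resin
recycled_externality = 10.0  # hypothetical pollution cost of recycling

# At market prices alone, virgin resin looks cheaper...
assert virgin_market < recycled_market

# ...but once externalities are counted, recycling comes out ahead here.
print(true_cost(virgin_market, virgin_externality))      # 160.0
print(true_cost(recycled_market, recycled_externality))  # 150.0
```

A carbon tax set near the externality cost would shift the market prices themselves toward these "true" values, which is the legal mechanism the paragraph describes.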
Working conditions
The recycling of waste electrical and electronic equipment can create a significant amount of pollution. This problem is particularly prevalent in India and China. Informal recycling in the underground economies of these countries has generated an environmental and health disaster. High levels of lead (Pb), polybrominated diphenyl ethers (PBDEs), polychlorinated dioxins and furans, as well as polybrominated dioxins and furans (PCDD/Fs and PBDD/Fs), have been found concentrated in the air, bottom ash, dust, soil, water, and sediments in areas surrounding recycling sites. These pollutants can make work sites harmful to the workers themselves and to the surrounding environment.
Possible income loss and social costs
In some countries, recycling is performed by the entrepreneurial poor such as the karung guni, zabbaleen, the rag-and-bone man, waste picker, and junk man. With the creation of large recycling organizations that may be profitable, whether by law or by economies of scale, the poor are more likely to be driven out of the recycling and remanufacturing job market. To compensate for this loss of income, a society may need to create additional social programs to help support the poor. Like the parable of the broken window, making recycling artificially profitable, e.g. through legislation, imposes a net loss on the poor and possibly on society as a whole. However, in Brazil and Argentina, waste pickers/informal recyclers work alongside the authorities in fully or semi-funded cooperatives, allowing informal recycling to be legitimized as a paid public-sector job.
Because a country's social support is likely to be less than the income lost by the poor who undertake recycling, the poor are more likely to come into conflict with large recycling organizations. It also means fewer people can decide whether certain waste is more economically reusable in its current form than after reprocessing. In contrast to large organizations, the recycling efficiency of the poor may actually be higher for some materials, because individuals have greater control over what is considered "waste".
One labor-intensive, underused waste stream is electronic and computer waste. This waste may still be functional and is wanted mostly by those on lower incomes, who may sell or use it more efficiently than large recyclers.
Some recycling advocates believe that laissez-faire, individual-based recycling does not cover all of society's recycling needs and thus does not remove the need for an organized recycling program. Local governments may regard the activities of the recycling poor as damaging to property.
Public participation rates
Changes that have been demonstrated to increase recycling rates include:
Single-stream recycling
Pay as you throw fees for trash
In a study done by social psychologist Shawn Burn, it was found that personal contact with individuals within a neighborhood is the most effective way to increase recycling within a community. In her study, she had 10 block leaders talk to their neighbors and persuade them to recycle. A comparison group was sent fliers promoting recycling. It was found that the neighbors that were personally contacted by their block leaders recycled much more than the group without personal contact. As a result of this study, Shawn Burn believes that personal contact within a small group of people is an important factor in encouraging recycling. Another study done by Stuart Oskamp examines the effect of neighbors and friends on recycling. It was found in his studies that people who had friends and neighbors that recycled were much more likely to also recycle than those who did not have friends and neighbors that recycled.
Many schools have created recycling awareness clubs in order to give young students an insight on recycling. These schools believe that the clubs actually encourage students to not only recycle at school but at home as well.
Recycling rates of metals vary widely by type. Titanium and lead have extremely high recycling rates of over 90%. Copper and cobalt have high recycling rates of around 75%. Only about half of aluminum is recycled. Most of the remaining metals have recycling rates below 35%, while 34 types of metals have recycling rates under 1%.
"Between 1960 and 2000, the world production of plastic resins increased 25 times its original amount, while recovery of the material remained below 5 percent." Many studies have addressed recycling behaviour and strategies to encourage community involvement in recycling programs. It has been argued that recycling behavior is not natural because it requires a focus and appreciation for long-term planning, whereas humans have evolved to be sensitive to short-term survival goals; and that to overcome this innate predisposition, the best solution would be to use social pressure to compel participation in recycling programs. However, recent studies have concluded that social pressure does not work in this context. One reason for this is that social pressure functions well in small group sizes of 50 to 150 individuals (common to nomadic hunter–gatherer peoples) but not in communities numbering in the millions, as we see today. Another reason is that individual recycling does not take place in the public view.
Following reports that collected recyclables were increasingly being sent to the same landfills as trash, some people nevertheless kept putting recyclables in the recycling bin.
Recycling in art
Art objects are increasingly made from recycled material.
Embracing a circular economy through advanced sorting technologies
By extending the lifespan of goods, parts, and materials, a circular economy seeks to minimize waste and maximize resource utilization. Advanced sorting techniques like optical and robotic sorting may separate and recover valuable materials from waste streams, lowering the requirement for virgin resources and accelerating the shift to a circular economy.
Community engagement, such as education and awareness campaigns, may support the acceptance of recycling and reuse programs and encourage the adoption of sustainable practices. Communities can lessen their environmental impact, conserve natural resources, and generate economic opportunities by adopting a circular economy through cutting-edge sorting technology and community engagement. According to Melati et al., to successfully transition to a circular economy, legislative and regulatory frameworks must encourage sustainable practices while addressing possible obstacles and difficulties in putting these ideas into action.
Duodenum

The duodenum is the first section of the small intestine in most higher vertebrates, including mammals, reptiles, and birds. In mammals, it may be the principal site for iron absorption.
The duodenum precedes the jejunum and ileum and is the shortest part of the small intestine.
In human beings, the duodenum is a hollow jointed tube about long connecting the stomach to the middle part of the small intestine. It begins with the duodenal bulb and ends at the suspensory muscle of duodenum. The duodenum can be divided into four parts: the first (superior), the second (descending), the third (transverse) and the fourth (ascending) parts.
Overview
The duodenum is the first section of the small intestine in most higher vertebrates, including mammals, reptiles, and birds. In fish, the divisions of the small intestine are not as clear, and the terms anterior intestine or proximal intestine may be used instead of duodenum. In mammals the duodenum may be the principal site for iron absorption.
In humans, the duodenum is a C-shaped hollow jointed tube, in length, lying adjacent to the stomach (and connecting it to the small intestine). It is divided anatomically into four sections. The first part lies within the peritoneum but its other parts are retroperitoneal.
Parts
The first part, or superior part, of the duodenum is a continuation from the pylorus to the transpyloric plane. It is superior (above) to the rest of the segments, at the vertebral level of L1. The duodenal bulb, about long, is the first part of the duodenum and is slightly dilated. The duodenal bulb is a remnant of the mesoduodenum, a mesentery that suspends the organ from the posterior abdominal wall in fetal life. The first part of the duodenum is mobile, and connected to the liver by the hepatoduodenal ligament of the lesser omentum. The first part of the duodenum ends at the corner, the superior duodenal flexure.
Relations:
Anterior
Gallbladder
Quadrate lobe of liver
Posterior
Bile duct
Gastroduodenal artery
Portal vein
Inferior vena cava
Head of pancreas
Superior
Neck of gallbladder
Hepatoduodenal ligament (lesser omentum)
Inferior
Neck of pancreas
Greater omentum
Head of pancreas
The second part, or descending part, of the duodenum begins at the superior duodenal flexure. It goes inferior to the lower border of vertebral body L3, before making a sharp turn medially into the inferior duodenal flexure, the end of the descending part.
The pancreatic duct and common bile duct enter the descending duodenum, through the major duodenal papilla. The second part of the duodenum also contains the minor duodenal papilla, the entrance for the accessory pancreatic duct. The junction between the embryological foregut and midgut lies just below the major duodenal papilla.
The third part, or horizontal or inferior part, of the duodenum is 10–12 cm in length. It begins at the inferior duodenal flexure and passes transversely to the left, in front of the inferior vena cava, the abdominal aorta, and the vertebral column. The superior mesenteric artery (SMA) and vein are anterior to the third part of the duodenum. This part may be compressed between the aorta and the SMA, causing superior mesenteric artery syndrome.
The fourth part, or ascending part, of the duodenum passes upward, joining with the jejunum at the duodenojejunal flexure. The fourth part of the duodenum is at the vertebral level L3, and may pass directly on top, or slightly to the left, of the aorta.
Blood supply
The duodenum receives arterial blood from two different sources. The transition between these sources is important as it demarcates the foregut from the midgut. Proximal to the 2nd part of the duodenum (approximately at the major duodenal papilla – where the bile duct enters) the arterial supply is from the gastroduodenal artery and its branch the superior pancreaticoduodenal artery. Distal to this point (the midgut) the arterial supply is from the superior mesenteric artery (SMA), and its branch the inferior pancreaticoduodenal artery supplies the 3rd and 4th sections.
The superior and inferior pancreaticoduodenal arteries (from the gastroduodenal artery and SMA respectively) form an anastomotic loop between the celiac trunk and the SMA; so there is potential for collateral circulation here.
The venous drainage of the duodenum follows the arteries. Ultimately these veins drain into the portal system, either directly or indirectly through the splenic or superior mesenteric vein and then to the portal vein.
Lymphatic drainage
The lymphatic vessels follow the arteries in a retrograde fashion. The anterior lymphatic vessels drain into the pancreatoduodenal lymph nodes located along the superior and inferior pancreatoduodenal arteries and then into the pyloric lymph nodes (along the gastroduodenal artery). The posterior lymphatic vessels pass posterior to the head of the pancreas and drain into the superior mesenteric lymph nodes. Efferent lymphatic vessels from the duodenal lymph nodes ultimately pass into the celiac lymph nodes.
Histology
Under microscopy, the duodenum has a villous mucosa. This is distinct from the mucosa of the pylorus, which directly joins the duodenum. Like other structures of the gastrointestinal tract, the duodenum has a mucosa, submucosa, muscularis externa, and adventitia. Glands known as Brunner's glands line the duodenum, secreting mucus and bicarbonate in order to neutralise stomach acids. These glands are not found in the ileum or jejunum, the other parts of the small intestine.
Variation
The duodenum's close anatomical association with the pancreas creates differences in function based on the position and orientation of the organs. The congenital abnormality, annular pancreas, causes a portion of the pancreas to encircle the duodenum. In an extramural annular pancreas, the pancreatic duct encircles the duodenum which results in gastrointestinal obstruction. An intramural annular pancreas is characterized by pancreatic tissue that is fused with the duodenal wall, causing duodenal ulceration.
Gene and protein expression
About 20,000 protein coding genes are expressed in human cells and 70% of these genes are expressed in the normal duodenum. Some 300 of these genes are more specifically expressed in the duodenum with very few genes expressed only in the duodenum. The corresponding specific proteins are expressed in the duodenal mucosa, and many of these are also expressed in the small intestine, such as alanine aminopeptidase, a digestive enzyme, angiotensin-converting enzyme, involved in controlling blood pressure, and RBP2, a protein involved in the uptake of vitamin A.
Function
The duodenum is largely responsible for the breakdown of food in the small intestine, using enzymes. The duodenum also regulates the rate of emptying of the stomach via hormonal pathways. Secretin and cholecystokinin are released from cells in the duodenal epithelium in response to acidic and fatty stimuli present there when the pylorus opens and emits gastric chyme into the duodenum for further digestion. These cause the liver and gallbladder to release bile, and the pancreas to release bicarbonate and digestive enzymes such as trypsin, lipase and amylase into the duodenum as they are needed.
The duodenum is a critical contributor to the regulation of food intake and glycemic control. As the first part of the small intestine, the duodenum is the initial site of nutrient absorption in the gastrointestinal tract. The duodenum senses nutrient intake and composition, and signals to the liver, pancreas, adipose tissue and brain through the direct and indirect release of several key hormones and signaling molecules, including the incretin peptides Glucose-dependent insulinotropic polypeptide (GIP) and Glucagon-like peptide-1 (GLP-1), as well as Cholecystokinin (CCK) and Secretin. The duodenum also signals to the brain directly via vagal afferents enabling neural control over food intake and glycemia. Intestinal secretion of GIP and GLP-1 stimulates glucose-dependent insulin secretion from pancreatic beta-cells, known as the incretin effect. Incretin peptides, principally GLP-1 and GIP, regulate islet hormone secretion, glucose concentrations, lipid metabolism, gut motility, appetite and body weight, and immune function.
The villi of the duodenum have a leafy-looking appearance, which is a histologically identifiable structure. Brunner's glands, which secrete mucus, are only found in the duodenum. The duodenum wall consists of a very thin layer of cells that form the muscularis mucosae.
Clinical significance
Ulceration
Ulcers of the duodenum commonly occur because of infection by the bacterium Helicobacter pylori. These bacteria, through a number of mechanisms, erode the protective mucosa of the duodenum, predisposing it to damage from gastric acids. The first part of the duodenum is the most common location of ulcers, since it is where the acidic chyme meets the duodenal mucosa before mixing with the alkaline secretions of the duodenum. Duodenal ulcers may cause recurrent abdominal pain and dyspepsia, and are often investigated using a urea breath test to detect the bacteria, and endoscopy to confirm ulceration and take a biopsy. They are often managed with antibiotics that aim to eradicate the bacteria, and with proton-pump inhibitors and antacids to reduce gastric acidity.
Celiac disease
The British Society of Gastroenterology guidelines specify that a duodenal biopsy is required for the diagnosis of adult celiac disease. The biopsy is ideally performed at a moment when the patient is on a gluten-containing diet.
Cancer
Duodenal cancer is a cancer in the first section of the small intestine. Cancer of the duodenum is relatively rare compared to stomach cancer and colorectal cancer; malignant tumors in the duodenum constitute only around 0.3% of all the gastrointestinal tract tumors but around half of cancerous tissues that develop in the small intestine. Its histology is often observed to be adenocarcinoma, meaning that the cancerous tissue arises from glandular cells in the epithelial tissue lining the duodenum.
Obesity and Diabetes
A western diet induces duodenal mucosal hyperplasia and dysfunction that underlie insulin resistance, type 2 diabetes and obesity. Diet-induced duodenal mucosal hyperplasia consists of increased mucosal mass, increased villus length, decreased crypt density, proliferation of enteroendocrine cells, increased enterocyte mass, and an accumulation of lipid droplets in the mucosa. Diet induced duodenal dysfunction includes increased duodenal nutrient absorption, altered duodenal hormone secretion, and altered intestinal vagal afferent neuronal function.
Inflammation
Inflammation of the duodenum is referred to as duodenitis. There are multiple known causes. Celiac disease and inflammatory bowel disease are two of the known causes.
Etymology
The name duodenum is Medieval Latin, short for intestīnum duodēnum digitōrum, meaning "intestine of twelve finger-widths (in length)", the genitive plural of duodēnī, "twelve each" (from Latin duodecim, "twelve"). The term was coined by Gerard of Cremona (d. 1187) in his Latin translation of Avicenna's Canon, rendering the Arabic اثنا عشر ("twelve"), itself a loan-translation of Greek dodekadaktylon, literally "twelve fingers long". This part of the intestine was so called by the Greek physician Herophilus (c. 335–280 BCE) for its length, about equal to the breadth of 12 fingers.
Many languages retain a similar etymology for this word; the German, Dutch, and Turkish names, for example, are likewise derived from "twelve".
Additional images
Trachea

The trachea (pl.: tracheae or tracheas), also known as the windpipe, is a cartilaginous tube that connects the larynx to the bronchi of the lungs, allowing the passage of air, and so is present in almost all animals with lungs. The trachea extends from the larynx and branches into the two primary bronchi. At the top of the trachea, the cricoid cartilage attaches it to the larynx. The trachea is formed by a number of horseshoe-shaped rings, joined together vertically by overlying ligaments, and by the trachealis muscle at their ends. The epiglottis closes the opening to the larynx during swallowing.
The trachea begins to form in the second month of embryo development, becoming longer and more fixed in its position over time. Its epithelium is lined with column-shaped cells that have hair-like extensions called cilia, with scattered goblet cells that produce protective mucins. The trachea can be affected by inflammation or infection, usually as a result of a viral illness affecting other parts of the respiratory tract, such as the larynx and bronchi, called croup, that can result in a cough. Infection with bacteria usually affects the trachea only and can cause narrowing or even obstruction. As a major part of the respiratory tract, when obstructed the trachea prevents air entering the lungs and so a tracheostomy may be required if the trachea is obstructed. Additionally, during surgery if mechanical ventilation is required when a person is sedated, a tube is inserted into the trachea, called tracheal intubation.
The word trachea is used to define a very different organ in invertebrates than in vertebrates. Insects have an open respiratory system made up of spiracles, tracheae, and tracheoles to transport metabolic gases to and from tissues.
Structure
An adult's trachea has an inner diameter of about and a length of about , wider in males than in females. The trachea begins at the lower edge of the cricoid cartilage of the larynx at the level of the sixth cervical vertebra (C6) and ends at the carina, the point where the trachea branches into the left and right main bronchi, at the level of the fourth thoracic vertebra (T4), although its position may change with breathing. The trachea is surrounded by 16–20 rings of hyaline cartilage; these 'rings' are 4 millimetres high in the adult, incomplete, and C-shaped. Ligaments connect the rings. The trachealis muscle connects the ends of the incomplete rings and runs along the back wall of the trachea. The adventitia, the outermost layer of connective tissue surrounding the hyaline cartilage, also contributes to the trachea's ability to bend and stretch with movement.
Although the trachea is a midline structure, it can normally be displaced to the right by the aortic arch.
Nearby structures
The trachea passes by many structures of the neck and chest (thorax) along its course.
In front of the upper trachea lie connective tissue and skin. Several other structures pass over or sit on the trachea; the jugular arch, which joins the two anterior jugular veins, sits in front of the upper part of the trachea. The sternohyoid and sternothyroid muscles stretch along its length. The thyroid gland also stretches across the upper trachea, with the isthmus overlying the second to fourth rings, and the lobes stretching to the level of the fifth or sixth cartilage. The blood vessels of the thyroid rest on the trachea next to the isthmus; the superior thyroid arteries join just above it, and the inferior thyroid veins below it. In front of the lower trachea lie the manubrium of the sternum and, in adults, the remnants of the thymus. To the front left lie the large blood vessels the aortic arch and its branches, the left common carotid artery and the brachiocephalic trunk, and the left brachiocephalic vein. The deep cardiac plexus and lymph nodes are also positioned in front of the lower trachea.
Behind the trachea, along its length, sits the oesophagus, followed by connective tissue and the vertebral column. To its sides run the carotid arteries and inferior thyroid arteries; and to its sides on its back surface run the recurrent laryngeal nerves in the upper trachea, and the vagus nerves in the lower trachea.
The trachealis muscle contracts during coughing, reducing the size of the lumen of the trachea.
Blood and lymphatic supply
The upper part of the trachea receives and drains blood through the inferior thyroid arteries and veins; the lower trachea receives blood from the bronchial arteries. Arteries that supply the trachea do so via small branches that approach it from the sides. As the branches approach the wall of the trachea, they split into inferior and superior branches, which join with the branches of the arteries above and below; these then split into branches that supply the anterior and posterior parts of the trachea. The inferior thyroid arteries arise just below the isthmus of the thyroid, which sits atop the trachea. These arteries join (anastomose) with ascending branches of the bronchial arteries, which are direct branches from the aorta, to supply blood to the trachea. The lymphatic vessels of the trachea drain into the pretracheal nodes that lie in front of the trachea, and the paratracheal lymph nodes that lie beside it.
Development
In the fourth week of development of the human embryo as the respiratory bud grows, the trachea separates from the foregut through the formation of ridges which eventually separate the trachea from the oesophagus, the tracheoesophageal septum. This separates the future trachea from the oesophagus and divides the foregut tube into the laryngotracheal tube. By the start of the fifth week, the left and right main bronchi have begun to form, initially as buds at the terminal end of the trachea.
The trachea is no more than 4 mm in diameter during the first year of life, expanding to its adult diameter of approximately 2 cm by late childhood. The trachea is more circular and more vertical in children compared to adults, varies more in size, and also varies more in its position in relation to its surrounding structures.
Microanatomy
The trachea is lined with a layer of column-shaped cells bearing cilia. The epithelium contains goblet cells, glandular column-shaped cells that produce mucins, the main component of mucus. Mucus helps to moisten and protect the airways. Mucus lines the ciliated cells of the trachea to trap inhaled foreign particles that the cilia then waft upward toward the larynx and then the pharynx, where it can be either swallowed into the stomach or expelled as phlegm. This self-clearing mechanism is termed mucociliary clearance. Directly beneath this mucus layer lies the submucosa, which is composed primarily of fibrous connective tissue and connects the mucosa to the rings of hyaline cartilage beneath.
The trachea is surrounded by 16 to 20 rings of hyaline cartilage; these 'rings' are incomplete and C-shaped. Two or more of the cartilages often unite, partially or completely, and they are sometimes bifurcated at their extremities. The rings are generally highly elastic but they may calcify with age.
Function
The trachea's main function is to transport air to and from the lungs. It also helps to warm, humidify, and filter the air before it reaches the lungs.
The trachea is made up of rings of cartilage, which help to keep it open and prevent it from collapsing. The inside of the trachea is lined with a mucous membrane, which produces mucus to help trap dirt and dust particles. The cilia, which are tiny hairs that line the mucous membrane, help to move the mucus and trapped particles up and out of the trachea.
Clinical significance
Inflammation and infection
Inflammation of the trachea is known as tracheitis, usually due to an infection. It is usually caused by viral infections, with bacterial infections occurring almost entirely in children. Most commonly, infections occur with inflammation of other parts of the respiratory tract, such as the larynx and bronchi, known as croup, however bacterial infections may also affect the trachea alone, although they are often associated with a recent viral infection. Viruses that cause croup are generally the parainfluenza viruses 1–3, with influenza viruses A and B also causing croup, but usually causing more serious infections; bacteria may also cause croup and include Staphylococcus aureus, Haemophilus influenzae, Streptococcus pneumoniae and Moraxella catarrhalis. Causes of bacterial infection of the trachea are most commonly Staphylococcus aureus and Streptococcus pneumoniae. In patients who are in hospital, additional bacteria that may cause tracheitis include Escherichia coli, Klebsiella pneumoniae, and Pseudomonas aeruginosa.
A person affected with tracheitis may start with symptoms that suggest an upper respiratory tract infection such as a cough, sore throat, or coryzal symptoms such as a runny nose. Fevers may develop and an affected child may develop difficulty breathing and sepsis. Swelling of the airway can cause narrowing of the airway, causing a hoarse breathing sound called stridor, or even cause complete blockage. Up to 80% of people affected by bacterial tracheitis require the use of mechanical ventilation, and treatment may include endoscopy for the purposes of acquiring microbiological specimens for culture and sensitivity, as well as removal of any dead tissue associated with the infection. Treatment in such situations usually includes antibiotics.
Narrowing
A trachea may be narrowed or compressed, usually a result of enlarged nearby lymph nodes; cancers of the trachea or nearby structures; large thyroid goitres; or rarely as a result of other processes such as unusually swollen blood vessels. Scarring from tracheobronchial injury or intubation; or inflammation associated with granulomatosis with polyangiitis may also cause a narrowing of the trachea (tracheal stenosis). Obstruction invariably causes a harsh breathing sound known as stridor. A camera inserted via the mouth down into the trachea, called bronchoscopy, may be performed to investigate the cause of an obstruction. Management of obstructions depends on the cause. Obstructions as a result of malignancy may be managed with surgery, chemotherapy or radiotherapy. A stent may be inserted over the obstruction. Benign lesions, such as narrowing resulting from scarring, are likely to be surgically excised.
One cause of narrowing is tracheomalacia, which is the tendency for the trachea to collapse when there is increased external pressure, such as when airflow is increased during breathing in or out, due to decreased compliance. It can be due to congenital causes, or due to things that develop after birth, such as compression from nearby masses or swelling, or trauma. Congenital tracheomalacia can occur by itself or in association with other abnormalities such as bronchomalacia or laryngomalacia, and abnormal connections between the trachea and the oesophagus, amongst others. Congenital tracheomalacia often improves without specific intervention; when required, interventions may include beta agonists and muscarinic agonists, which enhance the tone of the smooth muscle surrounding the trachea; positive pressure ventilation, or surgery, which may include the placement of a stent, or the removal of the affected part of the trachea. In dogs, particularly miniature dogs and toy dogs, tracheomalacia, as well as bronchomalacia, can lead to tracheal collapse, which often presents with a honking goose-like cough.
Injury
The trachea may be injured by trauma such as in a vehicle accident, or intentionally by another wilfully inflicting damage for example as practiced in some martial arts.
Intubation
Tracheal intubation refers to the insertion of a tube down the trachea. This procedure is commonly performed during surgery in order to ensure a person receives enough oxygen when sedated. The tube is connected to a machine that monitors airflow, oxygenation, and several other metrics. This is often one of the responsibilities of an anaesthetist during surgery.
In an emergency, or when tracheal intubation is deemed impossible, a tracheotomy is often performed to insert a tube for ventilation, usually when it is needed for particular types of surgery so that the airway can be kept open. The opening provided via a tracheotomy is called a tracheostomy. Another procedure that can be carried out in an emergency is a cricothyrotomy.
Congenital disorders
Tracheal agenesis is a rare birth defect in which the trachea fails to develop. The defect is usually fatal though sometimes surgical intervention has been successful.
A tracheoesophageal fistula is a congenital defect in which the trachea and esophagus are abnormally connected. This is because of abnormalities in the separation between the trachea and oesophagus during development. It occurs in approximately 1 in 3,000 births, and the most common abnormality is a separation of the upper and lower ends of the oesophagus, with the upper end finishing in a closed pouch. Other abnormalities may be associated with this, including cardiac abnormalities or VACTERL syndrome. Such fistulas may be detected before a baby is born because of excess amniotic fluid; after birth, they are often associated with pneumonitis and pneumonia because of aspiration of food contents. Congenital fistulas are often treated by surgical repair. In adults, fistulas may occur because of erosion by nearby malignant tumours, which erode into both the trachea and the oesophagus. Initially, these often result in coughing from swallowed contents of the oesophagus that are aspirated through the trachea, often progressing to fatal pneumonia; there is rarely a curative treatment. A tracheo-oesophageal puncture is a surgically created hole between the trachea and the esophagus in a person who has had their larynx removed. Air travels upwards from the surgical connection to the upper oesophagus and the pharynx, creating vibrations that produce sound that can be used for speech. The purpose of the puncture is to restore a person's ability to speak after the vocal cords have been removed.
Sometimes, as an anatomical variation, one or more of the tracheal rings are formed as complete rings rather than horseshoe-shaped rings. These O rings are smaller than the normal C-shaped rings and can cause narrowing (stenosis) of the trachea, resulting in breathing difficulties. An operation called a slide tracheoplasty can open up the rings and rejoin them as wider rings, shortening the length of the trachea. Slide tracheoplasty is said to be the best option in treating tracheal stenosis.
Mounier-Kuhn syndrome is a rare congenital disorder of an abnormally enlarged trachea, characterised by absent elastic fibres, smooth muscle thinning, and a tendency to get recurrent respiratory tract infections.
Replacement
Since 2008, operations have experimentally replaced tracheas with those grown from stem cells or with synthetic substitutes; however, this is regarded as experimental and there is no standardised method. Ensuring adequate blood supply to the replaced trachea is considered a major challenge to any replacement. Additionally, no evidence has been found to support the placement of stem cells taken from bone marrow on the trachea as a way of stimulating tissue regeneration, and such a method remains hypothetical.
In January 2021, surgeons at Mount Sinai Hospital in New York performed the first complete trachea transplantation. The 18-hour procedure included harvesting a trachea from a donor and implanting it in the patient, connecting numerous veins and arteries to provide sufficient blood flow to the organ.
Other animals
Allowing for variations in the length of the neck, the trachea in other mammals is, in general, similar to that in humans, and is also broadly similar to the reptilian trachea.
Vertebrates
In birds, the trachea runs from the pharynx to the syrinx, from which the primary bronchi diverge. Swans have an unusually elongated trachea, part of which is coiled beneath the sternum; this may act as a resonator to amplify sound. In some birds, the tracheal rings are complete, and may even be ossified.
In amphibians, the trachea is normally extremely short, and leads directly into the lungs, without clear primary bronchi. A longer trachea is, however, found in some long-necked salamanders, and in caecilians. While there are irregular cartilaginous nodules on the amphibian trachea, these do not form the rings found in amniotes.
The only vertebrates to have lungs, but no trachea, are the lungfish and the Polypterus, in which the lungs arise directly from the pharynx.
Invertebrates
The word trachea is used to define a very different organ in invertebrates than in vertebrates. Insects have an open respiratory system made up of spiracles, tracheae, and tracheoles to transport metabolic gases to and from tissues. The distribution of spiracles can vary greatly among the many orders of insects, but in general each segment of the body can have only one pair of spiracles, each of which connects to an atrium and has a relatively large tracheal tube behind it. The tracheae are invaginations of the cuticular exoskeleton that branch (anastomose) throughout the body with diameters from only a few micrometres up to 0.8 mm. Diffusion of oxygen and carbon dioxide takes place across the walls of the smallest tubes, called tracheoles, which penetrate tissues and even indent individual cells. Gas may be conducted through the respiratory system by means of active ventilation or passive diffusion. Unlike vertebrates, insects do not generally carry oxygen in their hemolymph.
This is one of the factors that may limit their size.
A tracheal tube may contain ridge-like circumferential rings of taenidia in various geometries such as loops or helices. Taenidia provide strength and flexibility to the trachea. In the head, thorax, or abdomen, tracheae may also be connected to air sacs. Many insects, such as grasshoppers and bees, which actively pump the air sacs in their abdomen, are able to control the flow of air through their body. In some aquatic insects, the tracheae exchange gas through the body wall directly, in the form of a gill, or function essentially as normal, via a plastron. Note that despite being internal, the tracheae of arthropods are lined with cuticular tissue and are shed during moulting (ecdysis).
Additional images
| Biology and health sciences | Respiratory system | Biology |
70374 | https://en.wikipedia.org/wiki/%C3%98resund%20Bridge | Øresund Bridge | The Øresund or Öresund Bridge is a combined railway and motorway cable-stayed bridge across the Øresund strait between Denmark and Sweden. It is the second longest bridge in Europe with both roadway and railway combined in a single structure, running nearly from the Swedish coast to the artificial island Peberholm in the middle of the strait. The crossing is completed by the Drogden Tunnel from Peberholm to the Danish island of Amager.
The bridge connects the road and rail networks of the Scandinavian Peninsula with those of Central and Western Europe. A data cable also makes the bridge the backbone of Internet data transmission between central Europe and Sweden. The international European route E20 crosses via road, the Øresund Line via railway. The construction of the Great Belt Fixed Link (1988–1998), connecting Zealand to Funen and thence to the Jutland Peninsula, and the Øresund Bridge have connected Central and Western Europe to Sweden by road and rail.
The bridge was designed by Jørgen Nissen and Klaus Falbe Hansen from Ove Arup and Partners, and Niels Gimsing and Georg Rotne.
The justification for the additional expenditure and complexity related to digging a tunnel for part of the way, rather than raising that section of the bridge, was to avoid interfering with air traffic from the nearby Copenhagen Airport, to provide a clear channel for ships in good weather or bad, and to prevent ice floes from blocking the strait. Construction began in 1995, with the bridge opening to traffic on 1 July 2000. The bridge received the 2002 IABSE Outstanding Structure Award.
History
Ideas for a fixed link across the Øresund were advanced as early as the first decade of the 20th century. In 1910, proposals were put to the Swedish Parliament for a railway tunnel across the strait, which would have comprised two tunnelled sections linked by a surface road across the island of Saltholm. The concept of a bridge over the Øresund was first formally proposed in 1936 by a consortium of engineering firms who proposed a national motorway network for Denmark.
The idea was dropped during World War II, but picked up again thereafter and studied in significant detail in various Danish-Swedish government commissions through the 1950s and 1960s. However, disagreement existed regarding the placement and exact form of the link, with some arguing for a link at the narrowest point of the sound at Helsingør–Helsingborg, further north of Copenhagen, and some arguing for a more direct link from Copenhagen to Malmö. Additionally, some regional and local interests argued that other bridge and road projects, notably the then-unbuilt Great Belt Fixed Link, should take priority. The governments of Denmark and Sweden eventually signed an agreement to build a fixed link in 1973. It would have comprised a bridge between Malmö and Saltholm, with a tunnel linking Saltholm to Copenhagen, and would have been accompanied by a second rail tunnel across the Øresund between Helsingør and Helsingborg.
However, that project was cancelled in 1978 due to the economic situation, and growing environmental concerns. As the economic situation improved in the 1980s, interest resumed and the governments signed a new agreement in 1991.
An OMEGA centre report identified the following as primary motivations for construction of the bridge:
to improve transport links in northern Europe, from Hamburg to Oslo;
regional development around the Øresund as an answer to the intensifying globalisation process and Sweden's decision to apply for membership of the European Community;
connecting the two largest cities of the region, which were both experiencing economic difficulties;
improving communications to Kastrup airport, the main flight transport hub in the region.
A joint venture of Hochtief, Skanska, Højgaard & Schultz and Monberg & Thorsen (the same consortium as for the earlier Great Belt Fixed Link) began construction of the bridge in 1995 and completed it on 14 August 1999. Crown Prince Frederik of Denmark and Crown Princess Victoria of Sweden met midway across the bridge-tunnel on 14 August 1999 to celebrate its completion. The official dedication took place on 1 July 2000, with Queen Margrethe II of Denmark and King Carl XVI Gustaf of Sweden as the hostess and host of the ceremony. Because of the death of nine people, including three Danes and three Swedes, at the Roskilde Festival the evening before, the ceremony opened with a minute of silence. The bridge-tunnel opened for public traffic later that day. On 12 June 2000, two weeks before the dedication, 79,871 runners competed in Broloppet, a half marathon from Amager, Denmark, to Skåne, Sweden.
Despite two schedule setbacks – the discovery of 16 unexploded World War II bombs on the seafloor and an inadvertently skewed tunnel segment – the bridge-tunnel was finished three months ahead of schedule.
Although traffic between Denmark and Sweden increased by 61 percent in the first year after the bridge opened, traffic levels were not as high as expected, perhaps due to high tolls. However, since 2005, traffic levels have increased rapidly. This may be due to Danes buying homes in Sweden to take advantage of lower housing prices in Malmö and commuting to work in Denmark. In 2012, to cross by car cost DKK 310, SEK 375 or €43, with discounts of up to 75% available to regular users. In 2007, almost 25 million people travelled over the Øresund Bridge: 15.2 million by car and bus and 9.6 million by train. By 2009, the figure had risen to 35.6 million by car, coach or train.
Link features
Bridge
At , the bridge covers half the distance between Sweden and the Danish island of Amager, the border between the two countries being from the Swedish end. The structure has a mass of 82,000 tonnes and supports two railway tracks beneath four road lanes in a horizontal girder extending along the entire length of the bridge. On both approaches to the three cable-stayed bridge sections, the girder is supported every by concrete piers. The two pairs of free-standing cable-supporting towers are high allowing shipping of head room under the main span, but most ships' captains prefer to pass through the unobstructed Drogden Strait above the Drogden Tunnel. The cable-stayed main span is long. A girder and cable-stayed design was chosen to provide the specific rigidity necessary to carry heavy rail traffic, and also to resist large accumulations of ice.
The bridge experiences occasional brief closures during very severe weather, such as the St. Jude storm of October 2013.
Due to high longitudinal and transverse loads acting over the bridge and to accommodate movements between the superstructure and substructure, it has bearings weighing up to each, capable of bearing vertical loads up to in a longitudinal direction and up to in transverse direction. The design, manufacturing and installation of the bearings were carried out by the Swiss civil engineering firm Mageba.
Vibration issues, caused by several cables in the bridge moving under certain wind and temperature conditions, were combatted with compression spring dampers installed in pairs at the centre of the cables. Two of these dampers were equipped with laser gauges for ongoing monitoring. Testing, development and installation of these spring dampers were carried out by the specialist firm European Springs.
Peberholm
The bridge joins Drogden tunnel on the artificial island of Peberholm (Pepper Islet). The Danes chose the name to complement the natural island of Saltholm (Salt Islet) just to the north. Peberholm is a designated nature reserve built from Swedish rock and the soil dredged up during the bridge and tunnel construction, approximately long with an average width of . It is high.
Drogden Tunnel
The connection between Peberholm and the artificial peninsula at Kastrup on Amager island, the nearest populated part of Denmark, is through the long Drogden Tunnel (Drogdentunnelen). It comprises an immersed tube plus entry tunnels at each end. The tube tunnel is made from 20 prefabricated reinforced concrete segments – the largest in the world at 55,000 tonnes each – interconnected in a trench dug in the seabed. Two tubes in the tunnel carry railway tracks, two carry roads and a small fifth tube is provided for emergencies. The tubes are arranged side by side.
Rail transport
The rail link is operated jointly by the Swedish Transport Administration (Trafikverket) and the Danish railway infrastructure manager Banedanmark. Passenger train service is commissioned by Skånetrafiken and the Danish Civil Aviation and Railway Authority (Trafikstyrelsen) under the Øresundståg brand, with Transdev and DSB being the current operators. A series of new dual-voltage trains was developed, linking the Copenhagen area with Malmö and southern Sweden as far as Gothenburg and Kalmar. SJ operates X2000 trains over the bridge, with connections to Gothenburg and Stockholm. Copenhagen Airport at Kastrup has its own railway station close to the western bridgehead. Since December 2022, trains operate typically every 15 minutes during the day, reducing to once an hour during the night in both directions. Additional Øresundståg trains are operated at rush hour. Freight trains also use the crossing.
The rail section is double track and capable of speeds of up to , but slower in Denmark, especially in the tunnel section. There were challenges related to the difference in electrification and signalling between the Danish and Swedish railway networks. The solution chosen is to switch the electrical system from Swedish 15 kV, 16.7 Hz to Danish 25 kV, 50 Hz before the eastern bridgehead at Lernacken in Sweden. The line is signalled according to the standard Swedish system across the length of the bridge. On Peberholm the line switches to Danish signalling, which continues into the tunnel. There is no way of changing between a locomotive for Danish standard and one for Swedish standard. All rail vehicles using the bridge must be custom made for the standards of both countries.
Trains run on the left in Sweden, and on the right in Denmark. Initially the switch was made at Malmö Central Station, a terminus at that time. After the 2010 inauguration of the Malmö City Tunnel connection, a tunnel was built at Burlöv, north of Malmö, where the two southbound tracks cross over the northbound pair. The railway in Malmö thus uses the Danish standard.
Border checks
With both Sweden and Denmark being part of the Nordic Passport Union since the 1950s, border controls between the two countries have been abolished for decades and travellers can normally move freely across the Øresund Bridge. In 2001, both countries also joined the Schengen area, and since then the abolition of border controls is primarily regulated by European Union law, more specifically the Schengen acquis.
However, in November 2015, during the European migrant crisis, Sweden introduced temporary border controls at the border to Denmark in accordance with the provisions of the Schengen acquis on the reintroduction of temporary internal border controls. As such, travellers into Sweden from Denmark (but not travellers into Denmark from Sweden) must show a valid passport or national ID card (citizens of EU/EEA countries) or passport and entry visa (if required) for nationals of other non-EU/EEA countries. The move marked a break with 60 years of border-control-free travel between the Nordic countries. In January 2016, these border measures were extended by a special carriers' liability, forcing carriers (such as bus, train and ferry companies) to check the identity of all passengers from Denmark before they boarded a bus, train or ferry to Sweden. Carriers faced a fine of SEK 50,000 for transporting passengers without such identity documents. This led to checks being carried out by private security guards at, for instance, the rail station at Kastrup airport in Denmark, a move unpopular with passengers because of the delays it imposed.
In May 2017, Sweden removed the carriers' liability, but the ordinary border controls carried out by the Swedish Police Authority remained on the Swedish side of the Øresund Bridge. In accordance with the Schengen Borders Code, these border controls are only allowed for a period of six months at a time, and therefore have to be renewed twice a year.
Costs and benefits
The cost for the Øresund Connection, including motorway and railway connections on land, was DKK 30.1 billion (~€4.0 billion) at 2000 prices, with the cost of the bridge expected in 2003 to be recouped by 2037.
In 2006, Sweden began work on the Malmö City Tunnel, a SEK 9.45 billion connection with the bridge that was completed in December 2010. The Øresund connection is entirely user-financed. The owner company is owned half by the Danish state and half by the Swedish state. This company has taken loans guaranteed by the two governments to finance the connection, and the user fees are its only income. After the increase in traffic, these fees are enough to pay the interest and begin repaying the loans, which is expected to take about 30 years.
Taxpayers have paid for neither the bridge nor the tunnel, but tax money has been used for the land connections. On the Danish side, the land connection has domestic benefits, mainly to connect the airport to the railway network. The Malmö City Tunnel has the benefit of connecting the southern part of the inner city to the rail network and allowing many more trains to and from Malmö.
According to the Öresund Committee, the bridge has produced a national economic gain of DKK 57 billion, or SEK 78 billion (~€8.41 billion), on both sides of the strait through increased commuting and lower commuting expenses. The gain is estimated to be SEK 6.5 billion per year, but this could be increased to 7.7 billion by removing the three biggest obstacles to integration and mobility, the two largest being that non-EU nationals in Sweden are not allowed to work in Denmark and that many professional qualifications and merits are not mutually recognised.
A 2021 study found that the bridge led to an increase in innovation in Malmö. The key mechanism appears to be that high-skilled workers were drawn to Malmö. A 2022 study found that the bridge caused an increase of 13.5% in the average wage of workers in the region, as the bridge expanded the size of the labor market.
Cultural references
The bridge lends its name to the Nordic noir television series The Bridge, which is set in the region around the bridge.
When Malmö hosted the Eurovision Song Contest 2013, the bridge was the inspiration for a similar element in the set design, symbolising the connection between Sweden and the rest of Europe.
The bridge was the inspiration behind the 2014 song "Walk Me to the Bridge" by Manic Street Preachers from their album Futurology.
Environmental effects
The underwater parts of the bridge have become covered in marine organisms and act as an artificial reef.
| Technology | Multi-modal crossings | null |
70423 | https://en.wikipedia.org/wiki/Esker | Esker | An esker, eskar, eschar, or os, sometimes called an asar, osar, or serpent kame, is a long, winding ridge of stratified sand and gravel, examples of which occur in glaciated and formerly glaciated regions of Europe and North America. Eskers are frequently several kilometres long and, because of their uniform shape, look like railway embankments.
Etymology
The term esker is derived from the Irish word eiscir, which means "ridge or elevation, especially one separating two plains or depressed surfaces". The Irish word was and is used particularly to describe long sinuous ridges, which are now known to be deposits of fluvio-glacial material. The best-known example of such an eiscir is the Eiscir Riada, which runs nearly the whole width of Ireland from Dublin to Galway, a distance of , and is still closely followed by the main Dublin–Galway road.
The synonym os comes from the Swedish word , "ridge".
Geology
Most eskers are argued to have formed within ice-walled tunnels by streams that flowed within and under glaciers. They tended to form around the time of the glacial maximum, when the glacier was slow and sluggish. After the retaining ice walls melted away, stream deposits remained as long winding ridges.
Eskers may also form above glaciers by accumulation of sediment in supraglacial channels, in crevasses, in linear zones between stagnant blocks, or in narrow embayments at glacier margins. Eskers form near the terminal zone of glaciers, where the ice is not moving as fast and is relatively thin.
Plastic flow and melting of the basal ice determines the size and shape of the subglacial tunnel. This in turn determines the shape, composition and structure of an esker. Eskers may exist as a single channel, or may be part of a branching system with tributary eskers. They are not often found as continuous ridges, but have gaps that separate the winding segments. The ridge crests of eskers are not usually level for very long, and are generally knobby. Eskers may be broad-crested or sharp-crested with steep sides. They can reach hundreds of kilometers in length and are generally in height.
The path of an esker is governed by its water pressure in relation to the overlying ice. Generally, the pressure of the ice was at such a point that it would allow eskers to run in the direction of glacial flow, but force them into the lowest possible points such as valleys or river beds, which may deviate from the direct path of the glacier. This process is what produces the wide eskers upon which roads and highways can be built. Less pressure, occurring in areas closer to the glacial maximum, can cause ice to melt over the stream flow and create steep-walled, sharply-arched tunnels.
The concentration of rock debris in the ice and the rate at which sediment is delivered to the tunnel by melting and from upstream transport determines the amount of sediment in an esker. The sediment generally consists of coarse-grained, water-laid sand and gravel, although gravelly loam may be found where the rock debris is rich in clay. This sediment is stratified and sorted, and usually consists of pebble/cobble-sized material with occasional boulders. Bedding may be irregular but is almost always present, and cross-bedding is common.
There are various cases where inland dunes have developed next to eskers after deglaciation. These dunes are often found in the leeward side of eskers, if the esker is not oriented parallel to prevailing winds. Examples of dunes developed on eskers can be found in both Swedish and Finnish Lapland.
Lakes may form within depressions in eskers. These lakes can lack surface outflows and inflows and have drastic fluctuations over time.
Life on eskers
Eskers are critical to the ecology of Northern Canada. Several plants that grow on eskers, including bear root and cranberries, are important food for bears and migrating waterfowl; animals from grizzly bears to tundra wolves to ground squirrels can burrow into the eskers to survive the long winters.
Examples of eskers
Europe
In Sweden, Uppsalaåsen stretches for and passes through Uppsala city. The Badelundaåsen esker runs for over from Nyköping to lake Siljan. Pispala, in Tampere, Finland, lies on the Pyynikki esker between two lakes carved by glaciers. A similar site is Punkaharju in Finnish Lakeland.
The village of Kemnay in Aberdeenshire, Scotland has an esker locally called the Kemb Hills. In Berwickshire in southeast Scotland is Bedshiel Kaims, an example which is up to high and is a legacy of an ice-stream within the Tweed Valley.
North America
Great Esker Park runs along the Back River in Weymouth, Massachusetts, and is home to the highest esker in North America.
There are over 1,000 eskers in the state of Michigan, primarily in the south-central Lower Peninsula. The longest esker in Michigan is the Mason Esker, which stretches south-southeast from DeWitt through Lansing and Holt, before ending near Mason.
Esker systems in the U.S. state of Maine can be traced for up to .
Thelon Esker is almost long, straddling the boundary between the territories of Nunavut and Northwest Territories in Canada.
Uvayuq or Mount Pelly, in Ovayok Territorial Park, the Kitikmeot Region, Nunavut is an esker.
Roads are sometimes built along eskers to save expense. Examples include the Denali Highway in Alaska, the Trans-Taiga Road in Quebec, and the "Airline" segment of Maine State Route 9 between Bangor and Calais.
There are numerous long eskers in the Adirondack State Park in upstate New York. The Rainbow Lake esker bisects the eponymous lake and extends discontinuously for 85 miles (c. 137 km). Another long discontinuous esker extends from Mountain Pond through Keese Mill, passing between Upper St. Regis Lake and the Spectacle Ponds, and continuing to Ochre, Fish, and Lydia Ponds in the St. Regis Canoe Area. A 150-foot-high esker bisects the Five Ponds Wilderness Area.
| Physical sciences | Glacial landforms | Earth science |
70425 | https://en.wikipedia.org/wiki/Inflammation | Inflammation | Inflammation (from ) is part of the biological response of body tissues to harmful stimuli, such as pathogens, damaged cells, or irritants. The five cardinal signs are heat, pain, redness, swelling, and loss of function (Latin calor, dolor, rubor, tumor, and functio laesa).
Inflammation is a generic response, and therefore is considered a mechanism of innate immunity, whereas adaptive immunity is specific to each pathogen.
Inflammation is a protective response involving immune cells, blood vessels, and molecular mediators. The function of inflammation is to eliminate the initial cause of cell injury, clear out damaged cells and tissues, and initiate tissue repair. Too little inflammation could lead to progressive tissue destruction by the harmful stimulus (e.g. bacteria) and compromise the survival of the organism. However, inflammation can also have negative effects. Too much inflammation, in the form of chronic inflammation, is associated with various diseases, such as hay fever, periodontal disease, atherosclerosis, and osteoarthritis.
Inflammation can be classified as acute or chronic. Acute inflammation is the initial response of the body to harmful stimuli, and is achieved by the increased movement of plasma and leukocytes (in particular granulocytes) from the blood into the injured tissues. A series of biochemical events propagates and matures the inflammatory response, involving the local vascular system, the immune system, and various cells in the injured tissue. Prolonged inflammation, known as chronic inflammation, leads to a progressive shift in the type of cells present at the site of inflammation, such as mononuclear cells, and involves simultaneous destruction and healing of the tissue.
Inflammation has also been classified as Type 1 and Type 2 based on the type of cytokines and helper T cells (Th1 and Th2) involved.
Meaning
The earliest known reference for the term inflammation is around the early 15th century. The word root comes from Old French inflammation around the 14th century, which then comes from Latin inflammatio or inflammationem. Literally, the term relates to the word "flame", as the property of being "set on fire" or "to burn".
The term inflammation is not a synonym for infection. Infection describes the interaction between the action of microbial invasion and the reaction of the body's inflammatory response—the two components are considered together in discussion of infection, and the word is used to imply a microbial invasive cause for the observed inflammatory reaction. Inflammation, on the other hand, describes just the body's immunovascular response, regardless of cause. But, because the two are often correlated, words ending in the suffix -itis (which means inflammation) are sometimes informally described as referring to infection: for example, the word urethritis strictly means only "urethral inflammation", but clinical health care providers usually discuss urethritis as a urethral infection because urethral microbial invasion is the most common cause of urethritis. However, the inflammation–infection distinction is crucial in situations in pathology and medical diagnosis that involve inflammation that is not driven by microbial invasion, such as cases of atherosclerosis, trauma, ischemia, and autoimmune diseases (including type III hypersensitivity).
Causes
Types
Appendicitis
Bursitis
Colitis
Cystitis
Dermatitis
Epididymitis
Encephalitis
Gingivitis
Meningitis
Myelitis
Myocarditis
Nephritis
Neuritis
Pancreatitis
Periodontitis
Pharyngitis
Phlebitis
Prostatitis
RSD/CRPS
Rhinitis
Sinusitis
Tendonitis
Tonsillitis
Urethritis
Vasculitis
Vaginitis
Acute
Acute inflammation is a short-term process, usually appearing within a few minutes or hours, and begins to cease upon the removal of the injurious stimulus. It involves a coordinated local mobilization of various immune, endocrine and neurological mediators. In a normal healthy response, the process becomes activated, clears the pathogen, begins repair, and then ceases.
Acute inflammation occurs immediately upon injury, lasting only a few days. Cytokines and chemokines promote the migration of neutrophils and macrophages to the site of inflammation. Pathogens, allergens, toxins, burns, and frostbite are some of the typical causes of acute inflammation. Toll-like receptors (TLRs) recognize microbial pathogens. Acute inflammation can be a defensive mechanism to protect tissues against injury. Inflammation lasting 2–6 weeks is designated subacute inflammation.
Cardinal signs
Inflammation is characterized by five cardinal signs, (the traditional names of which come from Latin):
Dolor (pain)
Calor (heat)
Rubor (redness)
Tumor (swelling)
Functio laesa (loss of function)
The first four (classical signs) were described by Celsus (–38 AD).
Pain is due to the release of chemicals such as bradykinin and histamine that stimulate nerve endings. Acute inflammation of the lung (usually in response to pneumonia) does not cause pain unless the inflammation involves the parietal pleura, which does have pain-sensitive nerve endings. Heat and redness are due to increased blood flow at body core temperature to the inflamed site. Swelling is caused by accumulation of fluid.
Loss of function
The fifth sign, loss of function, is believed to have been added later by Galen, Thomas Sydenham or Rudolf Virchow. Examples of loss of function include pain that inhibits mobility, severe swelling that prevents movement, having a worse sense of smell during a cold, or having difficulty breathing when bronchitis is present. Loss of function has multiple causes.
Acute process
The process of acute inflammation is initiated by resident immune cells already present in the involved tissue, mainly resident macrophages, dendritic cells, histiocytes, Kupffer cells and mast cells. These cells possess surface receptors known as pattern recognition receptors (PRRs), which recognize (i.e., bind) two subclasses of molecules: pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs). PAMPs are compounds that are associated with various pathogens, but which are distinguishable from host molecules. DAMPs are compounds that are associated with host-related injury and cell damage.
At the onset of an infection, burn, or other injuries, these cells undergo activation (one of the PRRs recognizes a PAMP or DAMP) and release inflammatory mediators responsible for the clinical signs of inflammation. Vasodilation and its resulting increased blood flow causes the redness (rubor) and increased heat (calor). Increased permeability of the blood vessels results in an exudation (leakage) of plasma proteins and fluid into the tissue (edema), which manifests itself as swelling (tumor). Some of the released mediators such as bradykinin increase the sensitivity to pain (hyperalgesia, dolor). The mediator molecules also alter the blood vessels to permit the migration of leukocytes, mainly neutrophils and macrophages, out of the blood vessels (extravasation) and into the tissue. The neutrophils migrate along a chemotactic gradient created by the local cells to reach the site of injury. The loss of function (functio laesa) is probably the result of a neurological reflex in response to pain.
In addition to cell-derived mediators, several acellular biochemical cascade systems—consisting of preformed plasma proteins—act in parallel to initiate and propagate the inflammatory response. These include the complement system activated by bacteria and the coagulation and fibrinolysis systems activated by necrosis (e.g., burn, trauma).
Acute inflammation may be regarded as the first line of defense against injury. Acute inflammatory response requires constant stimulation to be sustained. Inflammatory mediators are short-lived and are quickly degraded in the tissue. Hence, acute inflammation begins to cease once the stimulus has been removed.
Chronic
Chronic inflammation is inflammation that lasts for months or years. Macrophages, lymphocytes, and plasma cells predominate in chronic inflammation, in contrast to the neutrophils that predominate in acute inflammation. Diabetes, cardiovascular disease, allergies, and chronic obstructive pulmonary disease are examples of diseases mediated by chronic inflammation. Obesity, smoking, stress, and a poor diet are some of the factors that promote chronic inflammation.
Cardinal signs
Common signs and symptoms that develop during chronic inflammation are:
Body pain, arthralgia, myalgia
Chronic fatigue and insomnia
Depression, anxiety and mood disorders
Gastrointestinal complications such as constipation, diarrhea, and acid reflux
Weight gain or loss
Frequent infections
Vascular component
Vasodilation and increased permeability
As defined, acute inflammation is an immunovascular response to inflammatory stimuli, which can include infection or trauma. This means acute inflammation can be broadly divided into a vascular phase that occurs first, followed by a cellular phase involving immune cells (more specifically myeloid granulocytes in the acute setting). The vascular component of acute inflammation involves the movement of plasma fluid, containing important proteins such as fibrin and immunoglobulins (antibodies), into inflamed tissue.
Upon contact with PAMPs, tissue macrophages and mastocytes release vasoactive amines such as histamine and serotonin, as well as eicosanoids such as prostaglandin E2 and leukotriene B4, to remodel the local vasculature. Macrophages and endothelial cells release nitric oxide. These mediators vasodilate and permeabilize the blood vessels, which results in the net distribution of blood plasma from the vessel into the tissue space. The increased collection of fluid in the tissue causes it to swell (edema). This exuded tissue fluid contains various antimicrobial mediators from the plasma, such as complement, lysozyme, and antibodies, which can immediately damage microbes and opsonise them in preparation for the cellular phase. If the inflammatory stimulus is a lacerating wound, exuded platelets, coagulants, plasmin and kinins can clot the wounded area using vitamin K-dependent mechanisms and provide haemostasis in the first instance. These clotting mediators also provide a structural staging framework at the inflammatory tissue site in the form of a fibrin lattice, much as scaffolding does at a construction site, for the purpose of aiding phagocytic debridement and wound repair later on. Some of the exuded tissue fluid is also funneled by lymphatics to the regional lymph nodes, flushing bacteria along to start the recognition and attack phase of the adaptive immune system.
Acute inflammation is characterized by marked vascular changes, including vasodilation, increased permeability and increased blood flow, which are induced by the actions of various inflammatory mediators. Vasodilation occurs first at the arteriole level, progressing to the capillary level, and brings about a net increase in the amount of blood present, causing the redness and heat of inflammation. Increased permeability of the vessels results in the movement of plasma into the tissues, with resultant stasis due to the increase in the concentration of the cells within blood – a condition characterized by enlarged vessels packed with cells. Stasis allows leukocytes to marginate (move) along the endothelium, a process critical to their recruitment into the tissues. Normal flowing blood prevents this, as the shearing force along the periphery of the vessels moves cells in the blood into the middle of the vessel.
Plasma cascade systems
The complement system, when activated, creates a cascade of chemical reactions that promotes opsonization, chemotaxis, and agglutination, and produces the membrane attack complex (MAC).
The kinin system generates proteins capable of sustaining vasodilation and other physical inflammatory effects.
The coagulation system, or clotting cascade, forms a protective protein mesh over sites of injury.
The fibrinolysis system acts in opposition to the coagulation system, counterbalancing clotting and generating several other inflammatory mediators.
Plasma-derived mediators
Cellular component
The cellular component involves leukocytes, which normally reside in blood and must move into the inflamed tissue via extravasation to aid in inflammation. Some act as phagocytes, ingesting bacteria, viruses, and cellular debris. Others release enzymatic granules that damage pathogenic invaders. Leukocytes also release inflammatory mediators that develop and maintain the inflammatory response. In general, acute inflammation is mediated by granulocytes, whereas chronic inflammation is mediated by mononuclear cells such as monocytes and lymphocytes.
Leukocyte extravasation
Various leukocytes, particularly neutrophils, are critically involved in the initiation and maintenance of inflammation. These cells must be able to move to the site of injury from their usual location in the blood; therefore, mechanisms exist to recruit and direct leukocytes to the appropriate place. The process of leukocyte movement from the blood to the tissues through the blood vessels is known as extravasation, and can be broadly divided into a number of steps:
Leukocyte margination and endothelial adhesion: The white blood cells within the vessels, which are generally centrally located, move peripherally towards the walls of the vessels. Activated macrophages in the tissue release cytokines such as IL-1 and TNFα, which in turn leads to the production of chemokines that bind to proteoglycans, forming a gradient in the inflamed tissue and along the endothelial wall. Inflammatory cytokines induce the immediate expression of P-selectin on endothelial cell surfaces; P-selectin binds weakly to carbohydrate ligands on the surface of leukocytes and causes them to "roll" along the endothelial surface as bonds are made and broken. Cytokines released from injured cells induce the expression of E-selectin on endothelial cells, which functions similarly to P-selectin. Cytokines also induce the expression of integrin ligands such as ICAM-1 and VCAM-1 on endothelial cells, which mediate the adhesion and further slow leukocytes down. These weakly bound leukocytes are free to detach if not activated by chemokines produced in injured tissue after signal transduction via respective G protein-coupled receptors that activates integrins on the leukocyte surface for firm adhesion. Such activation increases the affinity of bound integrin receptors for ICAM-1 and VCAM-1 on the endothelial cell surface, firmly binding the leukocytes to the endothelium.
Migration across the endothelium, known as transmigration, via the process of diapedesis: Chemokine gradients stimulate the adhered leukocytes to move between adjacent endothelial cells. The endothelial cells retract and the leukocytes pass through the basement membrane into the surrounding tissue using adhesion molecules such as ICAM-1.
Movement of leukocytes within the tissue via chemotaxis: Leukocytes reaching the tissue interstitium bind to extracellular matrix proteins via expressed integrins and CD44 to prevent them from leaving the site. A variety of molecules behave as chemoattractants, for example, C3a or C5a (the anaphylatoxins), and cause the leukocytes to move along a chemotactic gradient towards the source of inflammation.
Phagocytosis
Extravasated neutrophils in the cellular phase come into contact with microbes at the inflamed tissue. Phagocytes express cell-surface endocytic pattern recognition receptors (PRRs) that have affinity and efficacy against non-specific pathogen-associated molecular patterns (PAMPs). Most PAMPs that bind to endocytic PRRs and initiate phagocytosis are cell wall components, including complex carbohydrates such as mannans and β-glucans, lipopolysaccharides (LPS), peptidoglycans, and surface proteins. Endocytic PRRs on phagocytes reflect these molecular patterns, with C-type lectin receptors binding to mannans and β-glucans, and scavenger receptors binding to LPS.
Upon endocytic PRR binding, actin-myosin cytoskeletal rearrangement adjacent to the plasma membrane occurs in a way that endocytoses the plasma membrane containing the PRR-PAMP complex, and the microbe. Phosphatidylinositol and Vps34-Vps15-Beclin1 signalling pathways have been implicated in trafficking the endocytosed phagosome to intracellular lysosomes, where fusion of the phagosome and the lysosome produces a phagolysosome. Reactive oxygen species, superoxides and hypochlorite bleach within the phagolysosomes then kill microbes inside the phagocyte.
Phagocytic efficacy can be enhanced by opsonization. Plasma derived complement C3b and antibodies that exude into the inflamed tissue during the vascular phase bind to and coat the microbial antigens. As well as endocytic PRRs, phagocytes also express opsonin receptors Fc receptor and complement receptor 1 (CR1), which bind to antibodies and C3b, respectively. The co-stimulation of endocytic PRR and opsonin receptor increases the efficacy of the phagocytic process, enhancing the lysosomal elimination of the infective agent.
Cell-derived mediators
Morphologic patterns
Specific patterns of acute and chronic inflammation are seen during particular situations that arise in the body, such as when inflammation occurs on an epithelial surface, or pyogenic bacteria are involved.
Granulomatous inflammation: Characterised by the formation of granulomas, they are the result of a limited but diverse number of diseases, which include among others tuberculosis, leprosy, sarcoidosis, and syphilis.
Fibrinous inflammation: Inflammation resulting in a large increase in vascular permeability allows fibrin to pass through the blood vessels. If an appropriate procoagulative stimulus is present, such as cancer cells, a fibrinous exudate is deposited. This is commonly seen in serous cavities, where the conversion of fibrinous exudate into a scar can occur between serous membranes, limiting their function. The deposit sometimes forms a pseudomembrane sheet. During inflammation of the intestine (pseudomembranous colitis), pseudomembranous tubes can be formed.
Purulent inflammation: Inflammation resulting in a large amount of pus, which consists of neutrophils, dead cells, and fluid. Infection by pyogenic bacteria such as staphylococci is characteristic of this kind of inflammation. Large, localised collections of pus enclosed by surrounding tissues are called abscesses.
Serous inflammation: Characterised by the copious effusion of non-viscous serous fluid, commonly produced by mesothelial cells of serous membranes, but may be derived from blood plasma. Skin blisters exemplify this pattern of inflammation.
Ulcerative inflammation: Inflammation occurring near an epithelium can result in the necrotic loss of tissue from the surface, exposing lower layers. The subsequent excavation in the epithelium is known as an ulcer.
Disorders
Inflammatory abnormalities are a large group of disorders that underlie a vast variety of human diseases. The immune system is often involved with inflammatory disorders, as demonstrated in both allergic reactions and some myopathies, with many immune system disorders resulting in abnormal inflammation. Non-immune diseases with causal origins in inflammatory processes include cancer, atherosclerosis, and ischemic heart disease.
Examples of disorders associated with inflammation include:
Acne vulgaris
Asthma
Autoimmune diseases
Autoinflammatory diseases
Celiac disease
Chronic prostatitis
Colitis
Diverticulitis
Familial Mediterranean Fever
Glomerulonephritis
Hidradenitis suppurativa
Hypersensitivities
Inflammatory bowel diseases
Interstitial cystitis
Lichen planus
Mast cell activation syndrome
Mastocytosis
Otitis
Pelvic inflammatory disease
Peripheral ulcerative keratitis
Pneumonia
Reperfusion injury
Rheumatic fever
Rheumatoid arthritis
Rhinitis
Sarcoidosis
Transplant rejection
Vasculitis
Atherosclerosis
Atherosclerosis, formerly considered a lipid storage disorder, is now understood as a chronic inflammatory condition involving the arterial walls. Research has established a fundamental role for inflammation in mediating all stages of atherosclerosis from initiation through progression and, ultimately, the thrombotic complications from it. These new findings reveal links between traditional risk factors like cholesterol levels and the underlying mechanisms of atherogenesis.
Clinical studies have shown that this emerging biology of inflammation in atherosclerosis applies directly to people. For instance, elevation in markers of inflammation predicts outcomes of people with acute coronary syndromes, independently of myocardial damage. In addition, low-grade chronic inflammation, as indicated by levels of the inflammatory marker C-reactive protein, prospectively defines risk of atherosclerotic complications, thus adding to prognostic information provided by traditional risk factors, such as LDL levels.
Moreover, certain treatments that reduce coronary risk also limit inflammation. Notably, lipid-lowering medications such as statins have shown anti-inflammatory effects, which may contribute to their efficacy beyond just lowering LDL levels. This emerging understanding of inflammation's role in atherosclerosis has had significant clinical implications, influencing both risk stratification and therapeutic strategies.
Emerging treatments
Recent developments in the treatment of atherosclerosis have focused on addressing inflammation directly. New anti-inflammatory drugs, such as monoclonal antibodies targeting IL-1β, have been studied in large clinical trials, showing promising results in reducing cardiovascular events. These drugs offer a potential new avenue for treatment, particularly for patients who do not respond adequately to statins. However, concerns about long-term safety and cost remain significant barriers to widespread adoption.
Allergy
An allergic reaction, formally known as type 1 hypersensitivity, is the result of an inappropriate immune response triggering inflammation, vasodilation, and nerve irritation. A common example is hay fever, which is caused by a hypersensitive response by mast cells to allergens. Pre-sensitised mast cells respond by degranulating, releasing vasoactive chemicals such as histamine. These chemicals propagate an excessive inflammatory response characterised by blood vessel dilation, production of pro-inflammatory molecules, cytokine release, and recruitment of leukocytes. Severe inflammatory response may mature into a systemic response known as anaphylaxis.
Myopathies
Inflammatory myopathies are caused by the immune system inappropriately attacking components of muscle, leading to signs of muscle inflammation. They may occur in conjunction with other immune disorders, such as systemic sclerosis, and include dermatomyositis, polymyositis, and inclusion body myositis.
Leukocyte defects
Due to the central role of leukocytes in the development and propagation of inflammation, defects in leukocyte functionality often result in a decreased capacity for inflammatory defense with subsequent vulnerability to infection. Dysfunctional leukocytes may be unable to correctly bind to blood vessels due to surface receptor mutations, digest bacteria (Chédiak–Higashi syndrome), or produce microbicides (chronic granulomatous disease). In addition, diseases affecting the bone marrow may result in abnormal or few leukocytes.
Pharmacological
Certain drugs or exogenous chemical compounds are known to affect inflammation. Vitamin A deficiency, for example, causes an increase in inflammatory responses, and anti-inflammatory drugs work specifically by inhibiting the enzymes that produce inflammatory eicosanoids. Additionally, certain illicit drugs such as cocaine and ecstasy may exert some of their detrimental effects by activating transcription factors intimately involved with inflammation (e.g. NF-κB).
Cancer
Inflammation orchestrates the microenvironment around tumours, contributing to proliferation, survival and migration. Cancer cells use selectins, chemokines and their receptors for invasion, migration and metastasis. On the other hand, many cells of the immune system contribute to cancer immunology, suppressing cancer.
Molecular intersection between receptors of steroid hormones, which have important effects on cellular development, and transcription factors that play key roles in inflammation, such as NF-κB, may mediate some of the most critical effects of inflammatory stimuli on cancer cells. This capacity of a mediator of inflammation to influence the effects of steroid hormones in cells is very likely to affect carcinogenesis. On the other hand, due to the modular nature of many steroid hormone receptors, this interaction may offer ways to interfere with cancer progression, through targeting of a specific protein domain in a specific cell type. Such an approach may limit side effects that are unrelated to the tumor of interest, and may help preserve vital homeostatic functions and developmental processes in the organism.
Some evidence from 2009 suggests that cancer-related inflammation (CRI) may lead to accumulation of random genetic alterations in cancer cells.
Role in cancer
In 1863, Rudolf Virchow hypothesized that the origin of cancer was at sites of chronic inflammation. As of 2012, chronic inflammation was estimated to contribute to approximately 15% to 25% of human cancers.
Mediators and DNA damage in cancer
An inflammatory mediator is a messenger that acts on blood vessels and/or cells to promote an inflammatory response. Inflammatory mediators that contribute to neoplasia include prostaglandins, inflammatory cytokines such as IL-1β, TNF-α, IL-6 and IL-15 and chemokines such as IL-8 and GRO-alpha. These inflammatory mediators, and others, orchestrate an environment that fosters proliferation and survival.
Inflammation also causes DNA damage due to the induction of reactive oxygen species (ROS) by various intracellular inflammatory mediators. In addition, leukocytes and other phagocytic cells attracted to the site of inflammation induce DNA damage in proliferating cells through their generation of ROS and reactive nitrogen species (RNS). ROS and RNS are normally produced by these cells to fight infection. ROS alone cause more than 20 types of DNA damage. Oxidative DNA damage causes both mutations and epigenetic alterations. RNS also cause mutagenic DNA damage.
A normal cell may undergo carcinogenesis to become a cancer cell if it is frequently subjected to DNA damage during long periods of chronic inflammation. DNA damage may cause genetic mutations due to inaccurate repair. In addition, mistakes in the DNA repair process may cause epigenetic alterations. Mutations and epigenetic alterations that are replicated and provide a selective advantage during somatic cell proliferation may be carcinogenic.
Genome-wide analyses of human cancer tissues reveal that a single typical cancer cell may possess roughly 100 mutations in coding regions, 10–20 of which are "driver mutations" that contribute to cancer development. However, chronic inflammation also causes epigenetic changes such as DNA methylations, that are often more common than mutations. Typically, several hundreds to thousands of genes are methylated in a cancer cell (see DNA methylation in cancer). Sites of oxidative damage in chromatin can recruit complexes that contain DNA methyltransferases (DNMTs), a histone deacetylase (SIRT1), and a histone methyltransferase (EZH2), and thus induce DNA methylation. DNA methylation of a CpG island in a promoter region may cause silencing of its downstream gene (see CpG site and regulation of transcription in cancer). DNA repair genes, in particular, are frequently inactivated by methylation in various cancers (see hypermethylation of DNA repair genes in cancer). A 2018 report evaluated the relative importance of mutations and epigenetic alterations in progression to two different types of cancer. This report showed that epigenetic alterations were much more important than mutations in generating gastric cancers (associated with inflammation). However, mutations and epigenetic alterations were of roughly equal importance in generating esophageal squamous cell cancers (associated with tobacco chemicals and acetaldehyde, a product of alcohol metabolism).
HIV and AIDS
It has long been recognized that infection with HIV is characterized not only by development of profound immunodeficiency but also by sustained inflammation and immune activation. A substantial body of evidence implicates chronic inflammation as a critical driver of immune dysfunction, premature appearance of aging-related diseases, and immune deficiency. Many now regard HIV infection not only as an evolving virus-induced immunodeficiency, but also as a chronic inflammatory disease. Animal studies also support the relationship between immune activation and progressive cellular immune deficiency: SIVsm infection of its natural nonhuman primate host, the sooty mangabey, causes high-level viral replication but limited evidence of disease. This lack of pathogenicity is accompanied by a lack of inflammation, immune activation and cellular proliferation. In sharp contrast, experimental SIVsm infection of rhesus macaques produces immune activation and AIDS-like disease with many parallels to human HIV infection.
Delineating how CD4 T cells are depleted and how chronic inflammation and immune activation are induced lies at the heart of understanding HIV pathogenesis, one of the top priorities for HIV research identified by the Office of AIDS Research, National Institutes of Health. Recent studies demonstrated that caspase-1-mediated pyroptosis, a highly inflammatory form of programmed cell death, drives CD4 T-cell depletion and inflammation by HIV. These are the two signature events that propel HIV disease progression to AIDS. Pyroptosis appears to create a pathogenic vicious cycle in which dying CD4 T cells and other immune cells (including macrophages and neutrophils) release inflammatory signals that recruit more cells into the infected lymphoid tissues to die. The feed-forward nature of this inflammatory response produces chronic inflammation and tissue injury. Identifying pyroptosis as the predominant mechanism that causes CD4 T-cell depletion and chronic inflammation provides novel therapeutic opportunities, namely inhibition of caspase-1, which controls the pyroptotic pathway. In this regard, pyroptosis of CD4 T cells and secretion of pro-inflammatory cytokines such as IL-1β and IL-18 can be blocked in HIV-infected human lymphoid tissues by addition of the caspase-1 inhibitor VX-765, which has already proven to be safe and well tolerated in phase II human clinical trials. These findings could propel development of an entirely new class of "anti-AIDS" therapies that act by targeting the host rather than the virus. Such agents would almost certainly be used in combination with ART. By promoting "tolerance" of the virus instead of suppressing its replication, VX-765 or related drugs may mimic the evolutionary solutions occurring in multiple monkey hosts (e.g. the sooty mangabey) infected with species-specific lentiviruses that have led to a lack of disease, no decline in CD4 T-cell counts, and no chronic inflammation.
Resolution
The inflammatory response must be actively terminated when no longer needed to prevent unnecessary "bystander" damage to tissues. Failure to do so results in chronic inflammation, and cellular destruction. Resolution of inflammation occurs by different mechanisms in different tissues.
Mechanisms that serve to terminate inflammation include:
Short half-life of inflammatory mediators in vivo
Production and release of transforming growth factor beta (TGF-β) from macrophages
Production and release of interleukin 10 (IL-10)
Production of anti-inflammatory lipoxins
Upregulation of anti-inflammatory molecules such as the interleukin 1 receptor antagonist
Connection to depression
There is evidence for a link between inflammation and depression. Inflammatory processes can be triggered by negative cognitions or their consequences, such as stress, violence, or deprivation. Thus, negative cognitions can cause inflammation that can, in turn, lead to depression. A 2019 meta-analysis found that chronic inflammation is associated with a 30% increased risk of developing major depressive disorder.
In addition, there is increasing evidence that inflammation can cause depression because of the increase of cytokines, setting the brain into a "sickness mode".
Classical symptoms of being physically sick, such as lethargy, show a large overlap with behaviors that characterize depression. Levels of cytokines tend to increase sharply during the depressive episodes of people with bipolar disorder and drop off during remission. Furthermore, clinical trials have shown that anti-inflammatory medicines taken in addition to antidepressants not only significantly improve symptoms but also increase the proportion of subjects responding positively to treatment.
Inflammation that leads to serious depression can be caused by common infections, such as those caused by viruses, bacteria, or even parasites.
Connection to delirium
There is evidence for a link between inflammation and delirium, based on the results of a longitudinal study investigating C-reactive protein (CRP) levels in COVID-19 patients.
Systemic effects
An infectious organism can escape the confines of the immediate tissue via the circulatory system or lymphatic system, where it may spread to other parts of the body. If an organism is not contained by the actions of acute inflammation, it may gain access to the lymphatic system via nearby lymph vessels. An infection of the lymph vessels is known as lymphangitis, and infection of a lymph node is known as lymphadenitis. When lymph nodes cannot destroy all pathogens, the infection spreads further. A pathogen can gain access to the bloodstream through lymphatic drainage into the circulatory system.
When inflammation overwhelms the host, systemic inflammatory response syndrome is diagnosed. When it is due to infection, the term sepsis is applied, with bacteremia referring specifically to bacterial sepsis and viremia to viral sepsis. Vasodilation and organ dysfunction are serious problems associated with widespread infection that may lead to septic shock and death.
Acute-phase proteins
Inflammation also is characterized by high systemic levels of acute-phase proteins. In acute inflammation, these proteins prove beneficial; however, in chronic inflammation, they can contribute to amyloidosis. These proteins include C-reactive protein, serum amyloid A, and serum amyloid P, which cause a range of systemic effects including:
Fever
Increased blood pressure
Decreased sweating
Malaise
Loss of appetite
Somnolence
Leukocyte numbers
Inflammation often affects the numbers of leukocytes present in the body:
Leukocytosis is often seen during inflammation induced by infection, where it results in a large increase in the number of leukocytes in the blood, especially immature cells. Leukocyte numbers usually increase to between 15,000 and 20,000 cells per microliter, but in extreme cases can approach 100,000 cells per microliter. Bacterial infection usually results in an increase of neutrophils, creating neutrophilia, whereas diseases such as asthma, hay fever, and parasite infestation result in an increase in eosinophils, creating eosinophilia.
Leukopenia can be induced by certain infections and diseases, including viral infection, Rickettsia infection, some protozoa, tuberculosis, and some cancers.
Interleukins and obesity
With the discovery of interleukins (IL), the concept of systemic inflammation developed. Although the processes involved are identical to tissue inflammation, systemic inflammation is not confined to a particular tissue but involves the endothelium and other organ systems.
Chronic inflammation is widely observed in obesity. Obese people commonly have many elevated markers of inflammation, including:
IL-6 (interleukin-6)
TNF-α (tumor necrosis factor alpha)
CRP (C-reactive protein)
Low-grade chronic inflammation is characterized by a two- to threefold increase in the systemic concentrations of cytokines such as TNF-α, IL-6, and CRP. Waist circumference correlates significantly with systemic inflammatory response.
Loss of white adipose tissue reduces levels of inflammation markers. As of 2017 the association of systemic inflammation with insulin resistance and type 2 diabetes, and with atherosclerosis was under preliminary research, although rigorous clinical trials had not been conducted to confirm such relationships.
C-reactive protein (CRP) is generated at a higher level in obese people, and may increase the risk for cardiovascular diseases.
Outcomes
The outcome in a particular circumstance will be determined by the tissue in which the injury has occurred and the injurious agent that is causing it. The possible outcomes of inflammation are:
Resolution: The complete restoration of the inflamed tissue back to a normal status. Inflammatory measures such as vasodilation, chemical production, and leukocyte infiltration cease, and damaged parenchymal cells regenerate. Such is usually the outcome when limited or short-lived inflammation has occurred.
Fibrosis: Areas with large amounts of tissue destruction, or damage in tissues unable to regenerate, cannot be completely restored by the body. Fibrous scarring occurs in these areas of damage, forming a scar composed primarily of collagen. The scar will not contain any specialized structures, such as parenchymal cells, hence functional impairment may occur.
Abscess formation: A cavity is formed containing pus, an opaque liquid containing dead white blood cells and bacteria with general debris from destroyed cells.
Chronic inflammation: In acute inflammation, if the injurious agent persists, then chronic inflammation will ensue. This process, marked by inflammation lasting many days, months or even years, may lead to the formation of a chronic wound. Chronic inflammation is characterised by the dominating presence of macrophages in the injured tissue. These cells are powerful defensive agents of the body, but the toxins they release, including reactive oxygen species, are injurious to the organism's own tissues as well as invading agents. As a consequence, chronic inflammation is almost always accompanied by tissue destruction.
Examples
Inflammation is usually indicated by adding the suffix "itis", as shown below. However, some conditions, such as asthma and pneumonia, do not follow this convention. More examples are available at List of types of inflammation.
Breast cancer
Breast cancer is a cancer that develops from breast tissue. Signs of breast cancer may include a lump in the breast, a change in breast shape, dimpling of the skin, milk rejection, fluid coming from the nipple, a newly inverted nipple, or a red or scaly patch of skin. In those with distant spread of the disease, there may be bone pain, swollen lymph nodes, shortness of breath, or yellow skin.
Risk factors for developing breast cancer include obesity, a lack of physical exercise, alcohol consumption, hormone replacement therapy during menopause, ionizing radiation, an early age at first menstruation, having children late in life (or not at all), older age, having a prior history of breast cancer, and a family history of breast cancer. About five to ten percent of cases are the result of an inherited genetic predisposition, including BRCA mutations among others. Breast cancer most commonly develops in cells from the lining of milk ducts and the lobules that supply these ducts with milk. Cancers developing from the ducts are known as ductal carcinomas, while those developing from lobules are known as lobular carcinomas. There are more than 18 other sub-types of breast cancer. Some, such as ductal carcinoma in situ, develop from pre-invasive lesions. The diagnosis of breast cancer is confirmed by taking a biopsy of the concerning tissue. Once the diagnosis is made, further tests are carried out to determine if the cancer has spread beyond the breast and which treatments are most likely to be effective.
Breast cancer screening can be instrumental, given that the size of a breast cancer and its spread are among the most critical factors in predicting the prognosis of the disease. Breast cancers found during screening are typically smaller and less likely to have spread outside the breast. A 2013 Cochrane review found that it was unclear whether mammographic screening does more harm than good, in that a large proportion of women who test positive turn out not to have the disease. A 2009 review for the US Preventive Services Task Force found evidence of benefit in those 40 to 70 years of age, and the organization recommends screening every two years in women 50 to 74 years of age. The medications tamoxifen or raloxifene may be used in an effort to prevent breast cancer in those who are at high risk of developing it. Surgical removal of both breasts is another preventive measure in some high risk women. In those who have been diagnosed with cancer, a number of treatments may be used, including surgery, radiation therapy, chemotherapy, hormonal therapy, and targeted therapy. Types of surgery vary from breast-conserving surgery to mastectomy. Breast reconstruction may take place at the time of surgery or at a later date. In those in whom the cancer has spread to other parts of the body, treatments are mostly aimed at improving quality of life and comfort.
Outcomes for breast cancer vary depending on the cancer type, the extent of disease, and the person's age. The five-year survival rates in England and the United States are between 80 and 90%. In developing countries, five-year survival rates are lower. Worldwide, breast cancer is the leading type of cancer in women, accounting for 25% of all cases. In 2018, it resulted in two million new cases and 627,000 deaths. It is more common in developed countries, and is more than 100 times more common in women than in men. For transgender individuals on gender-affirming hormone therapy, breast cancer is 5 times more common in cisgender women than in transgender men, and 46 times more common in transgender women than in cisgender men.
Signs and symptoms
Most people with breast cancer have no symptoms at the time of diagnosis; their tumor is detected by a breast cancer screening test. For those who do have symptoms, a new lump in the breast is most common. Most breast lumps are not cancer, though lumps that are painless, hard, and have irregular edges are more likely to be cancerous. Other symptoms include swelling or pain in the breast; dimpling, thickening, redness, or dryness of the breast skin; and nipple pain or inversion. Some may experience unusual discharge from the breasts, or swelling of the lymph nodes under the arms or along the collar bone.
Some less common forms of breast cancer cause distinctive symptoms. Up to 5% of people with breast cancer have inflammatory breast cancer, where cancer cells block the lymph vessels of one breast, causing the breast to substantially swell and redden over three to six months. Up to 3% of people with breast cancer have Paget's disease of the breast, with eczema-like red, scaly irritation on the nipple and areola.
Advanced tumors can spread (metastasize) beyond the breast, most commonly to the bones, liver, lungs, and brain. Bone metastases can cause swelling, progressive bone pain, and weakening of the bones that leads to fractures. Liver metastases can cause abdominal pain, nausea, vomiting, and skin problems – rash, itchy skin, or yellowing of the skin (jaundice). Those with lung metastases experience chest pain, shortness of breath, and regular coughing. Metastases in the brain can cause persistent headache, seizures, nausea, vomiting, and disruptions to the affected person's speech, vision, memory, and regular behavior.
Screening
Breast cancer screening refers to testing otherwise-healthy women for breast cancer in an attempt to diagnose breast tumors early when treatments are more successful. The most common screening test for breast cancer is low-dose X-ray imaging of the breast, called mammography. Each breast is pressed between two plates and imaged. Tumors can appear unusually dense within the breast, distort the shape of surrounding tissue, or cause small dense flecks called microcalcifications. Radiologists generally report mammogram results on a standardized scale – the six-point Breast Imaging-Reporting and Data System (BI-RADS) is the most common globally – where a higher number corresponds to a greater risk of a cancerous tumor.
A mammogram also reveals breast density; dense breast tissue appears opaque on a mammogram and can obscure tumors. BI-RADS categorizes breast density into four categories. Mammography can detect around 90% of breast tumors in the least dense breasts (called "fatty" breasts), but just 60% in the most dense breasts (called "extremely dense"). Women with particularly dense breasts can instead be screened by ultrasound, magnetic resonance imaging (MRI), or tomosynthesis, all of which more sensitively detect breast tumors.
Regular screening mammography reduces breast cancer deaths by at least 20%. Most medical guidelines recommend annual screening mammograms for women aged 50–70. Screening also reduces breast cancer mortality in women aged 40–49, and some guidelines recommend annual screening in this age group as well. For women at high risk for developing breast cancer, most guidelines recommend adding MRI screening to mammography, to increase the chance of detecting potentially dangerous tumors. Regularly feeling one's own breasts for lumps or other abnormalities, called breast self-examination, does not reduce a person's chance of dying from breast cancer. Clinical breast exams, where a health professional feels the breasts for abnormalities, are common; whether they reduce the risk of dying from breast cancer is not known. Regular breast cancer screening is commonplace in most wealthy nations, but remains uncommon in the world's poorer countries.
Still, mammography has its disadvantages. Screening mammograms miss about 1 in 8 breast cancers overall, and they can also give false-positive results, causing extra anxiety and leading patients to undergo unnecessary additional exams, such as biopsies.
Diagnosis
Those who have a suspected tumor from a mammogram or physical exam first undergo additional imaging – typically a second "diagnostic" mammogram and ultrasound – to confirm its presence and location. A biopsy is then taken of the suspected tumor. Breast biopsy is typically done by core needle biopsy, with a hollow needle used to collect tissue from the area of interest. Suspected tumors that appear to be filled with fluid are often instead sampled by fine-needle aspiration. Around 10–20% of breast biopsies are positive for cancer. Most biopsied breast masses are instead caused by fibrocystic breast changes, a term that encompasses benign pockets of fluid, cell growth, or fibrous tissue.
Classification
Breast cancers are classified by several grading systems, each of which assesses a tumor characteristic that impacts a person's prognosis. First, a tumor is classified by the tissue it arises from, or the appearance of the tumor tissue under a microscope. Most breast cancers (85%) are ductal carcinoma – derived from the lining of the mammary ducts. 10% are lobular carcinoma – derived from the mammary lobes – or mixed ductal/lobular carcinoma. Rarer types include mucinous carcinoma (around 2.5% of cases; surrounded by mucin), tubular carcinoma (1.5%; full of small tubes of epithelial cells), medullary carcinoma (1%; resembling "medullary" or middle-layer tissue), and papillary carcinoma (1%; covered in finger-like growths). Oftentimes a biopsy reveals cells that are cancerous but have not yet spread beyond their original location. This condition, called carcinoma in situ, is often considered "precancerous" rather than a dangerous cancer itself. Those with ductal carcinoma in situ (in the mammary ducts) are at increased risk for developing true invasive breast cancer – around a third develop breast cancer within five years. Lobular carcinoma in situ (in the mammary lobes) rarely causes a noticeable lump, and is often found incidentally during a biopsy for another reason. It is commonly spread throughout both breasts. Those with lobular carcinoma in situ also have an increased risk of developing breast cancer – around 1% develop breast cancer each year. However, their risk of dying of breast cancer is no higher than the rest of the population.
Invasive tumor tissue is assigned a grade based on how distinct it appears from healthy breast. Breast tumors are graded on three features: the proportion of cancer cells that form tubules, the appearance of the cell nucleus, and how many cells are actively replicating. Each feature is scored on a three-point scale, with a higher score indicating less healthy looking tissue. A grade is assigned based on the sum of the three scores. Combined scores of 3, 4, or 5 represent grade 1, a slower-growing cancer. Scores of 6 or 7 represent grade 2. Scores of 8 or 9 represent grade 3, a faster-growing, more aggressive cancer.
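The grading rule above is a simple arithmetic mapping from three subscores to a grade. As an illustration only (a hypothetical helper, not a clinical tool), it could be sketched as:

```python
def tumor_grade(tubule_score: int, nuclear_score: int, mitotic_score: int) -> int:
    """Map three 1-3 subscores to an overall grade (1-3).

    Illustrative sketch of the rule described above:
    combined score 3-5 -> grade 1, 6-7 -> grade 2, 8-9 -> grade 3.
    """
    for score in (tubule_score, nuclear_score, mitotic_score):
        if score not in (1, 2, 3):
            raise ValueError("each subscore must be 1, 2, or 3")
    total = tubule_score + nuclear_score + mitotic_score
    if total <= 5:
        return 1
    if total <= 7:
        return 2
    return 3
```

For example, subscores of 2, 2, and 3 sum to 7, which falls in the grade 2 band.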
In addition to grading, tumor biopsy samples are tested by immunohistochemistry to determine if the tissue contains the proteins estrogen receptor (ER), progesterone receptor (PR), or human epidermal growth factor receptor 2 (HER2). Tumors containing either ER or PR are called "hormone receptor-positive" and can be treated with hormone therapies. Around 15 to 20% of tumors contain HER2; these can be treated with HER2-targeted therapies. The remainder that do not contain ER, PR, or HER2 are called "triple-negative" tumors, and tend to grow more quickly than other breast cancer types.
After the tumor is evaluated, the breast cancer case is staged using the American Joint Committee on Cancer and Union for International Cancer Control's TNM staging system. Scores are assigned based on characteristics of the tumor (T), lymph nodes (N), and any metastases (M). T scores are determined by the size and extent of the tumor. Tumors less than 2 centimeters (cm) across are designated T1. Tumors 2–5 cm across are T2. A tumor greater than 5 cm across is T3. Tumors that extend to the chest wall or to the skin are designated T4. N scores are based on whether the cancer has spread to nearby lymph nodes. N0 indicates no spread to the lymph nodes. N1 is for tumors that have spread to the closest axillary lymph nodes (called "level I" and "level II" axillary lymph nodes, in the armpit). N2 is for spread to the internal mammary lymph nodes (on the other side of the breast, near the chest center), or for axillary lymph nodes that appear attached to each other or to the tissue around them (a sign of more severely affected tissue). N3 designates tumors that have spread to the highest axillary lymph nodes (called "level III" axillary lymph nodes, above the armpit near the shoulder), to the supraclavicular lymph nodes (along the neck), or to both the axillary and internal mammary lymph nodes. The M score is binary: M0 indicates no evidence of metastases; M1 indicates metastases have been detected.
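The T and M components described above follow simple threshold rules. A rough sketch (illustrative only; real TNM staging adds further subcategories such as T1a/b/c and T4a–d) might look like:

```python
def t_category(size_cm: float, extends_to_chest_wall_or_skin: bool = False) -> str:
    """Illustrative T-category lookup from tumor size and extent,
    following the thresholds described in the text."""
    if extends_to_chest_wall_or_skin:
        return "T4"
    if size_cm < 2:
        return "T1"
    if size_cm <= 5:
        return "T2"
    return "T3"

def m_category(metastases_detected: bool) -> str:
    """Binary M score: M0 with no evidence of metastases, M1 otherwise."""
    return "M1" if metastases_detected else "M0"
```

So a 3 cm tumor confined to the breast would map to T2, and any detected distant spread to M1, regardless of tumor size.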
TNM scores are then combined with tumor grades and ER/PR/HER2 status to calculate a cancer case's "prognostic stage group". Stage groups range from I (best prognosis) to IV (worst prognosis), with groups I, II, and III further divided into subgroups IA, IB, IIA, IIB, IIIA, IIIB, and IIIC. In general, tumors of higher T and N scores and higher grades are assigned higher stage groups. Tumors that are ER, PR, and HER2 positive are assigned a slightly lower stage group than those that are negative. Tumors that have metastasized are stage IV, regardless of the other scored characteristics.
Management
The management of breast cancer depends on the affected person's health, the cancer case's molecular characteristics, and how far the tumor has spread at the time of diagnosis.
Local tumors
Those whose tumors have not spread beyond the breast often undergo surgery to remove the tumor and some surrounding breast tissue. The surgery method is typically chosen to spare as much healthy breast tissue as possible, removing just the tumor (lumpectomy) or a larger part of the breast (partial mastectomy). Those with large or multiple tumors, high genetic risk of subsequent cancers, or who are unable to receive radiation therapy may instead opt for full removal of the affected breast(s) (full mastectomy). To reduce the risk of cancer spreading, women will often have the nearest lymph node removed in a procedure called sentinel lymph node biopsy. Dye is injected near the tumor site, and several hours later the lymph node the dye accumulates in is removed.
After surgery, many undergo radiotherapy to decrease the chance of cancer recurrence. Those who had lumpectomies receive radiation to the whole breast. Those who had a mastectomy and are at elevated risk of tumor spread – tumor greater than five centimeters wide, or cancerous cells in nearby lymph nodes – receive radiation to the mastectomy scar and chest wall. If cancerous cells have spread to nearby lymph nodes, those lymph nodes will be irradiated as well. Radiation is typically given five days per week, for up to seven weeks. Radiotherapy for breast cancer is typically delivered via external beam radiotherapy, where a device focuses radiation beams onto the targeted parts of the body. Alternatively, some undergo brachytherapy, where radioactive material is placed into a device inserted at the surgical site from which the tumor was removed. Fresh radioactive material is added twice a day for five days, then the device is removed. Surgery plus radiation typically eliminates a person's breast tumor. Less than 5% of those treated have their breast tumor grow back. After surgery and radiation, the breast can be surgically reconstructed, either by adding a breast implant or transferring excess tissue from another part of the body.
Chemotherapy reduces the chance of cancer recurring in the next ten years by around a third. However, 1-2% of those on chemotherapy experience life-threatening or permanent side effects. To balance these benefits and risks, chemotherapy is typically offered to those with a higher risk of cancer recurrence. There is no established risk cutoff for offering chemotherapy; determining who should receive chemotherapy is controversial. Chemotherapy drugs are typically given in two- to three-week cycles, with periods of drug treatment interspersed with rest periods to recover from the therapies' side effects. Four to six cycles are given in total. Many classes of chemotherapeutic agents are effective for breast cancer treatment, including the DNA alkylating drugs (cyclophosphamide), anthracyclines (doxorubicin and epirubicin), antimetabolites (fluorouracil, capecitabine, and methotrexate), taxanes (docetaxel and paclitaxel), and platinum-based chemotherapies (cisplatin and carboplatin).
Chemotherapies from different classes are typically given in combination, with particular chemotherapy drugs selected based on the affected person's health and the different chemotherapeutics' side effects. Anthracyclines and cyclophosphamide cause leukemia in up to 1% of those treated. Anthracyclines also cause congestive heart failure in around 1% of people treated. Taxanes cause peripheral neuropathy, which is permanent in up to 5% of those treated. The same chemotherapy agents can be given before surgery – called neoadjuvant therapy – to shrink tumors, making them easier to safely remove.
For those whose tumors are HER2-positive, adding the HER2-targeted antibody trastuzumab to chemotherapy reduces the chance of cancer recurrence and death by at least a third. Trastuzumab is given weekly or every three weeks for twelve months. Adding a second HER2-targeted antibody, pertuzumab, slightly enhances treatment efficacy. In rare cases, trastuzumab can disrupt heart function, and so it is typically not given in conjunction with anthracyclines, which can also damage the heart.
After their chemotherapy course, those whose tumors are ER-positive or PR-positive benefit from endocrine therapy, which reduces the levels of estrogens and progesterones that hormone receptor-positive breast cancers require to survive. Tamoxifen treatment blocks the ER in the breast and some other tissues, and reduces the risk of breast cancer death by around 40% over the next ten years. Chemically blocking estrogen production with GnRH-targeted drugs (goserelin, leuprolide, or triptorelin) and aromatase inhibitors (anastrozole, letrozole, or exemestane) slightly improves survival, but has more severe side effects. Side effects of estrogen depletion include hot flashes, vaginal discomfort, and muscle and joint pain. Endocrine therapy is typically recommended for at least five years after surgery and chemotherapy, and is sometimes continued for 10 years or longer.
Women with breast cancer who had a lumpectomy or a mastectomy and kept their other breast have similar survival rates to those who had a double mastectomy. There seems to be no survival advantage to removing the other breast, with only a 7% chance of cancer occurring in the other breast over 20 years.
Metastatic disease
For around 1 in 5 people treated for localized breast cancer, their tumors eventually spread to distant body sites – most commonly the nearby bones (67% of cases), liver (41%), lungs (37%), brain (13%), and peritoneum (10%). Those with metastatic disease can receive further chemotherapy, typically starting with capecitabine, an anthracycline, or a taxane. As one chemotherapy drug fails to control the cancer, another is started. In addition to the chemotherapeutic drugs used for localized cancer, gemcitabine, vinorelbine, etoposide, and epothilones are sometimes effective. Those with bone metastases benefit from regular infusion of the bone-strengthening agents denosumab and the bisphosphonates; infusion every three months reduces the chance of bone pain, fractures, and hypercalcemia.
Up to 70% of those with ER-positive metastatic breast cancer benefit from additional endocrine therapy. Therapy options include those used in localized cancer, plus toremifene and fulvestrant, often used in combination with CDK4/6 inhibitors (palbociclib, ribociclib, or abemaciclib). When one endocrine therapy fails, most will benefit from transitioning to a second one. Some respond to a third sequential therapy as well. Adding an mTOR inhibitor, everolimus, can further slow the tumors' progression.
Those with HER2-positive metastatic disease can benefit from continued use of trastuzumab, alone, in combination with pertuzumab, or in combination with chemotherapy. Those whose tumors continue to progress on trastuzumab benefit from HER2-targeted antibody drug conjugates (HER2 antibodies linked to chemotherapy drugs) trastuzumab emtansine or trastuzumab deruxtecan. The HER2-targeted antibody margetuximab can also prolong survival, as can HER2 inhibitors lapatinib, neratinib, or tucatinib.
Certain therapies are targeted at those whose tumors have particular gene mutations: alpelisib or capivasertib for mutations that activate the protein PIK3CA; PARP inhibitors (olaparib and talazoparib) for mutations that inactivate BRCA1 or BRCA2; the immune checkpoint inhibitor antibody atezolizumab for tumors that express PD-L1; and the similar immunotherapy pembrolizumab for tumors with mutations in various DNA repair pathways.
Supportive care
Many breast cancer therapies have side effects that can be alleviated with appropriate supportive care. Chemotherapy causes hair loss, nausea, and vomiting in nearly everyone who receives it. Antiemetic drugs can alleviate nausea and vomiting; cooling the scalp with a cold cap during chemotherapy treatments may reduce hair loss. Many complain of cognitive issues during chemotherapy treatment. These usually resolve within a few months of the end of chemotherapy treatment. Those on endocrine therapy often experience hot flashes, muscle and joint pain, and vaginal dryness/discomfort that can lead to issues having sex. Around half of women have their hot flashes alleviated by taking antidepressants; pain can be treated with physical therapy and nonsteroidal anti-inflammatory drugs; counseling and use of personal lubricants can improve sexual issues.
In women with non-metastatic breast cancer, psychological interventions such as cognitive behavioral therapy can have positive effects on outcomes such as cognitive impairment, anxiety, depression and mood disturbance, and can also improve the quality of life. Physical activity interventions, yoga and meditation may also have beneficial effects on health related quality of life, cognitive impairment, anxiety, fitness and physical activity in women with breast cancer following adjuvant therapy.
In-person and virtual peer support groups for patients and survivors of breast cancer can promote quality of life and companionship based on similar lived experiences. The potential benefits of peer support are particularly impactful for women with breast cancer facing additional unique challenges related to ethnicity and socioeconomic status. Peer support groups tailored to adolescents and young adult women can improve coping strategies against age-specific types of distress associated with breast cancer, including post-traumatic stress disorder and body image issues.
Prognosis
Breast cancer prognosis varies widely depending on how far the tumor has spread at the time of diagnosis. Overall, 91% of women diagnosed with breast cancer survive at least five years from diagnosis. Those whose tumor(s) are completely confined to the breast (nearly two thirds of cases) have the best prognoses – over 99% survive at least five years. Those whose tumors have metastasized to distant sites have relatively poor prognoses – 31% survive at least five years from the time of diagnosis. Triple-negative breast cancer (up to 15% of cases) and inflammatory breast cancer (up to 5% of cases) are particularly aggressive and have relatively poor prognoses. Those with triple-negative breast cancer have an overall five-year survival rate of 77% – 91% for those whose tumors are confined to the breast; 12% for those with metastases. Those with inflammatory breast cancer are diagnosed after the cancer has already spread to the skin of the breast. They have an overall five-year survival rate of 39%; 19% for those with metastases. The relatively rare tumors with tubular, mucinous, or medullary growth tend to have better prognoses.
In addition to the factors that influence cancer staging, a person's age can also impact prognosis. Breast cancer before age 35 is rare, and is more likely to be associated with genetic predisposition to aggressive cancer. Conversely, breast cancer in those aged over 75 is associated with poorer prognosis.
Risk factors
Hormonal
Up to 80% of the variation in breast cancer frequency across countries is due to differences in reproductive history that impact a woman's levels of female sex hormones (estrogens). Women who begin menstruating earlier (before age 12) or who undergo menopause later (after 51) are at increased risk of developing breast cancer. Women who give birth early in life are protected from breast cancer – someone who gives birth as a teenager has around a 70% lower risk of developing breast cancer than someone who does not have children. That protection wanes with higher maternal age at first birth, and disappears completely by age 35. Breastfeeding also reduces one's chance of developing breast cancer, with an approximately 4% reduction in breast cancer risk for every 12 months of breastfeeding experience. Those who lack functioning ovaries have reduced levels of estrogens, and therefore greatly reduced breast cancer risk.
Hormone replacement therapy for treatment of menopause symptoms can also increase a woman's risk of developing breast cancer, though the effect depends on the type and duration of therapy. Combined progesterone/estrogen therapy increases breast cancer risk – approximately doubling one's risk after 6–7 years of treatment (though the same therapy decreases the risk of colorectal cancer). Hormone treatment with estrogen alone has no effect on breast cancer risk, but increases one's risk of developing endometrial cancer, and therefore is only given to women who have undergone hysterectomies.
In the 1980s, the abortion–breast cancer hypothesis posited that induced abortion increased the risk of developing breast cancer. This hypothesis was the subject of extensive scientific inquiry, which concluded that neither miscarriages nor abortions are associated with a heightened risk for breast cancer.
The use of hormonal birth control does not cause breast cancer for most women; if it has an effect, it is small (on the order of 0.01% per user–year), temporary, and offset by the users' significantly reduced risk of ovarian and endometrial cancers. Among those with a family history of breast cancer, use of modern oral contraceptives does not appear to affect the risk of breast cancer.
Lifestyle
Drinking alcoholic beverages increases the risk of breast cancer, even among very light drinkers (women drinking less than half of one alcoholic drink per day). The risk is highest among heavy drinkers. Globally, about one in ten cases of breast cancer is caused by women drinking alcoholic beverages. Alcohol use is among the most common modifiable risk factors.
Obesity and diabetes increase the risk of breast cancer. A high body mass index (BMI) causes 7% of breast cancers, while diabetes is responsible for 2%. At the same time, the correlation between obesity and breast cancer is far from linear. Studies show that those who gain weight rapidly in adulthood are at higher risk than those who have been overweight since childhood. Likewise, excess fat around the midriff appears to confer a higher risk than excess weight carried in the lower body. Dietary factors that may increase risk include a high-fat diet and obesity-related high cholesterol levels.
Dietary iodine deficiency may also play a role in the development of breast cancer.
Smoking tobacco appears to increase the risk of breast cancer, with the greater the amount smoked and the earlier in life that smoking began, the higher the risk. In those who are long-term smokers, the relative risk is increased by 35% to 50%.
A lack of physical activity has been linked to about 10% of cases. Sitting regularly for prolonged periods is associated with higher mortality from breast cancer. The risk is not negated by regular exercise, though it is lowered.
Actions to prevent breast cancer include not drinking alcoholic beverages, maintaining a healthy body composition, avoiding smoking and eating healthy food. Combining all of these (leading the healthiest possible lifestyle) would make almost a quarter of breast cancer cases worldwide preventable. The remaining three-quarters of breast cancer cases cannot be prevented through lifestyle changes.
Other risk factors include circadian disruptions related to shift-work and routine late-night eating. A number of chemicals have also been linked, including polychlorinated biphenyls, polycyclic aromatic hydrocarbons, and organic solvents. Although the radiation from mammography is a low dose, it is estimated that yearly screening from 40 to 80 years of age will cause approximately 225 cases of fatal breast cancer per million women screened.
Genetics
Around 10% of those with breast cancer have a family history of the disease or genetic factors that put them at higher risk. Women who have had a first-degree relative (mother or sister) diagnosed with breast cancer are at a 30–50% increased risk of being diagnosed with breast cancer themselves. In those with zero, one or two affected relatives, the risk of breast cancer before the age of 80 is 7.8%, 13.3%, and 21.1% with a subsequent mortality from the disease of 2.3%, 4.2%, and 7.6% respectively.
Women with certain genetic variants are at higher risk of developing breast cancer. The most well known are variants of the BRCA genes BRCA1 and BRCA2. Women with pathogenic variants in either gene have around a 70% chance of developing breast cancer in their lifetime, as well as an approximately 33% chance of developing ovarian cancer. Pathogenic variants in PALB2 – a gene whose product directly interacts with that of BRCA2 – also increase breast cancer risk; a woman with such a variant has around a 50% increased risk of developing breast cancer. Variants in other tumor suppressor genes can also increase one's risk of developing breast cancer, namely p53 (which causes Li–Fraumeni syndrome) and PTEN (which causes Cowden syndrome).
Medical conditions
Breast changes like atypical ductal hyperplasia, found in benign breast conditions such as fibrocystic breast changes, are correlated with an increased risk of breast cancer.
Diabetes mellitus may also increase the risk of breast cancer. Autoimmune diseases such as lupus erythematosus also appear to raise the risk.
Women whose breasts have been exposed to substantial radiation doses before the age of 30 – typically due to repeated chest fluoroscopies or treatment for Hodgkin lymphoma – are at increased risk for developing breast cancer. Radioactive iodine therapy (used to treat thyroid disease) and radiation exposures after age 30 are not associated with breast cancer risk.
Pathophysiology
The major causes of sporadic breast cancer are associated with hormone levels. Breast cancer is promoted by estrogen, the hormone that drives breast development during puberty, menstrual cycles, and pregnancy. The imbalance between estrogen and progesterone across the menstrual phases causes cell proliferation, and oxidative metabolites of estrogen can increase DNA damage and mutations. Repeated cycling, combined with impaired repair processes, can transform a normal cell into a pre-malignant and eventually a malignant cell through mutation. During the pre-malignant stage, estrogen can activate high proliferation of stromal cells to support the development of breast cancer. Upon ligand binding, the ER can regulate gene expression by interacting with estrogen response elements within the promoters of specific genes. Even in the absence of estrogen, ER expression and activation can be stimulated by extracellular signals. By binding directly to several proteins, including growth factor receptors, the ER can promote the expression of genes related to cell growth and survival.
Breast cancer, like other cancers, occurs because of an interaction between an environmental (external) factor and a genetically susceptible host. Normal cells divide as many times as needed, and stop. They attach to other cells and stay in place in tissues. Cells become cancerous when they lose their ability to stop dividing, to attach to other cells, to stay where they belong, and to die at the proper time.
Normal cells will self-destruct (programmed cell death) when they are no longer needed. Until then, cells are protected from programmed death by several protein clusters and pathways. One of the protective pathways is the PI3K/AKT pathway; another is the RAS/MEK/ERK pathway. Sometimes the genes along these protective pathways are mutated in a way that turns them permanently "on", rendering the cell incapable of self-destructing when it is no longer needed. This is one of the steps that causes cancer in combination with other mutations. Normally, the PTEN protein turns off the PI3K/AKT pathway when the cell is ready for programmed cell death. In some breast cancers, the gene for the PTEN protein is mutated, so the PI3K/AKT pathway is stuck in the "on" position, and the cancer cell does not self-destruct.
Mutations that can lead to breast cancer have been experimentally linked to estrogen exposure. Additionally, G-protein coupled estrogen receptors have been associated with various cancers of the female reproductive system including breast cancer.
Abnormal growth factor signaling in the interaction between stromal cells and epithelial cells can facilitate malignant cell growth. In breast adipose tissue, overexpression of leptin leads to increased cell proliferation and cancer.
Some mutations associated with cancer, such as p53, BRCA1 and BRCA2, occur in mechanisms to correct errors in DNA. The inherited mutation in BRCA1 or BRCA2 genes can interfere with repair of DNA crosslinks and double-strand breaks (known functions of the encoded protein). These carcinogens cause DNA damage such as DNA crosslinks and double-strand breaks that often require repairs by pathways containing BRCA1 and BRCA2.
GATA-3 directly controls the expression of estrogen receptor (ER) and other genes associated with epithelial differentiation, and the loss of GATA-3 leads to loss of differentiation and poor prognosis due to cancer cell invasion and metastasis.
Prevention
Lifestyle
Women can reduce their risk of breast cancer by maintaining a healthy weight, reducing alcohol use, increasing physical activity, and breastfeeding. These modifications might prevent 38% of breast cancers in the US, 42% in the UK, 28% in Brazil, and 20% in China. The benefits with moderate exercise such as brisk walking are seen at all age groups including postmenopausal women. High levels of physical activity reduce the risk of breast cancer by about 14%. Strategies that encourage regular physical activity and reduce obesity could also have other benefits, such as reduced risks of cardiovascular disease and diabetes. A study that included data from 130,957 women of European ancestry found "strong evidence that greater levels of physical activity and less sedentary time are likely to reduce breast cancer risk, with results generally consistent across breast cancer subtypes".
The American Cancer Society and the American Society of Clinical Oncology advised in 2016 that people should eat a diet high in vegetables, fruits, whole grains, and legumes. Eating foods rich in soluble fiber contributes to reducing breast cancer risk. High intake of citrus fruit has been associated with a 10% reduction in the risk of breast cancer. Marine omega-3 polyunsaturated fatty acids appear to reduce the risk. High consumption of soy-based foods may reduce risk.
Preventive surgery
Removal of the breasts before breast cancer develops (called preventive mastectomy) reduces the risk of developing breast cancer by more than 95%. In women genetically predisposed to developing breast cancer, preventive mastectomy reduces their risk of dying from breast cancer. For those at normal risk, preventive mastectomy does not reduce their chance of dying, and so is generally not recommended. Removing the second breast in a person who has breast cancer (contralateral risk-reducing mastectomy or CRRM) may reduce the risk of cancer in the second breast, but it is not clear whether removing the second breast improves the chance of survival. An increasing number of women who test positive for faulty BRCA1 or BRCA2 genes choose to have risk-reducing surgery. The average waiting time for undergoing the procedure is two years, which is much longer than recommended.
Medications
Selective estrogen receptor modulators (SERMs) reduce the risk of breast cancer but increase the risk of thromboembolism and endometrial cancer. There is no overall change in the risk of death. They are thus not recommended for the prevention of breast cancer in women at average risk but it is recommended they be offered for those at high risk and over the age of 35. The benefit of breast cancer reduction continues for at least five years after stopping a course of treatment with these medications. Aromatase inhibitors (such as exemestane and anastrozole) may be more effective than SERMs (such as tamoxifen) at reducing breast cancer risk and they are not associated with an increased risk of endometrial cancer and thromboembolism.
Epidemiology
Breast cancer is the most common invasive cancer in women in most countries, accounting for 30% of cancer cases in women. In 2022, an estimated 2.3 million women were diagnosed with breast cancer, and 670,000 died of the disease. The incidence of breast cancer is rising by around 3% per year, as populations in many countries are getting older.
Rates of breast cancer vary across the world, but generally correlate with wealth. Around 1 in 12 women are diagnosed with breast cancer in wealthier countries, compared to 1 in 27 in lower income countries. Most of that difference is due to differences in menstrual and reproductive histories – women in wealthier countries tend to begin menstruating earlier and have children later, both factors that increase risk of developing breast cancer. People in lower income countries tend to have less access to breast cancer screening and treatments, and so breast cancer death rates tend to be higher. 1 in 71 women die of breast cancer in wealthy countries, while 1 in 48 die of the disease in lower income countries.
Breast cancer predominantly affects women; less than 1% of those with breast cancer are men. Women can develop breast cancer as early as adolescence, but risk increases with age, and 75% of cases are in women over 50 years old. The risk over a woman's lifetime is approximately 1.5% at age 40, 3% at age 50, and more than 4% risk at age 70.
History
Because of its visibility, breast cancer was the form of cancer most often described in ancient documents. Because autopsies were rare, cancers of the internal organs were essentially invisible to ancient medicine. Breast cancer, however, could be felt through the skin, and in its advanced state often developed into fungating lesions: the tumor would become necrotic (die from the inside, causing the tumor to appear to break up) and ulcerate through the skin, weeping fetid, dark fluid.
The oldest discovered evidence of breast cancer is from Egypt and dates back 4200 years, to the Sixth Dynasty. The study of a woman's remains from the necropolis of Qubbet el-Hawa showed the typical destructive damage due to metastatic spread. The Edwin Smith Papyrus describes eight cases of tumors or ulcers of the breast that were treated by cauterization. The writing says about the disease, "There is no treatment." For centuries, physicians described similar cases in their practices, with the same conclusion. Ancient medicine, from the time of the Greeks through the 17th century, was based on humoralism, and thus believed that breast cancer was generally caused by imbalances in the fundamental fluids that controlled the body, especially an excess of black bile. Alternatively it was seen as divine punishment.
Mastectomy for breast cancer was performed at least as early as AD 548, when it was proposed by the court physician Aetios of Amida to Theodora. It was not until doctors achieved greater understanding of the circulatory system in the 17th century that they could link breast cancer's spread to the lymph nodes in the armpit. In the early 18th century the French surgeon Jean Louis Petit performed total mastectomies that included removing the axillary lymph nodes, as he recognized that this reduced recurrence. Petit's work built on the methods of the surgeon Bernard Peyrilhe, who in the 17th century additionally removed the pectoral muscle underlying the breast, as he judged that this greatly improved the prognosis. But poor results and the considerable risk to the patient meant that physicians did not share the opinion of surgeons such as Nicolaes Tulp, who in the 17th century proclaimed "the sole remedy is a timely operation." The eminent surgeon Richard Wiseman documented in the mid-17th century that following 12 mastectomies, two patients died during the operation, eight patients died shortly after the operation from progressive cancer and only two of the 12 patients were cured. Physicians were conservative in the treatment they prescribed in the early stages of breast cancer. Patients were treated with a mixture of detox purges, blood letting and traditional remedies that were supposed to lower acidity, such as the alkaline arsenic.
When in 1664 Anne of Austria was diagnosed with breast cancer, the initial treatment involved compresses saturated with hemlock juice. When the lumps increased the King's physician commenced a treatment with arsenic ointments. The royal patient died in 1666 in atrocious pain. Each failing treatment for breast cancer led to the search for new treatments, spurring a market in remedies that were advertised and sold by quacks, herbalists, chemists and apothecaries. The lack of anesthesia and antiseptics made mastectomy a painful and dangerous ordeal. In the 18th century, a wide variety of anatomical discoveries were accompanied by new theories about the cause and growth of breast cancer. The investigative surgeon John Hunter claimed that neural fluid generated breast cancer. Other surgeons proposed that milk within the mammary ducts led to cancerous growths. Theories about trauma to the breast as cause for malignant changes in breast tissue were advanced. The discovery of breast lumps and swellings fueled controversies about hard tumors and whether lumps were benign stages of cancer. Medical opinion about necessary immediate treatment varied. The surgeon Benjamin Bell advocated removal of the entire breast, even when only a portion was affected.
Breast cancer was uncommon until the 19th century, when improvements in sanitation and control of deadly infectious diseases resulted in dramatic increases in lifespan. Previously, most women had died too young to have developed breast cancer. In 1878, an article in Scientific American described historical treatment by pressure intended to induce local ischemia in cases when surgical removal were not possible. William Stewart Halsted started performing radical mastectomies in 1882, helped greatly by advances in general surgical technology, such as aseptic technique and anesthesia. The Halsted radical mastectomy often involved removing both breasts, associated lymph nodes, and the underlying chest muscles. This often led to long-term pain and disability, but was seen as necessary to prevent the cancer from recurring. Before the advent of the Halsted radical mastectomy, 20-year survival rates were only 10%; Halsted's surgery raised that rate to 50%.
Breast cancer staging systems were developed in the 1920s and 1930s to determining the extent to which a cancer has developed by growing and spreading. The first case-controlled study on breast cancer epidemiology was done by Janet Lane-Claypon, who published a comparative study in 1926 of 500 breast cancer cases and 500 controls of the same background and lifestyle for the British Ministry of Health. Radical mastectomies remained the standard of care in the USA until the 1970s, but in Europe, breast-sparing procedures, often followed by radiation therapy, were generally adopted in the 1950s. In 1955 George Crile Jr. published Cancer and Common Sense arguing that cancer patients needed to understand available treatment options. Crile became a close friend of the environmentalist Rachel Carson, who had undergone a Halsted radical mastectomy in 1960 to treat her malign breast cancer. The US oncologist Jerome Urban promoted super radical mastectomies, taking even more tissue, until 1963, when the ten-year survival rates proved equal to the less-damaging radical mastectomy. Carson died in 1964 and Crile went on to published a wide variety of articles, both in the popular press and in medical journals, challenging the widespread use of the Halsted radical mastectomy. In 1973 Crile published What Women Should Know About the Breast Cancer Controversy. When in 1974 Betty Ford was diagnosed with breast cancer, the options for treating breast cancer were openly discussed in the press. During the 1970s, a new understanding of metastasis led to perceiving cancer as a systemic illness as well as a localized one, and more sparing procedures were developed that proved equally effective.
In the 1980s and 1990s, thousands of women who had successfully completed standard treatment then demanded and received high-dose bone marrow transplants, thinking this would lead to better long-term survival. However, it proved completely ineffective, and 15–20% of women died because of the brutal treatment. The 1995 reports from the Nurses' Health Study and the 2002 conclusions of the Women's Health Initiative trial conclusively proved that HRT significantly increased the incidence of breast cancer.
Society and culture
Before the 20th century, breast cancer was feared and discussed in hushed tones, as if it were shameful. As little could be safely done with primitive surgical techniques, women tended to suffer silently rather than seeking care. When surgery advanced, and long-term survival rates improved, women began raising awareness of the disease and the possibility of successful treatment. The "Women's Field Army", run by the American Society for the Control of Cancer (later the American Cancer Society) during the 1930s and 1940s was one of the first organized campaigns. In 1952, the first peer-to-peer support group, called "Reach to Recovery", began providing post-mastectomy, in-hospital visits from women who had survived breast cancer.
The breast cancer movement of the 1980s and 1990s developed out of the larger feminist movements and women's health movement of the 20th century. This series of political and educational campaigns, partly inspired by the politically and socially effective AIDS awareness campaigns, resulted in the widespread acceptance of second opinions before surgery, less invasive surgical procedures, support groups, and other advances in care.
Pink ribbon
A pink ribbon is the most prominent symbol of breast cancer awareness. Pink ribbons, which can be made inexpensively, are sometimes sold as fundraisers, much like poppies on Remembrance Day. They may be worn to honor those who have been diagnosed with breast cancer, or to identify products that the manufacturer would like to sell to consumers that are interested in breast cancer. In the 1990s, breast cancer awareness campaigns were launched by US-based corporations. As part of these cause-related marketing campaigns, corporations donated to a variety of breast cancer initiatives for every pink ribbon product that was purchased. The Wall Street Journal noted that "the strong emotions provoked by breast cancer translate to a company's bottom line". While many US corporations donated to existing breast cancer initiatives, others such as Avon established their own breast cancer foundations on the back of pink ribbon products.
Wearing or displaying a pink ribbon has been criticized by the opponents of this practice as a kind of slacktivism, because it has no practical positive effect. It has also been criticized as hypocrisy, because some people wear the pink ribbon to show good will towards women with breast cancer, but then oppose these women's practical goals, like patient rights and anti-pollution legislation. Critics say that the feel-good nature of pink ribbons and pink consumption distracts society from the lack of progress on preventing and curing breast cancer. It is also criticized for reinforcing gender stereotypes and objectifying women and their breasts. Breast Cancer Action launched the "Think Before You Pink" campaign in 2002 against pinkwashing, to target businesses that have co-opted the pink campaign to promote products that cause breast cancer, such as alcoholic beverages.
Breast cancer culture
In her 2006 book Pink Ribbons, Inc.: Breast Cancer and the Politics of Philanthropy Samantha King claimed that breast cancer has been transformed from a serious disease and individual tragedy to a market-driven industry of survivorship and corporate sales pitch. In 2010 Gayle Sulik argued that the primary purposes or goals of breast cancer culture are to maintain breast cancer's dominance as the pre-eminent women's health issue, to promote the appearance that society is doing something effective about breast cancer, and to sustain and expand the social, political, and financial power of breast cancer activists. In the same year Barbara Ehrenreich published an opinion piece in Harper's Magazine, lamenting that in breast cancer culture, breast cancer therapy is viewed as a rite of passage rather than a disease. To fit into this mold, the woman with breast cancer needs to normalize and feminize her appearance, and minimize the disruption that her health issues cause anyone else. Anger, sadness, and negativity must be silenced. As with most cultural models, people who conform to the model are given social status, in this case as cancer survivors. Women who reject the model are shunned, punished and shamed. The culture is criticized for treating adult women like little girls, as evidenced by "baby" toys such as pink teddy bears given to adult women.
Emphasis
In 2009 the US science journalist Christie Aschwanden criticized that the emphasis on breast cancer screening may be harming women by subjecting them to unnecessary radiation, biopsies, and surgery. One-third of diagnosed breast cancers might recede on their own. Screening mammography efficiently finds non-life-threatening, asymptomatic breast cancers and precancers, even while overlooking serious cancers. According to the cancer researcher H. Gilbert Welch, screening mammography has taken the "brain-dead approach that says the best test is the one that finds the most cancers" rather than the one that finds dangerous cancers.
In 2002 it was noted that as a result of breast cancer's high visibility, the statistical results can be misinterpreted, such as the claim that one in eight women will be diagnosed with breast cancer during their lives – a claim that depends on the unrealistic assumption that no woman will die of any other disease before the age of 95. By 2010 the breast cancer survival rate in Europe was 91% at one years and 65% at five years. In the USA the five-year survival rate for localized breast cancer was 96.8%, while in cases of metastases it was only 20.6%. Because the prognosis for breast cancer was at this stage relatively favorable, compared to the prognosis for other cancers, breast cancer as cause of death among women was 13.9% of all cancer deaths. The second most common cause of death from cancer in women was lung cancer, the most common cancer worldwide for men and women. The improved survival rate made breast cancer the most prevalent cancer in the world. In 2010 an estimated 3.6 million women worldwide have had a breast cancer diagnosis in the past five years, while only 1.4 million male or female survivors from lung cancer were alive.
Health disparities in breast cancer
There are ethnic disparities in the mortality rates for breast cancer as well as in breast cancer treatment. Breast cancer is the most prevalent cancer affecting women of every ethnic group in the United States. Breast cancer incidence among Black women aged 45 and older is higher than that of white women in the same age group. White women aged 60–84 have higher incidence rates of breast cancer than Black women. Despite this, Black women at every age are more likely to succumb to breast cancer.
Breast cancer treatment has improved greatly over the years, but Black women are still less likely to obtain treatment compared to white women. Risk factors such as socioeconomic status, late-stage, or breast cancer at diagnosis, genetic differences in tumor subtypes, and differences in healthcare access all contribute to these disparities. Socioeconomic determinants affecting the disparity in breast cancer illness include poverty, culture, and social injustice. In Hispanic women, the incidence of breast cancer is lower than in non-Hispanic women, but is often diagnosed at a later stage than white women with larger tumors.
Black women are usually diagnosed with breast cancer at a younger age than white women. The median age of diagnosis for Black women is 59, in comparison to 62 in White women. The incidence of breast cancer in Black women has increased by 0.4% per year since 1975 and 1.5% per year among Asian/Pacific Islander women since 1992. Incidence rates were stable for non-Hispanic White, Hispanics, and Native American women. The five-year survival rate is noted to be 81% in Black women and 92% in White women. Chinese and Japanese women have the highest survival rates.
Disparities in breast cancer screenings
Low-income, immigrant, disabled, and racial and sexual minority women are less likely to undergo breast cancer screening and thus are more likely to receive late-stage diagnoses. Ensuring equitable health care, including breast cancer screenings, can positively affect these disparities.
Efforts to promote awareness about the significance of screenings, such as informational materials, are ineffective in reducing these disparities. Successful methods directly address the barriers that prevent access to screenings, such as language barriers or lack of health insurance.
Through community outreach in under-served communities, patient navigators and advocates can offer women personalized assistance with attending screening and follow-up appointments. However, the long-term benefits are unclear, primarily due to a lack of resources and staff to sustain these community-based solutions. Legislation that requires mandatory insurance coverage of language assistance and mammograms has also increased screening rates, particularly among ethnic minority communities. Innovative solutions proven effective include mobile screening vehicles, telehealth consultations, and online tools to assess potential risks and signs of breast cancer.
Disparities in breast cancer research
A diverse pool of participants in breast cancer research facilitates the investigation of the disease's unique risks and development patterns in ethnic minority populations. These populations experience better health outcomes from medical treatments designed based on research with diverse patient representation.
Within the United States, less than 3% of patients in clinical trials identify as Black, despite representing 12.7% of the national population. Hispanic and indigenous women are also significantly underrepresented in breast cancer research. Lengthy involvement in clinical trials without financial compensation discourages the participation of low-income women unable to miss work or afford traveling expenses. Monetary compensation, language interpreters, and patient navigators can increase the diversity of participants in research and clinical trials.
Special populations
Men
Breast cancer is relatively uncommon in men, but it can occur. Typically, a breast tumor appears as a lump in the breast. Men who develop gynecomastia (enlargement of the breast tissue due to hormone imbalance) are at increased risk, as are men with disease-associated variations in the BRCA2 gene, high exposure to estrogens, or men with Klinefelter syndrome (who have two copies of the X chromosome, and naturally high estrogen levels). Treatment typically involves surgery, followed by radiation if needed. Around 90% men's tumors are ER-positive, and are treated with endocrine therapy, typically tamoxifen. The disease course and prognosis is similar to that in women of similar age with similar disease characteristics.
Pregnant women
Diagnosing breast cancer in pregnant women is often delayed as symptoms can be masked by pregnancy-related breast changes. The diagnostic path is the same as in non-pregnant women, except that radiography of the abdomen is avoided. Chemotherapy is avoided during the first trimester, but can be safely administered through the rest of the pregnancy term. anti-HER2 treatments and endocrine therapies are delayed until after delivery. These treatments given after delivery can cross into the breast milk, and so breast feeding is generally not possible. The prognosis for pregnant women with breast cancer is similar to non-pregnant women of similar age.
Research
Treatments are being evaluated in clinical trials. This includes individual drugs, combinations of drugs, and surgical and radiation techniques Investigations include new types of targeted therapy, cancer vaccines, oncolytic virotherapy, gene therapy and immunotherapy.
The latest research is reported annually at scientific meetings such as that of the American Society of Clinical Oncology, San Antonio Breast Cancer Symposium, and the St. Gallen Oncology Conference in St. Gallen, Switzerland. These studies are reviewed by professional societies and other organizations, and formulated into guidelines for specific treatment groups and risk category.
Fenretinide, a retinoid, is also being studied as a way to reduce the risk of breast cancer. In particular, combinations of ribociclib plus endocrine therapy have been the subject of clinical trials.
A 2019 review found moderate certainty evidence that giving people antibiotics before breast cancer surgery helped to prevent surgical site infection (SSI). Further study is required to determine the most effective antibiotic protocol and use in women undergoing immediate breast reconstruction.
Cryoablation
As of 2014 cryoablation is being studied to see if it could be a substitute for a lumpectomy in small cancers. There is tentative evidence in those with tumors less than 2 centimeters across. It may also be used in those in who surgery is not possible. Another review states that cryoablation looks promising for early breast cancer of small size.
Breast cancer cell lines
Part of the current knowledge on breast carcinomas is based on in vivo and in vitro studies performed with cell lines derived from breast cancers. These provide an unlimited source of homogenous self-replicating material, free of contaminating stromal cells, and often easily cultured in simple standard media. The first breast cancer cell line described, BT-20, was established in 1958. Since then, and despite sustained work in this area, the number of permanent lines obtained has been strikingly low (about 100). Indeed, attempts to culture breast cancer cell lines from primary tumors have been largely unsuccessful. This poor efficiency was often due to technical difficulties associated with the extraction of viable tumor cells from their surrounding stroma. Most of the available breast cancer cell lines issued from metastatic tumors, mainly from pleural effusions. Effusions provided generally large numbers of dissociated, viable tumor cells with little or no contamination by fibroblasts and other tumor stroma cells. Many of the currently used BCC lines were established in the late 1970s. A very few of them, namely MCF-7, T-47D, MDA-MB-231 and SK-BR-3, account for more than two-thirds of all abstracts reporting studies on mentioned breast cancer cell lines, as concluded from a Medline-based survey.
Molecular markers
Metabolic markers
Clinically, the most useful metabolic markers in breast cancer are the estrogen and progesterone receptors that are used to predict response to hormone therapy. New or potentially new markers for breast cancer include BRCA1 and BRCA2 to identify people at high risk of developing breast cancer, HER-2, and SCD1, for predicting response to therapeutic regimens, and urokinase plasminogen activator, PA1-1 and SCD1 for assessing prognosis.
Artificial intelligence
The integration of artificial intelligence (AI) in breast cancer diagnosis and management has the potential to improve healthcare practices and enhance patient care. With the adoption of advanced technologies like surgical robots, healthcare providers are able to achieve greater accuracy and efficiency in surgeries related to breast diseases. AI can be used to predict breast cancer risk.
These AI-driven robots use algorithms to provide real-time guidance, analyze imaging data, and execute procedures with precision, ultimately leading to improved surgical outcomes for people with breast cancer. Moreover, AI has the potential to transform the methods of monitoring and personalized treatment, using remote monitoring systems to facilitate continuous observation of a person's health status, assist early detection of disease progression, and enable individualized treatment options. The overall impact of these technological advancements enhances quality of care, promoting more interactive and personalized healthcare solutions.
Other animals
Mammary tumor for breast cancer in other animals
Mouse models of breast cancer metastasis
| Biology and health sciences | Cancer | null |
70565 | https://en.wikipedia.org/wiki/Itaipu%20Dam | Itaipu Dam | The Itaipu Dam ( ; ; ) is a hydroelectric dam on the Paraná River located on the border between Brazil and Paraguay. It is the third largest hydroelectric dam in the world, and holds the 45th largest reservoir in the world.
The name "Itaipu" was taken from an isle that existed near the construction site. In the Guarani language, means "the sounding stone". The Itaipu Dam's hydroelectric power plant produced the second-most electricity of any in the world as of 2020, only surpassed by the Three Gorges Dam plant in China in electricity production.
Completed in 1984, it is a binational undertaking run by Brazil and Paraguay at the border between the two countries, north of the Friendship Bridge. The project ranges from Foz do Iguaçu, in Brazil, and Ciudad del Este in Paraguay, in the south to Guaíra and Salto del Guairá in the north. The installed generation capacity of the plant is 14 GW, with 20 generating units providing 700 MW each with a hydraulic design head of . In 2016, the plant employed 3038 workers.
Of the twenty generator units currently installed, ten generate at 50 Hz for Paraguay and ten generate at 60 Hz for Brazil. Since the output capacity of the Paraguayan generators far exceeds the load in Paraguay, most of their production is exported directly to the Brazilian side, from where two 600 kV HVDC lines, each approximately long, carry the majority of the energy to the São Paulo/Rio de Janeiro region where the terminal equipment converts the power to 60 Hz.
History
Negotiations between Brazil and Paraguay
The concept behind the Itaipu Power Plant was the result of serious negotiations between the two countries during the 1960s. The "Ata do Iguaçu" (Iguaçu Act) was signed on July 22, 1966, by the Brazilian and Paraguayan Ministers of Foreign Affairs, Juracy Magalhães and Raúl Sapena Pastor. This was a joint declaration of the mutual interest in studying the exploitation of the hydro resources that the two countries shared in the section of the Paraná River starting from, and including, the Salto de Sete Quedas, to the Iguaçu River watershed. The treaty that gave origin to the power plant was signed in 1973.
The terms of the treaty, which expired in 2023, have been the subject of widespread discontent in Paraguay. The government of President Lugo vowed to renegotiate the terms of the treaty with Brazil, which long remained hostile to any renegotiation.
In 2009, Brazil agreed to a fairer payment of electricity to Paraguay and also allowed Paraguay to sell excess power directly to Brazilian companies instead of solely through the Brazilian electricity monopoly.
Construction starts
In 1970, the consortium formed by the companies ELC Electroconsult S.p.A. (from Italy) and IECO (from the United States) won the international competition for the realization of the viability studies and for the elaboration of the construction project. Design studies began in February 1971. On April 26, 1973, Brazil and Paraguay signed the Itaipu Treaty, the legal instrument for the hydroelectric exploitation of the Paraná River by the two countries. On May 17, 1974, the Itaipu Binacional entity was created to administer the plant's construction. The construction began in January of the following year. Brazil's (and Latin America's) first electric car was introduced in late 1974; it received the name Itaipu in honor of the project.
Paraná River rerouted
On October 14, 1978, the Paraná River had its route changed, which allowed a section of the riverbed to dry so the dam could be built there.
Agreement by Brazil, Paraguay, and Argentina
The construction of the dam was first contested by Argentina, but the negotiations and resolution of the dispute ended up setting the basis for Argentine–Brazilian integration later on.
An important diplomatic settlement was reached with the signing of the Acordo Tripartite by Brazil, Paraguay and Argentina, on October 19, 1979. This agreement established the allowed river levels and how much they could change as a result of the various hydroelectrical undertakings in the watershed that was shared by the three countries.
Formation of the lake
The reservoir began its formation on October 13, 1982, when the dam works were completed and the side canal's gates were closed. Heavy rains and flooding accelerated the filling of the reservoir, and the rising water reached the gates of the spillway on October 27.
Start of operations
On May 5, 1984, the first generation unit started running in Itaipu. The first 18 units were installed at the rate of two to three a year; the last two of these started running in 1991.
Capacity expansion in 2007
The last two of the 20 electric generation units started operations in September 2006 and in March 2007, thus raising the installed capacity to 14 GW and completing the power plant. This increase in capacity allows 18 generation units to run permanently while two are shut down for maintenance. Due to a clause in the treaty signed between Brazil, Paraguay and Argentina, the maximum number of generating units allowed to operate simultaneously cannot exceed 18 (see the agreement section for more information).
The rated nominal power of each generating unit (turbine and generator) is 700 MW. However, because the head (difference between reservoir level and the river level at the bottom of the dam) that actually occurs is higher than the designed head (), the power available exceeds 750 MW half of the time for each generator.
Each turbine generates around 700 MW; by comparison, all the water from the Iguaçu Falls would have the capacity to feed only two generators.
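The per-unit figures above follow from the standard hydropower relation P = η·ρ·g·Q·H. The sketch below illustrates the arithmetic; the flow, head, and efficiency values are assumed round figures for illustration, not values stated in the article.

```python
# Rough sketch of the hydropower relation for one Itaipu-class generating unit.
# Flow, head, and efficiency below are illustrative assumptions.

def hydro_power_mw(flow_m3s: float, head_m: float, efficiency: float) -> float:
    """Electrical power in MW from flow (m^3/s), net head (m), and overall efficiency."""
    rho = 1000.0  # water density, kg/m^3
    g = 9.81      # gravitational acceleration, m/s^2
    return efficiency * rho * g * flow_m3s * head_m / 1e6

# Assumed rated flow ~690 m^3/s, design head ~118 m, combined efficiency ~0.88:
p_design = hydro_power_mw(690, 118, 0.88)

# A higher actual head raises output for the same flow, which is why a unit
# can exceed its 700 MW rating part of the time:
p_high_head = hydro_power_mw(690, 125, 0.88)

print(f"{p_design:.0f} MW at design head, {p_high_head:.0f} MW at higher head")
```

With these assumed inputs the design-head output lands near the 700 MW nameplate rating, and the same flow through a higher head pushes the unit above it.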
November 2009 power failure
On November 10, 2009, transmission from the plant was completely disrupted, possibly due to a storm damaging up to three high-voltage transmission lines. Itaipu itself was not damaged. This caused massive power outages in Brazil and Paraguay, blacking out the entire country of Paraguay for 15 minutes, and plunging Rio de Janeiro and São Paulo into darkness for more than 2 hours. 50 million people were reportedly affected. The blackout occurred at 22:13 local time. It affected the southeast of Brazil most severely, leaving São Paulo, Rio de Janeiro and Espírito Santo completely without electricity. Blackouts also swept through the interior of Rio Grande do Sul, Santa Catarina, Mato Grosso do Sul, Mato Grosso, the interior of Bahia and parts of Pernambuco, energy officials said. By 00:30 power had been restored to most areas.
Wonder of the Modern World
In 1994, the American Society of Civil Engineers named the Itaipu Dam one of the Seven Wonders of the Modern World. In 1995, the American magazine Popular Mechanics published the results.
Social and environmental impacts
When construction of the dam began, approximately 10,000 families living beside the Paraná River were displaced because of construction.
The world's largest waterfall by volume, the Guaíra Falls, was inundated by the newly formed Itaipu reservoir. The Brazilian government later liquidated the Guaíra Falls National Park. A few months before the reservoir was filled, 80 people died when an overcrowded bridge overlooking the falls collapsed, as tourists sought a last glimpse of the falls.
The Guaíra Falls was an effective barrier that separated freshwater species in the upper Paraná basin (with its many endemics) from species found below it, and the two are recognized as different ecoregions. After the falls disappeared, many species formerly restricted to one of these areas have been able to invade the other, causing problems typically associated with introduced species. For example, more than 30 fish species that formerly were restricted to the region below the falls have been able to invade the region above.
The American composer Philip Glass has written a symphonic cantata named Itaipu, in honour of the structure.
The Santa Maria Ecological Corridor now connects the Iguaçu National Park with the protected margins of Lake Itaipu, and via these margins with the Ilha Grande National Park.
Statistics
Construction
The course of the seventh-largest river in the world was shifted, and 50 million tonnes of earth and rock were moved.
The amount of concrete used to build the Itaipu Power Plant would be enough to build 210 football stadiums the size of the Estádio do Maracanã.
The iron and steel used would allow for the construction of 380 Eiffel Towers.
The volume of excavation of earth and rock in Itaipu is 8.5 times greater than that of the Channel Tunnel and the volume of concrete is 15 times greater.
Around forty thousand people worked in the construction.
Itaipu is one of the most expensive objects ever built.
Generating station and dam
The total length of the dam is . The crest elevation is . Itaipu is actually four dams joined together – from the far left, an earth fill dam, a rock fill dam, a concrete buttress main dam, and a concrete wing dam to the right.
The spillway has a length of .
The maximum flow of Itaipu's fourteen segmented spillways is , into three ski-slope-shaped canals. It is equivalent to 40 times the average flow of the nearby natural Iguaçu Falls.
The flow of two generators ( each) is roughly equivalent to the average flow of the Iguaçu Falls ().
The dam is high, equivalent to a 65-story building.
Though only the seventh-largest reservoir in Brazil, Itaipu's reservoir has the highest ratio of electricity production to flooded area. For the 14,000 MW of installed power, were flooded. The reservoirs for the hydroelectric power plants of Sobradinho Dam, Tucuruí Dam, Porto Primavera Dam, Balbina Dam, Serra da Mesa Dam and Furnas Dam are all larger than the one for Itaipu, but have a smaller installed generating capacity. The one with the next-largest hydroelectric production, Tucuruí, has an installed capacity of 8,000 MW, while flooding of land.
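The "capacity per flooded area" comparison is a one-line division. The reservoir areas below are assumed round figures inserted purely for illustration (the text above does not state them), but they show the kind of gap the comparison with Tucuruí implies.

```python
# Illustrative comparison of installed capacity per unit of flooded area.
# The reservoir areas are ASSUMED round figures, not values from the article.

def power_density(installed_mw: float, flooded_km2: float) -> float:
    """Installed capacity per flooded area, in MW per km^2."""
    return installed_mw / flooded_km2

itaipu = power_density(14000, 1350)   # assumed ~1,350 km^2 flooded
tucurui = power_density(8000, 2850)   # assumed ~2,850 km^2 flooded

print(f"Itaipu: {itaipu:.1f} MW/km^2, Tucurui: {tucurui:.1f} MW/km^2")
```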
Electricity generated by the Itaipu Dam is 55% cheaper than that produced by the other types of power plants in the area.
Generation
Although its designed peak generating capacity is only 14,000 MW, behind the 22,500 MW of the Three Gorges Dam, Itaipu formerly held the record for annual energy production, with 101.6 TWh produced in 2016. That record was beaten in 2020, when the Three Gorges Dam produced 111.8 TWh after extensive monsoon rainfall that year.
In the period 2012–2021, the Itaipu Dam maintained the second highest average annual hydroelectric production in the world averaging 89.22 TWh per year, second to the 97.22 TWh per year average of the Three Gorges Dam in that period.
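The figures above imply a remarkably high capacity factor for Itaipu: average annual energy divided by what the plant would produce running at full installed capacity all year. Using only numbers from the text (14 GW installed, 89.22 TWh/yr average; 22.5 GW and 97.22 TWh/yr for Three Gorges):

```python
# Back-of-envelope capacity factor from the figures quoted in the text.

def capacity_factor(avg_annual_twh: float, installed_gw: float) -> float:
    hours_per_year = 8760
    max_twh = installed_gw * hours_per_year / 1000  # GW * h -> TWh
    return avg_annual_twh / max_twh

cf_itaipu = capacity_factor(89.22, 14)
cf_three_gorges = capacity_factor(97.22, 22.5)

print(f"Itaipu ~{cf_itaipu:.0%}, Three Gorges ~{cf_three_gorges:.0%}")
```

Itaipu comes out near 73%, versus roughly 49% for Three Gorges, which is why the smaller plant can rival the larger one in annual output.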
Orthoptera
Orthoptera () is an order of insects that comprises the grasshoppers, locusts, and crickets, including closely related insects, such as the bush crickets or katydids and wētā. The order is subdivided into two suborders: Caelifera – grasshoppers, locusts, and close relatives; and Ensifera – crickets and close relatives.
More than 20,000 species are distributed worldwide. The insects in the order have incomplete metamorphosis, and produce sound (known as stridulation) by rubbing their wings against each other or against their legs, the wings or legs bearing rows of corrugated bumps. The tympanum, or ear, is located in the front tibia in crickets, mole crickets, and bush crickets or katydids, and on the first abdominal segment in the grasshoppers and locusts. These organisms use vibrations to locate other individuals.
Grasshoppers and other orthopterans are able to fold their wings (i.e. they are members of Neoptera).
Etymology
The name is derived from the Greek meaning "straight" and meaning "wing".
Characteristics
Orthopterans have a generally cylindrical body, with elongated hindlegs and musculature adapted for jumping. They have mandibulate mouthparts for biting and chewing and large compound eyes, and may or may not have ocelli, depending on the species. The antennae are filiform (thread-like), have multiple joints, and are of variable length.
The first and third segments on the thorax are larger, while the second segment is much smaller. They have two pairs of wings, which are held overlapping the abdomen at rest. The forewings, or tegmina, are narrower than the hindwings and hardened at the base, while the hindwings are membranous, with straight veins and numerous cross-veins. At rest, the hindwings are held folded fan-like under the forewings. The final two to three segments of the abdomen are reduced, and have single-segmented cerci.
Life cycle
Orthopterans have a paurometabolous lifecycle or incomplete metamorphosis. The use of sound is generally crucial in courtship, and most species have distinct songs. Most grasshoppers lay their eggs in the ground or on vegetation. The eggs hatch and the young nymphs resemble adults, but lack wings and at this stage are often called 'hoppers'. They may often also have a radically different coloration from the adults. Through successive moults, the nymphs develop wings until their final moult into a mature adult with fully developed wings.
The number of moults varies between species; growth is also very variable and may take a few weeks to some months depending on food availability and weather conditions.
Evolution
This order evolved with a division into two suborders – Caelifera and Ensifera – occurring .
Phylogeny
The Orthoptera are divided into two suborders, Caelifera and Ensifera, that have been shown to be monophyletic. A recent comprehensive phylogeny based on analyses of data from transcriptomes and mitochondrial genomes found the following relationships within Orthoptera.
Hummingbird
Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.
Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring in length. The smallest is the bee hummingbird, which weighs less than , and the largest is the giant hummingbird, weighing . Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.
They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.
Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about .
Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.
Description
Hummingbirds are the smallest known and smallest living avian theropod dinosaurs. The iridescent colors and highly specialized feathers of many species (mainly in males) give some hummingbirds exotic common names, such as sun gem, fairy, woodstar, sapphire or sylph.
Morphology
Across the estimated 366 species, hummingbird weights range from as small as to as large as . They have characteristic long, narrow beaks (bills), which may be straight (of varying lengths) or highly curved. The bee hummingbird, only long and weighing about , is the world's smallest bird and smallest warm-blooded vertebrate.
Hummingbirds have compact bodies with relatively long, bladelike wings having anatomical structure enabling helicopter-like flight in any direction, including the ability to hover. Particularly while hovering, the wing beats produce the humming sounds, which function to alert other birds. In some species, the tail feathers produce sounds used by males during courtship flying. Hummingbirds have extremely rapid wing-beats as high as 80 per second, supported by a high metabolic rate dependent on foraging for sugars from flower nectar.
Hummingbird legs are short, with feet having three toes pointing forward and one, the hallux, pointing backward. The toes of hummingbirds are formed as claws with ridged inner surfaces to aid gripping onto flower stems or petals. Hummingbirds do not walk on the ground or hop like most birds, but rather shuffle laterally and use their feet to grip while perching, preening feathers, or nest-building (by females), and during fights to grab feathers of opponents.
Hummingbirds apply their legs as pistons for generating thrust upon taking flight, although the shortness of their legs provides about 20% less propulsion than assessed in other birds. During flight, hummingbird feet are tucked up under the body, enabling optimal aerodynamics and maneuverability.
Of those species that have been measured during flight, the top flight speeds of hummingbirds exceed . During courtship, some male species dive from of height above a female at speeds around .
The sexes differ in feather coloration, with males having distinct brilliance and ornamentation of head, neck, wing, and breast feathers. The most typical feather ornament in males is the gorget, a bib-like iridescent neck-feather patch that changes brilliance with the viewing angle, serving to attract females and warn male competitors away from territory.
Life cycle
Hummingbirds begin mating when they are a year old. Mating occurs over 3–5 seconds, when the male joins his cloaca with the female's, passing sperm to fertilize her eggs.
Hummingbird females build a nest resembling a small cup about in diameter, commonly attached to a tree branch using spider webs, lichens, moss, and loose strings of plant fibers. Typically, two pea-shaped white eggs, the smallest of any bird, are incubated over 2–3 weeks in the breeding season. Fed by regurgitation only from the mother, the chicks fledge about 3 weeks after hatching.
The average lifespan of a ruby-throated hummingbird is estimated to be 3–5 years, with most deaths occurring in yearlings, although one banded ruby-throated hummingbird lived for 9 years and 2 months. Bee hummingbirds live 7–10 years.
Population estimates and threatened species
Although most hummingbird species live in remote habitats where their population numbers are difficult to assess, population studies in the United States and Canada indicate that the ruby-throated hummingbird numbers are around 34 million, rufous hummingbirds are around 19 million, black-chinned, Anna's, and broad-tailed hummingbirds are about 8 million each, calliopes at 4 million, and Costa's and Allen's hummingbirds are around 2 million each. Several species exist only in the thousands or hundreds.
According to the International Union for Conservation of Nature Red List of Threatened Species in 2024, 8 hummingbird species are classified as critically endangered, 13 as endangered, 13 as vulnerable, and 20 as near-threatened. Two species, the Brace's emerald (Riccordia bracei) and the Caribbean emerald (Riccordia elegans), have been declared extinct.
Of the 15 species of North American hummingbirds that inhabit the United States and Canada, several have changed their range of distribution, while others have shown declines in numbers since the 1970s; as of 2023, dozens of hummingbird species were in decline. In the 21st century, rufous, Costa's, calliope, broad-tailed, and Allen's hummingbirds have been in significant decline, some losing as much as 67% of their numbers since 1970, nearly double the rate of population loss over the previous 50 years. The ruby-throated hummingbird, the most populous North American hummingbird, decreased by 17% over the early 21st century. Habitat loss, glass collisions, cat predation, pesticides, and possibly climate change affecting food availability, migration signals, and breeding are factors that may contribute to declining hummingbird numbers. By contrast, Anna's hummingbirds have seen large population growth at an accelerating rate since 2010, and have expanded their range northward to reside year-round in cold winter climates.
Superficially similar species
Some species of sunbirds, an Old World group restricted to Eurasia, Africa, and Australia, resemble hummingbirds in appearance and behavior, but are not related to hummingbirds; their resemblance is due to convergent evolution.
The hummingbird moth has flying and feeding characteristics similar to those of a hummingbird. Hummingbirds may be mistaken for hummingbird hawk-moths, which are large, flying insects with hovering capabilities, and exist only in Eurasia.
Range
Hummingbirds are restricted to the Americas from south central Alaska to Tierra del Fuego, including the Caribbean. The majority of species occur in tropical and subtropical Central and South America, but several species also breed in temperate climates and some hillstars occur even in alpine Andean highlands at altitudes up to .
The greatest species richness is in humid tropical and subtropical forests of the northern Andes and adjacent foothills, but the number of species found in the Atlantic Forest, Central America or southern Mexico also far exceeds the number found in southern South America, the Caribbean islands, the United States, and Canada. While fewer than 25 different species of hummingbirds have been recorded from the United States and fewer than 10 from Canada and Chile each, Colombia alone has more than 160 and the comparably small Ecuador has about 130 species.
Taxonomy and systematics
The family Trochilidae was introduced in 1825 by Irish zoologist Nicholas Aylward Vigors with Trochilus as the type genus.
In traditional taxonomy, hummingbirds are placed in the order Apodiformes, which also contains the swifts, but some taxonomists have separated them into their own order, the Trochiliformes. Hummingbirds' wing bones are hollow and fragile, making fossilization difficult and leaving their evolutionary history poorly documented. Though scientists theorize that hummingbirds originated in South America, where species diversity is greatest, possible ancestors of extant hummingbirds may have lived in parts of Europe and what is southern Russia today.
As of 2023, 366 hummingbird species have been identified. They have been traditionally divided into two subfamilies: the hermits (subfamily Phaethornithinae) and the typical hummingbirds (subfamily Trochilinae, all the others). Molecular phylogenetic studies have shown, though, that the hermits are sister to the topazes, making the former definition of the Trochilinae not monophyletic. The hummingbirds form nine major clades: the topazes and jacobins, the hermits, the mangoes, the coquettes, the brilliants, the giant hummingbird (Patagona gigas), the mountaingems, the bees, and the emeralds. The topazes and jacobins combined have the oldest split with the rest of the hummingbirds. The hummingbird family has the third-greatest number of species of any bird family (after the tyrant flycatchers and the tanagers).
Fossil hummingbirds are known from the Pleistocene of Brazil and the Bahamas, but neither has yet been scientifically described, and fossils and subfossils of a few extant species are known. Until recently, older fossils had not been securely identifiable as those of hummingbirds.
In 2004, Gerald Mayr identified two 30-million-year-old hummingbird fossils. The fossils of this primitive hummingbird species, named Eurotrochilus inexpectatus ("unexpected European hummingbird"), had been sitting in a museum drawer in Stuttgart; they had been unearthed in a clay pit at Wiesloch–Frauenweiler, south of Heidelberg, Germany, and, because hummingbirds were assumed to have never occurred outside the Americas, were not recognized to be hummingbirds until Mayr took a closer look at them.
Fossils of birds not clearly assignable to either hummingbirds or a related extinct family, the Jungornithidae, have been found at the Messel pit and in the Caucasus, dating from 35 to 40 million years ago; this indicates that the split between these two lineages indeed occurred around that time. The areas where these early fossils have been found had a climate quite similar to that of the northern Caribbean or southernmost China during that time. The biggest remaining mystery at present is what happened to hummingbirds in the roughly 25 million years between the primitive Eurotrochilus and the modern fossils. The astounding morphological adaptations, the decrease in size, and the dispersal to the Americas and extinction in Eurasia all occurred during this timespan. DNA–DNA hybridization results suggest that the main radiation of South American hummingbirds took place at least partly in the Miocene, some 12 to 13 million years ago, during the uplifting of the northern Andes.
In 2013, a 50-million-year-old bird fossil unearthed in Wyoming was found to be a predecessor to hummingbirds and swifts before the groups diverged.
Evolution
Hummingbirds split from other members of Apodiformes, the insectivorous swifts (family Apodidae) and treeswifts (family Hemiprocnidae), about 42 million years ago, probably in Eurasia. Despite their current New World distribution, the earliest species of hummingbird occurred in the early Oligocene (Rupelian about 34–28 million years ago) of Europe, belonging to the genus Eurotrochilus, having similar morphology to modern hummingbirds.
Phylogeny
A phylogenetic tree unequivocally indicates that modern hummingbirds originated in South America, with the last common ancestor of all living hummingbirds living around 22 million years ago.
A map of the hummingbird family tree – reconstructed from analysis of 284 species – shows rapid diversification from 22 million years ago. Hummingbirds fall into nine main clades – the topazes, hermits, mangoes, brilliants, coquettes, the giant hummingbird, mountaingems, bees, and emeralds – reflecting their relationships with the nectar-bearing flowering plants that draw hummingbirds into new geographic areas.
Molecular phylogenetic studies of the hummingbirds have shown that the family is composed of nine major clades. When Edward Dickinson and James Van Remsen Jr. updated the Howard and Moore Complete Checklist of the Birds of the World for the 4th edition in 2013, they divided the hummingbirds into six subfamilies.
Molecular phylogenetic studies determined the relationships between the major groups of hummingbirds. In the cladogram below, the English names are those introduced in 1997. The scientific names are those introduced in 2013.
While all hummingbirds depend on flower nectar to fuel their high metabolisms and hovering flight, coordinated changes in flower and bill shape stimulated the formation of new species of hummingbirds and plants. Due to this exceptional evolutionary pattern, as many as 140 hummingbird species can coexist in a specific region, such as the Andes range.
The hummingbird evolutionary tree shows that one key evolutionary factor appears to have been an altered taste receptor that enabled hummingbirds to seek nectar.
Upon maturity, males of a particular species, Phaethornis longirostris, the long-billed hermit, appear to be evolving a dagger-like weapon on the beak tip as a secondary sexual trait to defend mating areas.
Geographic diversification
The Andes Mountains appear to be a particularly rich environment for hummingbird evolution because diversification occurred simultaneously with mountain uplift over the past 10 million years. Hummingbirds remain in dynamic diversification inhabiting ecological regions across South America, North America, and the Caribbean, indicating an enlarging evolutionary radiation.
Within the same geographic region, hummingbird clades coevolved with nectar-bearing plant clades, affecting mechanisms of pollination. The same is true for the sword-billed hummingbird (Ensifera ensifera), one of the morphologically most extreme species, and one of its main food plant clades (Passiflora section Tacsonia).
Coevolution with ornithophilous flowers
Hummingbirds are specialized nectarivores tied to the ornithophilous flowers upon which they feed. This coevolution implies that morphological traits of hummingbirds, such as bill length, bill curvature, and body mass, are correlated with morphological traits of plants, such as corolla length, curvature, and volume. Some species, especially those with unusual bill shapes, such as the sword-billed hummingbird and the sicklebills, are coevolved with a small number of flower species. Even in the most specialized hummingbird–plant mutualisms, the number of food plant lineages of the individual hummingbird species increases with time. The bee hummingbird (Mellisuga helenae) – the world's smallest bird – likely evolved to dwarfism because it had to compete with long-billed hummingbirds, which have an advantage in foraging nectar from specialized flowers; its small size consequently lets it compete more successfully with insects for flower foraging.
Many plants pollinated by hummingbirds produce flowers in shades of red, orange, and bright pink, although the birds take nectar from flowers of other colors. Hummingbirds can see wavelengths into the near-ultraviolet, but hummingbird-pollinated flowers do not reflect these wavelengths as many insect-pollinated flowers do. This narrow color spectrum may render hummingbird-pollinated flowers relatively inconspicuous to most insects, thereby reducing nectar robbing. Hummingbird-pollinated flowers also produce relatively weak nectar (averaging 25% sugars) containing a high proportion of sucrose, whereas insect-pollinated flowers typically produce more concentrated nectars dominated by fructose and glucose.
Hummingbirds and the plants they visit for nectar have a tight coevolutionary association, generally called a plant–bird mutualistic network. These birds show high specialization and modularity, especially in communities with high species richness. These associations are also observed when closely related hummingbirds, such as two species of the same genus, visit distinct sets of flowering species.
Sexual dimorphisms
Hummingbirds exhibit sexual size dimorphism according to Rensch's rule, in which males are smaller than females in small-bodied species, and males are larger than females in large-bodied species. The extent of this sexual size difference varies among clades of hummingbirds. For example, the Mellisugini clade (bees) exhibits a large size dimorphism, with females being larger than males. Conversely, the Lesbiini clade (coquettes) displays very little size dimorphism; males and females are similar in size.
Sexual dimorphisms in bill size and shape are also present between male and female hummingbirds, where in many clades, females have longer, more curved bills favored for accessing nectar from tall flowers. For males and females of the same size, females tend to have larger bills.
Sexual size and bill differences likely evolved due to constraints imposed by courtship, because mating displays of male hummingbirds require complex aerial maneuvers. Males tend to be smaller than females, allowing conservation of energy to forage competitively and participate more frequently in courtship. Thus, sexual selection favors smaller male hummingbirds.
Female hummingbirds tend to be larger, requiring more energy, with longer beaks that allow for more effective reach into crevices of tall flowers for nectar. Thus, females are better at foraging, acquiring flower nectar, and supporting the energy demands of their larger body size. Directional selection thus favors the larger hummingbirds in terms of acquiring food.
Another evolutionary cause of this sexual bill dimorphism is that the selective forces from competition for nectar between the sexes of each species drives sexual dimorphism. Depending on which sex holds territory in the species, the other sex having a longer bill and being able to feed on a wide variety of flowers is advantageous, decreasing intraspecific competition. For example, in species of hummingbirds where males have longer bills, males do not hold a specific territory and have a lek mating system. In species where males have shorter bills than females, males defend their resources, so females benefit from a longer bill to feed from a broader range of flowers.
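Rensch's rule, mentioned above, is commonly tested as the slope of a log-log regression of male size on female size across species: a slope greater than 1 means dimorphism grows with body size (males relatively larger in big species, relatively smaller in small ones). A minimal sketch with made-up species masses:

```python
# Sketch of a Rensch's-rule test via log-log regression.
# The species masses are hypothetical illustrative values.

import math

female_g = [2.5, 4.0, 6.5, 10.0, 18.0]        # hypothetical female masses (g)
male_g = [0.9 * f ** 1.08 for f in female_g]  # males constructed with slope 1.08

x = [math.log(f) for f in female_g]
y = [math.log(m) for m in male_g]

# Ordinary least-squares slope of log(male) on log(female)
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))

print(f"log-log slope = {slope:.2f}")  # > 1 is consistent with Rensch's rule
```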
Feather colors
The hummingbird plumage coloration gamut, particularly for blue, green, and purple colors in the gorget and crown of males, occupies 34% of the total color space for bird feathers. White (unpigmented) feathers have the lowest incidence in the hummingbird color gamut. Hummingbird plumage color diversity evolved from sexual and social selection on plumage coloration, which correlates with the rate of hummingbird species development over millions of years. Bright plumage colors in males are part of aggressive competition for flower resources and mating. The bright colors result from pigmentation in the feathers and from prism-like cells within the top layers of feathers of the head, gorget, breast, back, and wings. When sunlight hits these cells, it is split into wavelengths that reflect to the observer with varying degrees of intensity, the feather structure acting as a diffraction grating. Iridescent hummingbird colors result from a combination of refraction and pigmentation, since the diffraction structures themselves are made of melanin, a pigment, and may also be colored by carotenoid pigmentation and more subdued black, brown, or gray colors dependent on melanin.
By merely shifting position, feather regions of a muted-looking bird can instantly become fiery red or vivid green. In courtship displays for one example, males of the colorful Anna's hummingbird orient their bodies and feathers toward the sun to enhance the display value of iridescent plumage toward a female of interest.
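A first-order feel for that angle dependence can be had from a simple thin-film interference model, one idealization of the structural coloration described above. The refractive index and layer thickness below are assumed illustrative values, not measured feather parameters.

```python
# Minimal thin-film interference model of iridescence: a layer of optical
# thickness n*d reflects constructively at lambda = 2*n*d*cos(theta)/m.
# n and d are ASSUMED illustrative values.

import math

def reflected_wavelength_nm(n: float, d_nm: float, theta_deg: float, m: int = 1) -> float:
    """Order-m constructive-interference wavelength for a refraction angle
    theta (inside the layer, in degrees)."""
    theta = math.radians(theta_deg)
    return 2 * n * d_nm * math.cos(theta) / m

# Assumed melanin-like layer: n = 2.0, thickness 132 nm
head_on = reflected_wavelength_nm(2.0, 132, 0)   # ~528 nm, green
angled = reflected_wavelength_nm(2.0, 132, 30)   # shorter wavelength, bluer

print(f"{head_on:.0f} nm head-on vs {angled:.0f} nm at 30 degrees")
```

The reflected wavelength shortens as the viewing angle increases, which is the color shift a bird produces "by merely shifting position".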
One study of Anna's hummingbirds found that dietary protein was an influential factor in feather color, as birds receiving more protein grew significantly more colorful crown feathers than those fed a low-protein diet. Additionally, birds on a high-protein diet grew yellower (higher hue) green tail feathers than birds on a low-protein diet.
Specialized characteristics and metabolism
Humming
Hummingbirds are named for the prominent humming sound their wingbeats make while flying and hovering to feed or interact with other hummingbirds. Humming serves communication purposes by alerting other birds of the arrival of a fellow forager or potential mate. The humming sound derives from aerodynamic forces generated by the downstrokes and upstrokes of the rapid wingbeats, causing oscillations and harmonics that evoke an acoustic quality likened to that of a musical instrument. The humming sound of hummingbirds is unique among flying animals, compared to the whine of mosquitoes, buzz of bees, and "whoosh" of larger birds.
The wingbeats causing the hum of hummingbirds during hovering are achieved by elastic recoil of wing strokes produced by the main flight muscles: the pectoralis major (the main downstroke muscle) and supracoracoideus (the main upstroke muscle).
Vision
Although hummingbird eyes are small in diameter (5–6 mm), they are accommodated in the skull by reduced skull ossification, and occupy a larger proportion of the skull compared to other birds and animals.
Further, hummingbird eyes have large corneas, which comprise about 50% of the total transverse eye diameter, combined with an extraordinary density of retinal ganglion cells responsible for visual processing, containing some 45,000 neurons per mm2. The enlarged cornea relative to total eye diameter serves to increase the amount of light perception by the eye when the pupil is dilated maximally, enabling nocturnal flight.
During evolution, hummingbirds adapted to the navigational demands of visual processing during rapid flight and hovering by developing an exceptionally dense array of retinal neurons, allowing for increased spatial resolution in the lateral and frontal visual fields. Morphological studies of the hummingbird brain showed that the relatively largest neuronal hypertrophy of any bird occurs in a region called the pretectal nucleus lentiformis mesencephali (called the nucleus of the optic tract in mammals), which is responsible for refining dynamic visual processing while hovering and during rapid flight.
The enlargement of the brain region responsible for visual processing indicates an enhanced ability for perception and processing of fast-moving visual stimuli encountered during rapid forward flight, insect foraging, competitive interactions, and high-speed courtship. A study of broad-tailed hummingbirds indicated that hummingbirds have a fourth color-sensitive visual cone (humans have three) that detects ultraviolet light and enables discrimination of non-spectral colors, possibly having a role in flower identity, courtship displays, territorial defense, and predator evasion. The fourth color cone would extend the range of visible colors for hummingbirds to perceive ultraviolet light and color combinations of feathers and gorgets, colorful plants, and other objects in their environment, enabling detection of as many as five non-spectral colors, including purple, ultraviolet-red, ultraviolet-green, ultraviolet-yellow, and ultraviolet-purple.
Hummingbirds are highly sensitive to stimuli in their visual fields, responding to even minimal motion in any direction by reorienting themselves in midflight. Their visual sensitivity allows them to precisely hover in place while in complex and dynamic natural environments, functions enabled by the lentiform nucleus which is tuned to fast-pattern velocities, enabling highly tuned control and collision avoidance during forward flight.
Song, vocal learning, and hearing
Many hummingbird species exhibit a diverse vocal repertoire of chirps, squeaks, whistles and buzzes. Vocalizations vary in complexity and spectral content during social interactions, foraging, territorial defense, courtship, and mother-nestling communication. Territorial vocal signals may be produced in rapid succession to discourage aggressive encounters, with the chirping rate and loudness increasing when intruders persist. During the breeding season, male and female hummingbirds vocalize as part of courtship.
Hummingbirds exhibit vocal production learning, enabling song "dialects" that vary within the same species. For example, the blue-throated hummingbird's song differs from typical oscine songs in its wide frequency range, extending from 1.8 kHz to about 30 kHz. It also produces ultrasonic vocalizations that do not function in communication. As blue-throated hummingbirds often alternate singing with catching small flying insects, it is possible the ultrasonic clicks produced during singing disrupt insect flight patterns, making insects more vulnerable to predation. Anna's, Costa's, long-billed hermit, and Andean hummingbirds have song dialects that vary across habitat locations and phylogenetic clades.
The avian vocal organ, the syrinx, plays an important role in understanding hummingbird song production. What makes the hummingbird's syrinx different from that of other birds in the Apodiformes order is the presence of internal muscle structure, accessory cartilages, and a large tympanum that serves as an attachment point for external muscles, all of which are adaptations thought to be responsible for the hummingbird's increased ability in pitch control and large frequency range.
Hummingbird songs originate from at least seven specialized nuclei in the forebrain. A genetic expression study showed that these nuclei enable vocal learning (ability to acquire vocalizations through imitation), a rare trait known to occur in only two other groups of birds (parrots and songbirds) and a few groups of mammals (including humans, whales and dolphins, and bats). Within the past 66 million years, only hummingbirds, parrots, and songbirds out of 23 bird orders may have independently evolved seven similar forebrain structures for singing and vocal learning, indicating that evolution of these structures is under strong epigenetic constraints possibly derived from a common ancestor.
Generally, birds have been assessed to vocalize and hear in the range of 2–5 kHz, with hearing sensitivity falling at higher frequencies. In the Ecuadorian hillstar (Oreotrochilus chimborazo), vocalizations were recorded in the wild at frequencies above 10 kHz, well outside the known hearing ability of most birds. Song system nuclei in the hummingbird brain are similar to those of songbird brains, but the hummingbird brain has specialized regions involved in song processing.
Metabolism
Hummingbirds have the highest metabolism of all vertebrate animals – a necessity to support the rapid beating of their wings during hovering and fast forward flight. During flight and hovering, oxygen consumption per gram of muscle tissue in a hummingbird is about 10 times higher than that measured in elite human athletes. Hummingbirds achieve this extraordinary capacity for oxygen consumption by an exceptional density and proximity of capillaries and mitochondria in their flight muscles.
Hummingbirds are rare among vertebrates in their ability to rapidly make use of ingested sugars to fuel energetically expensive hovering flight, powering up to 100% of their metabolic needs with the sugars they drink. Hummingbird flight muscles have extremely high capacities for oxidizing carbohydrates and fatty acids via hexokinase, carnitine palmitoyltransferase, and citrate synthase enzymes at rates that are the highest known for vertebrate skeletal muscle. To sustain rapid wingbeats during flight and hovering, hummingbirds expend the human equivalent of 150,000 calories per day, an amount estimated to be 10 times the energy consumption by a marathon runner in competition.
Hummingbirds can use newly ingested sugars to fuel hovering flight within 30–45 minutes of consumption, indicating that they oxidize sugar in flight muscles at rates rapid enough to satisfy their extreme metabolic demands. A 2017 review showed that hummingbirds have in their flight muscles a mechanism for "direct oxidation" of sugars into maximal ATP yield, supporting a high metabolic rate for hovering, foraging at altitude, and migrating. This adaptation occurred through the evolutionary loss of a key gene, fructose-bisphosphatase 2 (FBP2), coinciding with the onset of hovering by hummingbirds, estimated from fossil evidence at some 35 million years ago. Without FBP2, glycolysis and mitochondrial respiration in flight muscles are enhanced, enabling hummingbirds to metabolize sugar more efficiently for energy.
By relying on newly ingested sugars to fuel flight, hummingbirds reserve their limited fat stores to sustain overnight fasting during torpor or to power migratory flights. Studies of hummingbird metabolism address how a migrating ruby-throated hummingbird can cross the Gulf of Mexico on a nonstop flight. This hummingbird, like other long-distance migrating birds, stores fat as a fuel reserve, augmenting its weight by as much as 100% and so providing metabolic fuel for flying over open water. The amount of fat (1–2 g) used by a migrating hummingbird to cross the Gulf of Mexico in a single flight is similar to that used by a human climbing about .
The heart rate of hummingbirds can reach as high as 1,260 beats per minute, a rate measured in a blue-throated hummingbird with a breathing rate of 250 breaths per minute at rest.
Heat dissipation
The high metabolic rate of hummingbirds – especially during rapid forward flight and hovering – produces increased body heat that requires specialized mechanisms of thermoregulation for heat dissipation, which becomes an even greater challenge in hot, humid climates. Hummingbirds dissipate heat partially by evaporation through exhaled air, and from body structures with thin or no feather covering, such as around the eyes, shoulders, under the wings (patagia), and feet.
While hovering, hummingbirds do not benefit from the heat loss by air convection during forward flight, except for air movement generated by their rapid wing-beat, possibly aiding convective heat loss from the extended feet. Smaller hummingbird species, such as the calliope, appear to adapt their relatively higher surface-to-volume ratio to improve convective cooling from air movement by the wings. When air temperatures rise above , thermal gradients driving heat passively by convective dissipation from around the eyes, shoulders, and feet are reduced or eliminated, requiring heat dissipation mainly by evaporation and exhalation. In cold climates, hummingbirds retract their feet into breast feathers to eliminate skin exposure and minimize heat dissipation.
Kidney function
The dynamic range of metabolic rates in hummingbirds requires a parallel dynamic range in kidney function. During a day of nectar consumption with a corresponding high water intake that may total five times the body weight per day, hummingbird kidneys process water via glomerular filtration rates (GFR) in amounts proportional to water consumption, thereby avoiding overhydration. During brief periods of water deprivation, however, such as in nighttime torpor, GFR drops to zero, preserving body water.
Hummingbird kidneys also have a unique ability to control the levels of electrolytes after consuming nectars with high amounts of sodium and chloride or none, indicating that kidney and glomerular structures must be highly specialized for variations in nectar mineral quality. Morphological studies on Anna's hummingbird kidneys showed adaptations of high capillary density in close proximity to nephrons, allowing for precise regulation of water and electrolytes.
Hemoglobin adaptation to altitude
Dozens of hummingbird species live year-round in tropical mountain habitats at high altitudes, such as in the Andes over ranges of to where the partial pressure of oxygen in the air is reduced, a condition of hypoxic challenge for the high metabolic demands of hummingbirds. In Andean hummingbirds living at high elevations, researchers found that the oxygen-carrying protein in blood hemoglobin had increased oxygen-binding affinity, and that this adaptive effect likely resulted from evolutionary mutations within the hemoglobin molecule via specific amino acid changes due to natural selection.
Adaptation to winter
Anna's hummingbirds are the northernmost year-round residents of any hummingbird species. They were recorded in Alaska as early as 1971, and have been resident in the Pacific Northwest since the 1960s, with the year-round population increasing particularly during the early 21st century. Scientists estimate that some Anna's hummingbirds overwinter, and presumably breed, at northern latitudes where food and shelter are available throughout winter, tolerating moderately cold winter temperatures.
During cold temperatures, Anna's hummingbirds gradually gain weight during the day as they convert sugar to fat. In addition, hummingbirds with inadequate stores of body fat or insufficient plumage are able to survive periods of subfreezing weather by lowering their metabolic rate and entering a state of torpor.
While the species' range was originally limited to the chaparral of California and Baja California, it expanded northward to Oregon, Washington, and British Columbia, and east to Arizona, during the 1960s and 1970s. This rapid expansion is attributed to the widespread planting of non-native species, such as eucalyptus, as well as the use of urban bird feeders, in combination with the species' natural tendency for extensive postbreeding dispersal. In the Pacific Northwest, the fastest-growing populations occur in regions with breeding-season cold temperatures similar to those of its native range. Northward expansion of the Anna's hummingbird represents an ecological release associated with introduced plants, year-round nectar availability from feeders supplied by humans, milder winter temperatures associated with climate change, and acclimation of the species to a winter climate cooler than that of its native region. Although quantitative data are absent, it is likely that, as of 2017, a sizable percentage of Anna's hummingbirds in the Pacific Northwest still migrate south for winter.
Anna's hummingbird is the official city bird of Vancouver, British Columbia, Canada, and is a non-migrating resident of Seattle where it lives year-round through winter enduring extended periods of subfreezing temperatures, snow, and high winds.
Torpor
The metabolism of hummingbirds can slow at night or at any time when food is not readily available; the birds enter a deep-sleep state (known as torpor) to prevent energy reserves from falling to a critical level. One study of broad-tailed hummingbirds found that body weight decreased linearly throughout torpor at a rate of 0.04 g per hour.
During nighttime torpor, body temperature in a Caribbean hummingbird was shown to fall from 40 to 18 °C, with heart and breathing rates slowing dramatically (the heart slowing to roughly 50–180 bpm from a daytime rate above 1,000 bpm). Recordings from a Metallura phoebe hummingbird in nocturnal torpor at around in the Andes mountains showed that body temperature fell to 3.3 °C (38 °F), the lowest known level for a bird or non-hibernating mammal. During cold nights at altitude, hummingbirds were in torpor for 2–13 hours depending on species, with cooling occurring at a rate of 0.6 °C per minute and rewarming at 1–1.5 °C per minute. High-altitude Andean hummingbirds also lost body weight in negative proportion to how long the birds were in torpor, losing about 6% of body weight each night.
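As a rough check of these figures, the cooling and rewarming rates reported above can be converted into approximate durations. The sketch below treats the rates as constant (a simplification, since real cooling curves are roughly exponential) and uses only values stated in the text:

```python
# Back-of-envelope check of the torpor cooling and rewarming figures.
# Rates are treated as constant, which simplifies real cooling behavior.

def minutes_at_rate(delta_temp_c: float, rate_c_per_min: float) -> float:
    """Minutes needed to traverse a temperature change at a constant rate."""
    return delta_temp_c / rate_c_per_min

# Cooling from a ~40 °C active body temperature to the 3.3 °C torpor
# minimum, at the reported 0.6 °C per minute:
cooling_min = minutes_at_rate(40.0 - 3.3, 0.6)      # ≈ 61 minutes

# Rewarming over the same span at the reported 1–1.5 °C per minute:
rewarm_fast = minutes_at_rate(40.0 - 3.3, 1.5)      # ≈ 24 minutes
rewarm_slow = minutes_at_rate(40.0 - 3.3, 1.0)      # ≈ 37 minutes

print(f"cooling ≈ {cooling_min:.0f} min, "
      f"rewarming ≈ {rewarm_fast:.0f}–{rewarm_slow:.0f} min")
```

On these assumptions, entering deep torpor takes about an hour, while rewarming is two to three times faster, consistent with the asymmetric rates reported.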
During torpor, to prevent dehydration, the kidney function declines, preserving needed compounds, such as glucose, water, and nutrients. The circulating hormone, corticosterone, is one signal that arouses a hummingbird from torpor.
Use and duration of torpor vary among hummingbird species and are affected by whether a dominant bird defends territory, with nonterritorial subordinate birds having longer periods of torpor. A hummingbird with a higher fat percentage will be less likely to enter a state of torpor compared to one with less fat, as a bird can use the energy from its fat stores. Torpor in hummingbirds appears to be unrelated to nighttime temperature, as it occurs across a wide temperature range, with energy savings of such deep sleep being more related to the photoperiod and duration of torpor.
Lifespan
Hummingbirds have unusually long lifespans for organisms with such rapid metabolisms. Though many die during their first year of life, especially in the vulnerable period between hatching and fledging, those that survive may occasionally live a decade or more. Among the better-known North American species, the typical lifespan is probably 3 to 5 years. For comparison, the smaller shrews, among the smallest of all mammals, seldom live longer than 2 years. The longest recorded lifespan in the wild relates to a female broad-tailed hummingbird that was banded as an adult at least one year old, then recaptured 11 years later, making her at least 12 years old. Other longevity records for banded hummingbirds include an estimated minimum age of 10 years 1 month for a female black-chinned hummingbird similar in size to the broad-tailed hummingbird, and at least 11 years 2 months for a much larger buff-bellied hummingbird.
Natural enemies
Predators
Praying mantises have been observed as predators of hummingbirds. Other predators include domestic cats, dragonflies, frogs, orb-weaver spiders, and other birds, such as the roadrunner.
Parasites
Hummingbirds host a highly specialized lice fauna. Two genera of Ricinid lice, Trochiloecetes and Trochiliphagus, are specialized on them, often infesting 5–15% of their populations. In contrast, two genera of Menoponid lice, Myrsidea and Leremenopon, are extremely rare on them.
Reproduction
Male hummingbirds do not take part in nesting. Most species build a cup-shaped nest on the branch of a tree or shrub. The nest varies in size relative to the particular species – from smaller than half a walnut shell to several centimeters in diameter.
Many hummingbird species use spider silk and lichen to bind the nest material together and secure the structure. The unique properties of the silk allow the nest to expand as the young hummingbirds grow. Two white eggs are laid, which despite being the smallest of all bird eggs, are large relative to the adult hummingbird's size. Incubation lasts 14 to 23 days, depending on the species, ambient temperature, and female attentiveness to the nest. The mother feeds her nestlings on small arthropods and nectar by inserting her bill into the open mouth of a nestling, and then regurgitating the food into its crop. Hummingbirds stay in the nest for 18–22 days, after which they leave the nest to forage on their own, although the mother bird may continue feeding them for another 25 days.
Flight
Hummingbird flight has been studied intensively from an aerodynamic perspective using wind tunnels and high-speed video cameras. Two studies of rufous or Anna's hummingbirds in a wind tunnel used particle image velocimetry techniques to investigate the lift generated on the bird's upstroke and downstroke. The birds produced 75% of their weight support during the downstroke and 25% during the upstroke, with the wings making a "figure 8" motion.
Many earlier studies had assumed that lift was generated equally during the two phases of the wingbeat cycle, as is the case for insects of similar size. This finding shows that hummingbird hovering is similar to, but distinct from, that of hovering insects such as the hawk moth. Further studies using electromyography in hovering rufous hummingbirds showed that muscle strain in the pectoralis major (the principal downstroke muscle) was the lowest yet recorded in a flying bird, and that the primary upstroke muscle (supracoracoideus) is proportionately larger than in other bird species. Presumably because of their rapid wingbeats for flight and hovering, hummingbird wings have adapted to perform without an alula.
The giant hummingbird's wings beat as few as 12 times per second, and the wings of typical hummingbirds beat up to 80 times per second. As air density decreases, for example, at higher altitudes, the amount of power a hummingbird must use to hover increases. Hummingbird species adapted for life at higher altitudes, therefore, have larger wings to help offset these negative effects of low air density on lift generation.
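The link between air density and hovering power can be illustrated with classical actuator-disc ("momentum") theory, the standard aerodynamic idealization of hovering; this is not from the article, and the bird mass, swept area, and air densities below are assumed illustrative values:

```python
import math

# Idealized induced power for hovering, from actuator-disc theory:
#   P = sqrt(W^3 / (2 * rho * A))
# W = weight (N), rho = air density (kg/m^3), A = disc area swept by the
# wings (m^2). Mass and area below are rough, assumed values for a small
# hummingbird; densities are standard sea-level and ~4,000 m figures.

def hover_induced_power(mass_kg: float, disc_area_m2: float, rho: float) -> float:
    weight = mass_kg * 9.81
    return math.sqrt(weight**3 / (2.0 * rho * disc_area_m2))

mass, area = 0.004, 0.006                          # ~4 g bird, ~60 cm^2 swept area
p_sea = hover_induced_power(mass, area, 1.225)     # sea level
p_alt = hover_induced_power(mass, area, 0.82)      # ~4,000 m altitude

print(f"power at altitude ≈ {p_alt / p_sea:.2f}x sea-level power")
```

Because induced power scales as 1/√(ρA), thinner air at altitude raises the power required (by roughly 20% in this sketch), while a larger swept area A reduces it, consistent with high-altitude species evolving larger wings.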
Slow-motion video has shown how hummingbirds deal with rain while flying: to shed water, they shake their heads and bodies, similar to a dog shaking. Further, when raindrops collectively may weigh as much as 38% of the bird's body weight, hummingbirds shift their bodies and tails horizontally, beat their wings faster, and reduce their wings' angle of motion when flying in heavy rain.
Wingbeats and flight stability
The highest recorded wingbeat rate for hummingbirds during hovering is 99.1 per second, as measured for male woodstars (Chaetocercus sp.); males in this genus have been recorded above 100 beats per second during courtship displays. Wingbeat rate increases above the "normal" hovering rate during courtship displays, reaching up to 90 per second for the calliope hummingbird (Selasphorus calliope), some 40% higher than its typical hovering rate.
During turbulent airflow conditions created experimentally in a wind tunnel, hummingbirds exhibit stable head positions and orientation when they hover at a feeder. When wind gusts from the side, hummingbirds compensate by increasing wing-stroke amplitude and stroke plane angle and by varying these parameters asymmetrically between the wings and from one stroke to the next. They also vary the orientation and enlarge the collective surface area of their tail feathers into the shape of a fan. While hovering, the visual system of a hummingbird is able to separate apparent motion caused by the movement of the hummingbird itself from motions caused by external sources, such as an approaching predator. In natural settings full of highly complex background motion, hummingbirds are able to precisely hover in place by rapid coordination of vision with body position.
Feather sounds
Courtship dives
When courting, the male Anna's hummingbird ascends some above a female, before diving at a speed of , equal to 385 body lengths/sec, producing a high-pitched sound near the female at the nadir of the dive. This downward acceleration is the highest reported for any vertebrate undergoing a voluntary aerial maneuver, and the speed relative to body length is the highest known for any vertebrate; for instance, it is about twice the diving speed of peregrine falcons in pursuit of prey. As the bird pulls out of the dive at maximum descent speed, it experiences about 10 g of gravitational force.
The outer tail feathers of male Anna's (Calypte anna) and Selasphorus hummingbirds (e.g., Allen's, calliope) vibrate during courtship display dives, fluttering in the rapid airflow to produce an audible, high-pitched chirp through aeroelastic flutter. Hummingbirds cannot make the courtship dive sound when missing their outer tail feathers, and those same feathers produce the dive sound when tested in a wind tunnel. The bird can sing at the same frequency as the tail-feather chirp, but its small syrinx is not capable of the same volume.
Many other species of hummingbirds also produce sounds with their wings or tails while flying, hovering, or diving, including the wings of the calliope hummingbird, broad-tailed hummingbird, rufous hummingbird, Allen's hummingbird, and the streamertail species, as well as the tail of the Costa's hummingbird and the black-chinned hummingbird, and a number of related species. The harmonics of sounds during courtship dives vary across species of hummingbirds.
Wing feather trill
Male rufous and broad-tailed hummingbirds (genus Selasphorus) produce a distinctive sound during normal flight, a trill that sounds like jingling or a buzzing, shrill whistle. The trill arises from air rushing through slots created by the tapered tips of the ninth and tenth primary wing feathers, creating a sound loud enough to be detected by female or competing male hummingbirds, and by researchers, up to 100 m away.
Behaviorally, the trill serves several purposes: It announces the sex and presence of a male bird; it provides audible aggressive defense of a feeding territory and an intrusion tactic; it enhances communication of a threat; and it favors mate attraction and courtship.
Migration
Relatively few hummingbirds migrate as a percentage of the total number of species; of the roughly 366 known hummingbird species, only 12–15 species migrate annually, particularly those in North America. Most hummingbirds live in the Amazonia-Central America tropical rainforest belt, where seasonal temperature changes and food sources are relatively constant, obviating the need to migrate. As the smallest living birds, hummingbirds are relatively limited at conserving heat energy, and are generally unable to maintain a presence in higher latitudes during winter months, unless the specific location has a large food supply throughout the year, particularly access to flower nectar. Other migration factors are seasonal fluctuation of food, climate, competition for resources, predators, and inherent signals.
Most North American hummingbirds migrate southward in fall to spend winter in Mexico, the Caribbean Islands, or Central America. A few species are year-round residents of Florida, California, and the southwestern desert regions of the US. Among these are Anna's hummingbird, a common resident from southern Arizona and inland California, and the buff-bellied hummingbird, a winter resident from Florida across the Gulf Coast to South Texas. Ruby-throated hummingbirds are common along the Atlantic flyway, and migrate in summer from as far north as Atlantic Canada, returning to Mexico, South America, southern Texas, and Florida to winter. During winter in southern Louisiana, black-chinned, buff-bellied, calliope, Allen's, Anna's, ruby-throated, rufous, broad-tailed, and broad-billed hummingbirds are present.
The rufous hummingbird breeds farther north than any other hummingbird species, spending summers along coastal British Columbia and Alaska, and wintering in the southwestern United States and Mexico, with some birds distributed along the coasts of the subtropical Gulf of Mexico and Florida. By migrating in spring as far north as the Yukon or southern Alaska, it also undertakes one of the most extensive migrations of any hummingbird, and must tolerate occasional temperatures below freezing in its breeding territory; this cold hardiness enables it to survive such conditions, provided that adequate shelter and food are available.
As measured by displacement relative to body size, the rufous hummingbird makes perhaps the longest migratory journey of any bird in the world. At just over long, rufous hummingbirds travel one-way from Alaska to Mexico in late summer, a distance equal to 78,470,000 body lengths, then make the return journey the following spring. By comparison, the -long Arctic tern makes a one-way flight of about , or 51,430,000 body lengths, just 65% of the body displacement of the rufous hummingbird's migration.
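The comparison above can be verified directly from the two body-length displacement counts given in the text:

```python
# Body-length displacement figures quoted in the text
rufous_body_lengths = 78_470_000   # rufous hummingbird, Alaska to Mexico one-way
tern_body_lengths = 51_430_000     # Arctic tern one-way flight

ratio = tern_body_lengths / rufous_body_lengths
print(f"Arctic tern displacement is {ratio:.1%} of the rufous hummingbird's")
# ≈ 65.5%, consistent with the ~65% figure cited above
```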
The northward migration of rufous hummingbirds occurs along the Pacific flyway, and may be time-coordinated with flower and tree-leaf emergence in early spring, and also with availability of insects as food. Arrival at breeding grounds before nectar availability from mature flowers may jeopardize breeding opportunities.
Feeding
All hummingbirds are overwhelmingly nectarivorous, being by far the most specialized such feeders among birds, as well as the only birds for whom nectar typically comprises the vast majority of energy intake. Hummingbirds exhibit numerous and extensive adaptations to nectarivory, including long, probing bills and tongues which rapidly take up fluids. Hummingbirds also possess the most sophisticated hovering flight of all birds, a necessity for rapidly visiting many flowers without perching. Their intestines are capable of extracting over 99% of the glucose from nectar feedings within minutes, owing to high densities of glucose transporters (the highest known among vertebrates).
As among the most important vertebrate pollinators, hummingbirds have coevolved in complex ways with flowering plants; thousands of New World species have evolved to be pollinated exclusively by hummingbirds, even barring access to insect pollinators. In some plants these mechanisms, which include highly modified corollas, even render their nectaries inaccessible to all but certain hummingbirds, i.e., those possessing appropriate beak morphologies (although some hummingbirds rob nectar to overcome this). Bird-pollinated plants (also termed "ornithophilous") were formerly thought to exemplify very close mutualisms, with specific flowering plants coevolving alongside specific hummingbirds in mutualistic pairings. Both ornithophilous plants and hummingbirds are now known to not be nearly selective enough for this to be true. Less accessible ornithophiles (for example, those requiring long bills) still rely on multiple hummingbird species for pollination. More importantly, hummingbirds tend not to be especially selective nectar-feeders, even regularly visiting non-ornithophilous plants, as well as ornithophiles which appear poorly suited for feeding by their species. Feeding efficiency is optimized, however, when birds feed on flowers better suited to their bill morphologies.
Although they may not be one-to-one, there are still marked overall preferences for certain genera, families, and orders of flowering plants by hummingbirds in general, as well as by certain species of hummingbird. Flowers which are attractive to hummingbirds are often colorful (particularly red), open diurnally, and produce nectar with a high sucrose content; in ornithophilous plants, the corollas are often elongated and tubular, and they may be scentless (several of these are adaptations discouraging insect visitation). Some common genera consumed by many species include Castilleja, Centropogon, Costus, Delphinium, Heliconia, Hibiscus, Inga, and Mimulus; some of these are primarily insect-pollinated. Three Californian species were found to feed from 62 plant families in 30 orders, with the most frequently occurring orders being Apiales, Fabales, Lamiales, and Rosales. A hummingbird may have to visit one or two thousand flowers daily to meet energy demands.
Although a high-quality source of energy, nectar is deficient in many macro- and micronutrients; it tends to be low in lipids, and although it may contain trace quantities of amino acids, some essential acids are severely or entirely lacking. Though hummingbird protein requirements appear to be quite small, at 1.5% of the diet, nectar is still an inadequate source; most if not all hummingbirds therefore supplement their diet with the consumption of invertebrates. Insectivory is not thought to be calorically important; nonetheless, regular consumption of arthropods is considered crucial for birds to thrive. In fact, it has been suggested that the majority of non-caloric nutritional needs of hummingbirds are met by insectivory, but nectars do contain appreciable quantities of certain vitamins and minerals. (Here, "insectivory" refers to the consumption of any arthropod, not exclusively insects).
Though hummingbirds are not as insectivorous as once believed, and far less so than most of their relatives and ancestors among the Strisores (e.g., swifts), insectivory is probably of regular importance to most species. About 95% of individuals from 140 species in one study showed evidence of arthropod consumption, while another study found arthropod remains in 79% of over 1600 birds from sites across South and Central America. Some species have even been recorded to be largely or entirely insectivorous for periods of time, particularly when nectar sources are scarce, and possibly, for some species, with seasonal regularity in areas with a wet season. Observations of seasonal, near-exclusive insectivory have been made for blue-throated hummingbirds, as well as swallow-tailed hummingbirds in an urban park in Brazil. In Arizona, when nearby nectar sources were seemingly absent, a nesting female broad-tailed hummingbird was recorded feeding only on arthropods for two weeks. Other studies report 70–100% of feeding time devoted to arthropods; these accounts suggest a degree of adaptability, particularly when appropriate nectar sources are unavailable, although nectarivory always predominates when flowers are abundant (e.g., in non-seasonal tropical habitats). In addition, the aforementioned Arizona study only surveyed a small portion of the study area, and mostly did not observe the bird while she was off the nest. Similar concerns have been raised for other reports, leading to skepticism over whether hummingbirds can in fact subsist without nectar for extended periods at all.
Hummingbirds exhibit various feeding strategies and some morphological adaptations for insectivory. Typically, they hawk for small flying insects, but also glean spiders from their webs. Bill shape may play a role, as hummingbirds with longer or more curved bills may be unable to hawk efficiently, and so rely more heavily on gleaning spiders. Regardless of bill shape, spiders are a common prey item; other very common prey items include flies, especially those of the family Chironomidae, as well as various Hymenopterans (such as wasps and ants) and Hemipterans. The aforementioned California study found three species to consume invertebrates from 72 families in 15 orders, with flies alone occurring in over 90% of samples; the three species exhibited high dietary overlap, with little evidence for niche partitioning. This suggests that prey availability is not a limiting resource for hummingbirds.
Estimates of overall dietary makeup for hummingbirds vary, but insectivory is typically cited as comprising 5–15% of feeding time budgets; a figure of 2–12% is also cited. In one study, 84% of feeding time was allotted to nectar feeding if breeding females were included, and 89% otherwise; 86% of total feeding records were on nectar. It has been estimated, based on time budgets and other data, that the hummingbird diet is generally about 90% nectar and 10% arthropods by mass. As their nestlings consume only arthropods, and possibly because their own requirements increase, breeding females spend 3–4 times as long as males foraging for arthropods, although 65–70% of their feeding time is still devoted to nectar. Estimates for overall insectivory can be as low as <5%. Such low numbers have been documented for some species; insects comprised 3% of foraging attempts for Peruvian shining sunbeams in one study, while the purple-throated carib has been reported to spend <1% of time consuming insects in Dominica. Both species also have more typical numbers recorded elsewhere, however. Overall, for most hummingbirds, insectivory is an essential and regular, albeit minor, component of the diet, while nectar is the primary feeding focus when conditions allow. It has been shown that floral abundance (but not floral diversity) influences hummingbird diversity, but that arthropod abundance does not (i.e., that it is non-limiting).
Hummingbirds do not spend all day flying, as the energy cost would be prohibitive; the majority of their activity consists simply of sitting or perching. Hummingbirds eat many small meals and consume around half their weight in nectar (twice their weight in nectar, if the nectar is 25% sugar) each day. Hummingbirds digest their food rapidly due to their small size and high metabolism; a mean retention time less than an hour has been reported. Hummingbirds spend an average of 20% of their time feeding and 75–80% sitting and digesting.
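The intake figures above reduce to simple arithmetic. This sketch (the function name and the 4 g bird are illustrative) assumes, per the text, that daily sugar needs are roughly half of body mass:

```python
def daily_nectar_g(body_mass_g, sugar_fraction):
    """Nectar mass (g) needed per day, assuming the daily sugar
    requirement is roughly half the bird's body mass (per the text)."""
    sugar_needed_g = 0.5 * body_mass_g
    return sugar_needed_g / sugar_fraction

# A hypothetical 4 g hummingbird drinking 25%-sugar nectar:
print(daily_nectar_g(4.0, 0.25))  # 8.0 g, i.e. twice its body mass
```

Consistent with the text: at 25% sugar, the bird must drink about twice its own weight in nectar each day.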
Because their high metabolism makes them vulnerable to starvation, hummingbirds are highly attuned to food sources. Some species, including many found in North America, are territorial and try to guard food sources (such as a feeder) against other hummingbirds, attempting to ensure a future food supply. Additionally, hummingbirds have an enlarged hippocampus, a brain region facilitating spatial memory used to map flowers previously visited during nectar foraging.
Beak specializations
The shapes of hummingbird beaks (also called bills) vary widely as an adaptation for specialized feeding, with some 7000 flowering plants pollinated by hummingbird nectar feeding. Hummingbird beak lengths vary greatly among species. When catching insects in flight, a hummingbird's jaw flexes downward to widen the beak for successful capture.
The extreme curved beaks of sicklebills are adapted for extracting nectar from the curved corolla tubes of Centropogon flowers. Some species, such as hermits (Phaethornis spp.), have long beaks that enable insertion deeply into flowers with long corolla tubes. Thornbills have short, sharp beaks adapted for feeding from flowers with short corolla tubes and piercing the bases of longer ones. The beak of the fiery-tailed awlbill has an upturned tip adapted for feeding on nectar from tubular flowers while hovering.
Perception of sweet nectar
Perception of sweetness in nectar evolved in hummingbirds during their genetic divergence from insectivorous swifts, their closest bird relatives. Although the only known sweet-taste receptor, called T1R2, is absent in birds, receptor expression studies showed that hummingbirds adapted the T1R1–T1R3 receptor, which in humans detects umami, into a carbohydrate receptor, essentially repurposing it to function as a nectar sweetness receptor. This adaptation for taste enabled hummingbirds to detect and exploit sweet nectar as an energy source, facilitating their distribution across geographical regions where nectar-bearing flowers are available.
Tongue as a micropump
Hummingbirds drink with their long tongues by rapidly lapping nectar. Their tongues have semicircular grooves running down their length that facilitate nectar consumption via rapid pumping in and out of the nectar. Capillary action was long believed to draw nectar into these grooves, but high-speed photography revealed a different mechanism: the forked tongue is compressed until it reaches nectar, then springs open; the grooves open down their sides as the tongue enters the nectar and close around it, trapping the liquid so it can be pulled back into the beak. Each lick takes about 14 milliseconds, at a rate of up to 20 licks per second. Tongue flexibility thus enables accessing, transporting, and unloading nectar via pump action, not by a capillary siphon as once believed.
Feeders and artificial nectar
In the wild, hummingbirds visit flowers for food, extracting nectar, which is 55% sucrose, 24% glucose, and 21% fructose on a dry-matter basis. Hummingbirds also take sugar-water from bird feeders, which allow people to observe and enjoy hummingbirds up close while providing the birds with a reliable source of energy, especially when flower blossoms are less abundant. A negative aspect of artificial feeders, however, is that the birds may seek less flower nectar for food, and so may reduce the amount of pollination their feeding naturally provides.
A common recipe for hummingbird feeders uses white granulated sugar at a 20% concentration, although hummingbirds will defend feeders more aggressively when the sugar content is 35%, indicating a preference for nectar with higher sugar content. Organic and "raw" sugars contain iron, which can be harmful, and brown sugar, agave syrup, molasses, and artificial sweeteners also should not be used. Honey is made by bees from the nectar of flowers, but it is not good to use in feeders because, when diluted with water, microorganisms easily grow in it, causing it to spoil rapidly.
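The 20% recipe is simple mass-fraction arithmetic. This sketch (function name is illustrative) computes how much sugar to add to a given amount of water for a target concentration:

```python
def sugar_for_mix(water_g, target_fraction):
    """Grams of sugar to add to water_g grams of water so that
    sugar / (sugar + water) equals target_fraction (mass fraction)."""
    return water_g * target_fraction / (1.0 - target_fraction)

# 20% solution from 400 g (about 400 mL) of water:
print(round(sugar_for_mix(400.0, 0.20)))  # 100 -> the familiar 1:4 sugar-to-water recipe
```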
Red food dye was once thought to be a favorable ingredient for the nectar in home feeders, but it is unnecessary. Commercial products sold as "instant nectar" or "hummingbird food" may also contain preservatives or artificial flavors, as well as dyes, which are unnecessary and potentially harmful. Although some commercial products contain small amounts of nutritional additives, hummingbirds obtain all necessary nutrients from the insects they eat, rendering added nutrients unnecessary.
Visual cues of foraging
Hummingbirds have exceptional visual acuity, enabling them to discriminate among food sources while foraging. Although hummingbirds are thought to be attracted to color while seeking food, such as red flowers or artificial feeders, experiments indicate that location and flower nectar quality are the most important "beacons" for foraging. Rather than depending on visual cues of flower color to locate nectar-rich flowers, hummingbirds use surrounding landmarks to find the nectar reward.
In at least one hummingbird species – the green-backed firecrown (Sephanoides sephaniodes) – flower colors preferred are in the red-green wavelength for the bird's visual system, providing a higher contrast than for other flower colors. Further, the crown plumage of firecrown males is highly iridescent in the red wavelength range (peak at 650 nanometers), possibly providing a competitive advantage of dominance when foraging among other hummingbird species with less colorful plumage. The ability to discriminate colors of flowers and plumage is enabled by a visual system having four single cone cells and a double cone screened by photoreceptor oil droplets which enhance color discrimination.
Olfaction
While hummingbirds rely primarily on vision and hearing to assess competition from bird and insect foragers near food sources, they may also be able to detect by smell the presence in nectar of insect defensive chemicals (such as formic acid) and aggregation pheromones of foraging ants, which discourage feeding.
In myth and culture
Aztecs wore hummingbird talismans, both artistic representations of hummingbirds and fetishes made from actual hummingbird parts, as emblems of vigor, energy, and a propensity for work; the birds' sharp beaks symbolically evoked instruments of weaponry, bloodletting, penetration, and intimacy. Hummingbird talismans were prized as drawing sexual potency, energy, vigor, and skill at arms and warfare to the wearer. The Aztec god of war Huitzilopochtli is often depicted in art as a hummingbird. Aztecs believed that fallen warriors would be reincarnated as hummingbirds. The Nahuatl word huitzil translates to hummingbird.
One of the Nazca Lines depicts a hummingbird.
Trinidad and Tobago, known as "The land of the hummingbird", displays a hummingbird on its coat of arms, 1-cent coin, and livery on its national airline, Caribbean Airlines.
The name of Mt. Umunhum in the Santa Cruz Mountains of Northern California is Ohlone for "resting place of the hummingbird".
The Gibson Hummingbird, an acoustic guitar model by the major guitar manufacturer Gibson Brands, incorporates a pickguard decorated with a hummingbird.
During the costume competition of the Miss Universe 2016 beauty pageant, Miss Ecuador, Connie Jiménez, wore a costume inspired by hummingbird wing feathers.
Van der Waals force

In molecular physics and chemistry, the van der Waals force (sometimes van der Waals' force) is a distance-dependent interaction between atoms or molecules. Unlike ionic or covalent bonds, these attractions do not result from a chemical electronic bond; they are comparatively weak and therefore more susceptible to disturbance. The van der Waals force quickly vanishes at longer distances between interacting molecules.
Named after Dutch physicist Johannes Diderik van der Waals, the van der Waals force plays a fundamental role in fields as diverse as supramolecular chemistry, structural biology, polymer science, nanotechnology, surface science, and condensed matter physics. It also underlies many properties of organic compounds and molecular solids, including their solubility in polar and non-polar media.
If no other force is present, the distance between atoms at which the force becomes repulsive rather than attractive as the atoms approach one another is called the van der Waals contact distance; this phenomenon results from the mutual repulsion between the atoms' electron clouds.
The van der Waals forces are usually described as a combination of the London dispersion forces between "instantaneously induced dipoles", Debye forces between permanent dipoles and induced dipoles, and the Keesom force between permanent molecular dipoles whose rotational orientations are dynamically averaged over time.
Definition
Van der Waals forces include attractions and repulsions between atoms, molecules, and surfaces, as well as other intermolecular forces. They differ from covalent and ionic bonding in that they are caused by correlations in the fluctuating polarizations of nearby particles (a consequence of quantum dynamics).
The force results from a transient shift in electron density. Specifically, the electron density may temporarily shift to be greater on one side of the nucleus. This shift generates a transient charge which a nearby atom can be attracted to or repelled by. The force is repulsive at very short distances, reaches zero at an equilibrium distance characteristic for each atom, or molecule, and becomes attractive for distances larger than the equilibrium distance. For individual atoms, the equilibrium distance is between 0.3 nm and 0.5 nm, depending on the atomic-specific diameter. When the interatomic distance is greater than 1.0 nm the force is not strong enough to be easily observed as it decreases as a function of distance r approximately with the 7th power (~r−7).
Van der Waals forces are often among the weakest chemical forces. For example, the pairwise attractive van der Waals interaction energy between H (hydrogen) atoms in different H2 molecules equals 0.06 kJ/mol (0.6 meV) and the pairwise attractive interaction energy between O (oxygen) atoms in different O2 molecules equals 0.44 kJ/mol (4.6 meV). The corresponding vaporization energies of H2 and O2 molecular liquids, which result as a sum of all van der Waals interactions per molecule in the molecular liquids, amount to 0.90 kJ/mol (9.3 meV) and 6.82 kJ/mol (70.7 meV), respectively, and thus approximately 15 times the value of the individual pairwise interatomic interactions (excluding covalent bonds).
The strength of van der Waals bonds increases with higher polarizability of the participating atoms. For example, the pairwise van der Waals interaction energy for more polarizable atoms such as S (sulfur) atoms in H2S and sulfides exceeds 1 kJ/mol (10 meV), and the pairwise interaction energy between even larger, more polarizable Xe (xenon) atoms is 2.35 kJ/mol (24.3 meV). These van der Waals interactions are up to 40 times stronger than in H2, which has only one valence electron, and they are still not strong enough to achieve an aggregate state other than gas for Xe under standard conditions. The interactions between atoms in metals can also be effectively described as van der Waals interactions and account for the observed solid aggregate state with bonding strengths comparable to covalent and ionic interactions. The strength of pairwise van der Waals type interactions is on the order of 12 kJ/mol (120 meV) for low-melting Pb (lead) and on the order of 32 kJ/mol (330 meV) for high-melting Pt (platinum), which is about one order of magnitude stronger than in Xe due to the presence of a highly polarizable free electron gas. Accordingly, van der Waals forces can range from weak to strong interactions, and support integral structural loads when multitudes of such interactions are present.
Force contributions
More broadly, intermolecular forces have several possible contributions. They are ordered from strongest to weakest:
1. A repulsive component resulting from the Pauli exclusion principle that prevents close contact of atoms, or the collapse of molecules.
2. Attractive or repulsive electrostatic interactions between permanent charges (in the case of molecular ions), dipoles (in the case of molecules without an inversion centre), quadrupoles (all molecules with symmetry lower than cubic), and in general between permanent multipoles. These interactions also include hydrogen bonds, cation-pi, and pi-stacking interactions. Orientation-averaged contributions from electrostatic interactions are sometimes called the Keesom interaction or Keesom force after Willem Hendrik Keesom.
3. Induction (also known as polarization), which is the attractive interaction between a permanent multipole on one molecule and an induced multipole on another. This interaction is sometimes called the Debye force after Peter J. W. Debye. The interactions (2) and (3) are labelled polar interactions.
4. Dispersion (usually named London dispersion interactions after Fritz London), which is the attractive interaction between any pair of molecules, including non-polar atoms, arising from the interactions of instantaneous multipoles.
When to apply the term "van der Waals" force depends on the text. The broadest definitions include all intermolecular forces which are electrostatic in origin, namely (2), (3) and (4). Some authors, whether or not they consider other forces to be of van der Waals type, focus on (3) and (4) as these are the components which act over the longest range.
All intermolecular/van der Waals forces are anisotropic (except those between two noble gas atoms), which means that they depend on the relative orientation of the molecules. The induction and dispersion interactions are always attractive, irrespective of orientation, but the electrostatic interaction changes sign upon rotation of the molecules. That is, the electrostatic force can be attractive or repulsive, depending on the mutual orientation of the molecules. When molecules are in thermal motion, as they are in the gas and liquid phase, the electrostatic force is averaged out to a large extent because the molecules thermally rotate and thus probe both repulsive and attractive parts of the electrostatic force. Random thermal motion can disrupt or overcome the electrostatic component of the van der Waals force but the averaging effect is much less pronounced for the attractive induction and dispersion forces.
The Lennard-Jones potential is often used as an approximate model for the isotropic part of a total (repulsion plus attraction) van der Waals force as a function of distance.
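As a concrete illustration, the 12-6 Lennard-Jones potential in reduced units (ε = σ = 1; the parameter values here are illustrative, not tied to any particular pair of atoms) reproduces the qualitative behavior described above: steep repulsion at short range, a minimum at the equilibrium distance r = 2^(1/6) σ, and a weak attractive tail that decays as r^−6 (force ~r^−7):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6)          # equilibrium distance, where the force is zero
print(lennard_jones(0.9))     # positive: Pauli-like repulsion at short range
print(lennard_jones(r_min))   # ~ -1.0: well depth equals -epsilon
print(lennard_jones(3.0))     # small negative: weak long-range attraction
```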
Van der Waals forces are responsible for certain cases of pressure broadening (van der Waals broadening) of spectral lines and the formation of van der Waals molecules. The London–van der Waals forces are related to the Casimir effect for dielectric media, the former being the microscopic description of the latter bulk property. The first detailed calculations of this were done in 1955 by E. M. Lifshitz. A more general theory of van der Waals forces has also been developed.
The main characteristics of van der Waals forces are:
They are weaker than normal covalent and ionic bonds.
The van der Waals forces are additive in nature, consisting of several individual interactions, and cannot be saturated.
They have no directional characteristic.
They are all short-range forces and hence only interactions between the nearest particles need to be considered (instead of all the particles). Van der Waals attraction is greater if the molecules are closer.
Van der Waals forces are independent of temperature except for dipole-dipole interactions.
In low molecular weight alcohols, the hydrogen-bonding properties of their polar hydroxyl group dominate other weaker van der Waals interactions. In higher molecular weight alcohols, the properties of the nonpolar hydrocarbon chain(s) dominate and determine their solubility.
Van der Waals forces are also responsible for the weak hydrogen bond interactions between unpolarized dipoles particularly in acid-base aqueous solution and between biological molecules.
London dispersion force
London dispersion forces, named after the German-American physicist Fritz London, are weak intermolecular forces that arise from the interactive forces between instantaneous multipoles in molecules without permanent multipole moments. In and between organic molecules, the multitude of contacts can lead to a larger contribution of dispersive attraction, particularly in the presence of heteroatoms. London dispersion forces are also known as 'dispersion forces', 'London forces', or 'instantaneous dipole–induced dipole forces'. The strength of London dispersion forces is proportional to the polarizability of the molecule, which in turn depends on the total number of electrons and the area over which they are spread. Hydrocarbons display small dispersive contributions, while the presence of heteroatoms leads to increased LD forces as a function of their polarizability, e.g. in the sequence RI > RBr > RCl > RF. In the absence of solvents, weakly polarizable hydrocarbons form crystals due to dispersive forces; their sublimation heat is a measure of the dispersive interaction.
Van der Waals forces between macroscopic objects
For macroscopic bodies with known volumes and numbers of atoms or molecules per unit volume, the total van der Waals force is often computed based on the "microscopic theory" as the sum over all interacting pairs. It is necessary to integrate over the total volume of the object, which makes the calculation dependent on the objects' shapes. For example, the van der Waals interaction energy between spherical bodies of radii R1 and R2 and with smooth surfaces was approximated in 1937 by Hamaker (using London's famous 1937 equation for the dispersion interaction energy between atoms/molecules as the starting point) by:

E(z) = −(A/6) [ 2R1R2/(z² − (R1 + R2)²) + 2R1R2/(z² − (R1 − R2)²) + ln((z² − (R1 + R2)²)/(z² − (R1 − R2)²)) ]   (1)
where A is the Hamaker coefficient, which is a constant (~10−19 − 10−20 J) that depends on the material properties (it can be positive or negative in sign depending on the intervening medium), and z is the center-to-center distance; i.e., the sum of R1, R2, and r (the distance between the surfaces): z = R1 + R2 + r.
The van der Waals force between two spheres of constant radii (R1 and R2 are treated as parameters) is then a function of separation, since the force on an object is the negative of the derivative of the potential energy function, F(z) = −dE/dz.
In the limit of close approach, the spheres are sufficiently large compared to the distance between them, i.e. r = z − (R1 + R2) ≪ R1 or R2, so that equation (1) for the potential energy function simplifies to:

E = −A R1 R2 / (6 (R1 + R2) r)

with the force:

F = −A R1 R2 / (6 (R1 + R2) r²)
The van der Waals forces between objects with other geometries using the Hamaker model have been published in the literature.
From the expression above, it is seen that the van der Waals force decreases with decreasing size of bodies (R). Nevertheless, the strength of inertial forces, such as gravity and drag/lift, decreases to a greater extent. Consequently, the van der Waals forces become dominant for collections of very small particles such as very fine-grained dry powders (where there are no capillary forces present) even though the force of attraction is smaller in magnitude than it is for larger particles of the same substance. Such powders are said to be cohesive, meaning they are not as easily fluidized or pneumatically conveyed as their more coarse-grained counterparts. Generally, free-flow occurs with particles greater than about 250 μm.
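A rough sense of scale can be had from the close-approach Hamaker expression F = A·R1·R2 / (6 (R1 + R2) r²) (valid for separations much smaller than the radii), evaluated with illustrative values:

```python
def vdw_force_spheres(A, R1, R2, r):
    """Close-approach van der Waals force (N) between two spheres in the
    Hamaker model: F = A*R1*R2 / (6*(R1 + R2)*r**2), valid for r << R1, R2."""
    return A * R1 * R2 / (6.0 * (R1 + R2) * r ** 2)

# Two 1-micron-radius particles, A = 1e-19 J, separated by 1 nm (illustrative values):
F = vdw_force_spheres(1e-19, 1e-6, 1e-6, 1e-9)
print(F)  # ~8.3e-9 N
```

For micron-sized grains this attraction far exceeds the particle's own weight, which is why fine dry powders are cohesive.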
The van der Waals force of adhesion is also dependent on the surface topography. If there are surface asperities, or protuberances, that result in a greater total area of contact between two particles or between a particle and a wall, this increases the van der Waals force of attraction as well as the tendency for mechanical interlocking.
The microscopic theory assumes pairwise additivity. It neglects many-body interactions and retardation. A more rigorous approach accounting for these effects, called the "macroscopic theory", was developed by Lifshitz in 1956. Langbein derived a much more cumbersome "exact" expression in 1970 for spherical bodies within the framework of the Lifshitz theory while a simpler macroscopic model approximation had been made by Derjaguin as early as 1934. Expressions for the van der Waals forces for many different geometries using the Lifshitz theory have likewise been published.
Use by geckos and arthropods
The ability of geckos – which can hang on a glass surface using only one toe – to climb on sheer surfaces has been for many years mainly attributed to the van der Waals forces between these surfaces and the spatulae, or microscopic projections, which cover the hair-like setae found on their footpads.
There were efforts in 2008 to create a dry glue that exploits the effect, and an adhesive tape based on similar grounds (i.e. on van der Waals forces) was successfully created in 2011. Also in 2011, a paper was published relating the effect to both velcro-like hairs and the presence of lipids in gecko footprints.
A later study suggested that capillary adhesion might play a role, but that hypothesis has been rejected by more recent studies.
A 2014 study has shown that gecko adhesion to smooth Teflon and polydimethylsiloxane surfaces is mainly determined by electrostatic interaction (caused by contact electrification), not van der Waals or capillary forces.
Among the arthropods, some spiders have similar setae on their scopulae or scopula pads, enabling them to climb or hang upside-down from extremely smooth surfaces such as glass or porcelain.
Brine

Brine (or briny water) is a high-concentration solution of salt (typically sodium chloride or calcium chloride) in water. In diverse contexts, brine may refer to salt solutions ranging from about 3.5% (a typical concentration of seawater, on the lower end of that of solutions used for brining foods) up to about 26% (a typical saturated solution, depending on temperature). Brine forms naturally due to evaporation of ground saline water but it is also generated in the mining of sodium chloride. Brine is used for food processing and cooking (pickling and brining), for de-icing of roads and other structures, and in a number of technological processes. It is also a by-product of many industrial processes, such as desalination, so it requires wastewater treatment for proper disposal or further utilization (fresh water recovery).
In nature
Brines are produced in multiple ways in nature. Modification of seawater via evaporation results in the concentration of salts in the residual fluid; as different dissolved ions reach the saturation states of minerals, typically gypsum and halite, a characteristic geologic deposit called an evaporite is formed. Dissolution of such salt deposits into water can produce brines as well. As seawater freezes, dissolved ions tend to remain in solution, resulting in a fluid termed a cryogenic brine. At the time of formation, these cryogenic brines are by definition cooler than the freezing temperature of seawater and can produce a feature called a brinicle, where cool brines descend, freezing the surrounding seawater.
Brines cropping out at the surface as saltwater springs are known as "licks" or "salines". The content of dissolved solids in groundwater varies widely from one location to another on Earth, both in terms of specific constituents (e.g. halite, anhydrite, carbonates, gypsum, fluoride salts, organic halides, and sulfate salts) and in terms of concentration level. Using one of several classifications of groundwater based on total dissolved solids (TDS), brine is water containing more than 100,000 mg/L TDS. Brine is commonly produced during well completion operations, particularly after the hydraulic fracturing of a well.
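The TDS-based classification can be expressed as a small lookup. The >100,000 mg/L cutoff for brine follows the text; the lower class boundaries follow one common (e.g. USGS-style) scheme and should be treated as an assumption:

```python
def classify_by_tds(tds_mg_per_l):
    """Classify water by total dissolved solids (mg/L). The brine cutoff
    (>100,000 mg/L) is from the text; lower boundaries are one common scheme."""
    if tds_mg_per_l < 1_000:
        return "fresh"
    if tds_mg_per_l < 10_000:
        return "brackish"
    if tds_mg_per_l <= 100_000:
        return "saline"
    return "brine"

print(classify_by_tds(35_000))    # seawater-like water: saline
print(classify_by_tds(150_000))   # brine
```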
Uses
Culinary
Brine is a common agent in food processing and cooking. Brining is used to preserve or season food, and can be applied to vegetables, cheeses, fruit, and some fish in a process known as pickling. Meat and fish are typically steeped in brine for shorter periods of time, as a form of marination, to enhance tenderness and flavor or to extend shelf life.
Chlorine production
Elemental chlorine can be produced by electrolysis of brine (NaCl solution). This process also produces sodium hydroxide (NaOH) and hydrogen gas (H2). The reaction equations are as follows:
Cathode: 2 H2O + 2 e− → H2 + 2 OH−
Anode: 2 Cl− → Cl2 + 2 e−
Overall process: 2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH
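The overall stoichiometry of the chloralkali reaction, 2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH, fixes the theoretical product yields per unit of salt. This sketch (100% conversion assumed, standard molar masses) illustrates the mass balance:

```python
# Standard molar masses in g/mol
M_NACL, M_CL2, M_H2, M_NAOH = 58.44, 70.90, 2.016, 40.00

def chloralkali_products(nacl_kg):
    """Theoretical product masses (kg) from 2 NaCl + 2 H2O -> Cl2 + H2 + 2 NaOH,
    assuming complete conversion of the salt."""
    mol_nacl = nacl_kg * 1000.0 / M_NACL
    return {
        "Cl2": mol_nacl / 2 * M_CL2 / 1000.0,
        "H2": mol_nacl / 2 * M_H2 / 1000.0,
        "NaOH": mol_nacl * M_NAOH / 1000.0,
    }

products = chloralkali_products(1.0)
print(products)  # per kg NaCl: ~0.61 kg Cl2, ~0.017 kg H2, ~0.68 kg NaOH
```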
Refrigerating fluid
Brine is used as a secondary fluid in large refrigeration installations for the transport of thermal energy. Most commonly used brines are based on inexpensive calcium chloride and sodium chloride. It is used because the addition of salt to water lowers the freezing temperature of the solution and the heat transport efficiency can be greatly enhanced for the comparatively low cost of the material. The lowest freezing point obtainable for NaCl brine is at the concentration of 23.3% NaCl by weight. This is called the eutectic point.
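The freezing-point lowering can be estimated with the ideal (dilute-solution) cryoscopic relation ΔT = i·Kf·m, with Kf ≈ 1.86 K·kg/mol for water and van 't Hoff factor i ≈ 2 for fully dissociated NaCl. This is only a rough sketch under an ideality assumption; real brines deviate at high concentration, and the estimate is not valid near the 23.3 wt% eutectic:

```python
def naive_freezing_point_c(wt_frac_nacl, kf=1.86, i=2):
    """Ideal-solution estimate of the freezing point (deg C) of NaCl brine
    via dT = i*kf*m. Rough guide only; inaccurate at high concentrations."""
    m_nacl = 58.44  # molar mass of NaCl, g/mol
    # molality = moles of salt per kg of water
    molality = wt_frac_nacl * 1000.0 / ((1.0 - wt_frac_nacl) * m_nacl)
    return -i * kf * molality

print(round(naive_freezing_point_c(0.10), 1))  # -7.1 (measured value is about -6.6 C)
```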
Because of their corrosive properties, salt-based brines have been replaced by organic liquids such as ethylene glycol.
Sodium chloride brine spray is used on some fishing vessels to freeze fish. Because brine freezing operates at a higher temperature than air-blast freezing, the system efficiency can be higher. High-value fish are usually frozen at much lower temperatures, below the practical temperature limit for brine.
Water softening and purification
Brine is an auxiliary agent in water softening and water purification systems involving ion-exchange technology. The most common example is household dishwashers, which use sodium chloride in the form of dishwasher salt. Brine is not involved in the purification process itself but is used to regenerate the ion-exchange resin on a cyclical basis. The water being treated flows through the resin container until the resin is considered exhausted and the water is purified to the desired level. The resin is then regenerated by sequentially backwashing the resin bed to remove accumulated solids, flushing removed ions from the resin with a concentrated solution of replacement ions, and rinsing the flushing solution from the resin. After treatment, the ion-exchange resin beads, saturated with calcium and magnesium ions from the treated water, are regenerated by soaking in brine containing 6–12% NaCl; the sodium ions from the brine replace the calcium and magnesium ions on the beads.
De-icing
In lower temperatures, a brine solution can be used to de-ice or reduce freezing temperatures on roads.
Quenching
Quenching is a heat-treatment process when forging metals such as steel. A brine solution, along with oil and other substances, is commonly used to harden steel. When brine is used, there is an enhanced uniformity of the cooling process and heat transfer.
Desalination
The desalination process consists of the separation of salts from an aqueous solution to obtain fresh water from a source of seawater or brackish water; and in turn, a discharge is generated, commonly called brine.
Characteristics
The characteristics of the discharge depend on factors such as the desalination technology used, the salinity and quality of the feed water, and the environmental and oceanographic conditions at the site. The discharge of seawater reverse osmosis (SWRO) plants is mainly characterized by a salinity that can, in the worst case, be double that of the seawater used; unlike the discharge of thermal desalination plants, it has practically the same temperature and dissolved oxygen content as the source seawater.
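The relation between plant recovery and discharge salinity follows from a simple salt mass balance, assuming (for illustration) that the membranes reject essentially all salt: if a fraction r of the feed becomes permeate, the salt is concentrated into the remaining fraction 1 − r.

```python
# Salinity concentration factor of SWRO brine from a simple salt mass balance.
# Assumes (for illustration) complete salt rejection by the membranes.

def brine_salinity(feed_salinity_g_per_kg, recovery):
    """Discharge salinity when a fraction `recovery` of the feed becomes permeate."""
    if not 0 <= recovery < 1:
        raise ValueError("recovery must be in [0, 1)")
    return feed_salinity_g_per_kg / (1.0 - recovery)

# At 50% recovery the brine is exactly twice as salty as the feed,
# matching the worst case described above.
print(brine_salinity(35.0, 0.5))  # prints 70.0
```

This is why the "double the salinity" worst case quoted above corresponds to a plant operating near 50% recovery.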
Dissolved chemicals
The discharge can contain traces of chemical products used during treatment, such as antiscalants, coagulants and flocculants, which are discarded together with the discharge and could affect the physical-chemical quality of the effluent. In practice, however, these are largely consumed during the process, so their concentrations in the discharge are very low and are rapidly diluted upon release, without affecting marine ecosystems.
Heavy metals
The materials used in SWRO plants are dominated by non-metallic components and stainless steels, since the low operating temperatures allow the construction of desalination plants with more corrosion-resistant materials. As a result, the concentrations of heavy metals in the discharge of SWRO plants are far below the acute toxicity levels that would cause environmental impacts on marine ecosystems.
Discharge
The discharge is generally returned to the sea through an underwater outfall or a coastal release, owing to the lower energy and economic cost compared with other discharge methods. Because of its increased salinity, the discharge is denser than the surrounding seawater; when it reaches the sea, it can therefore form a saline plume that tends to follow the bathymetry of the bottom until it is completely diluted. The spread of the plume depends on factors such as the production capacity of the plant, the discharge method, and the oceanographic and environmental conditions at the discharge point.
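The density contrast that drives such a plume can be estimated with a linearized equation of state. The haline contraction coefficient used below (≈0.78 kg/m³ of added density per g/kg of salinity) is a typical textbook value assumed here for illustration, not a figure from this article.

```python
# Density excess of a brine plume relative to ambient seawater,
# using a linearized equation of state (illustrative coefficient).

HALINE_COEFF = 0.78  # kg/m^3 per g/kg of salinity (assumed typical value)

def density_excess(brine_salinity, ambient_salinity):
    """Approximate excess density (kg/m^3) of brine over ambient seawater."""
    return HALINE_COEFF * (brine_salinity - ambient_salinity)

# A 68 g/kg brine discharged into 34 g/kg seawater:
excess = density_excess(68.0, 34.0)  # ~26.5 kg/m^3 denser than ambient
```

An excess of a few tens of kg/m³ is large by oceanographic standards, which is why undiluted brine sinks and hugs the bottom rather than mixing into the water column.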
Marine environment
Brine discharge might raise salinity above threshold levels that have the potential to affect benthic communities, especially those most sensitive to osmotic stress, ultimately affecting their abundance and diversity.
However, if appropriate mitigation measures are applied, the potential environmental impacts of discharges from SWRO plants can be minimized. Examples can be found in countries such as Spain, Israel, Chile and Australia, where the mitigation measures adopted reduce the area affected by the discharge, ensuring sustainable development of the desalination process without significant impacts on marine ecosystems; monitoring in Spain, Australia and Chile has shown that, when proper measures are adopted, saline plumes do not exceed 5% above the natural salinity of the sea beyond a radius of 100 m from the discharge point. Where noticeable effects have been detected in the environment surrounding discharge areas, they generally correspond to older desalination plants in which correct mitigation measures were not implemented.
Mitigation measures
The mitigation measures typically employed to prevent negative impacts on sensitive marine environments are listed below:
Well-designed discharge mechanisms, employing efficient diffusers or pre-dilution of the discharge with seawater;
An environmental evaluation study, which assesses the correct location of the discharge point, considering geomorphological and oceanographic variables, such as currents, bathymetry, and type of bottom, which favor rapid mixing of the discharge;
The implementation of an adequate environmental surveillance program, which guarantees the correct operation of the desalination plant during its operational phase, allowing accurate and early diagnosis of potential environmental threats.
Regulation
Currently, many countries, such as Spain, Israel, Chile and Australia, require a rigorous environmental impact assessment process for both the construction and operational phases. During this process, the most important legal management tools are established within local environmental regulation, to prevent impacts and to adopt mitigation measures that guarantee the sustainable development of desalination projects. These include a series of administrative tools and periodic environmental monitoring, through which preventive and corrective measures are adopted and the state of the surrounding marine environment is tracked.
Within this environmental assessment process, numerous countries require compliance with an environmental monitoring program (PVA), in order to evaluate the effectiveness of the preventive and corrective measures established during the assessment, and thus guarantee that desalination plants operate without producing significant environmental impacts. The PVA establishes a series of mandatory requirements, mainly related to monitoring the discharge through a series of physical-chemical and biological measurements and characterizations. It may also include requirements related to monitoring the effects of seawater intake and potential effects on the terrestrial environment.
Wastewater
Brine is a byproduct of many industrial processes, such as desalination, power plant cooling towers, produced water from oil and natural gas extraction, acid mine or acid rock drainage, reverse osmosis reject, chlor-alkali wastewater treatment, pulp and paper mill effluent, and waste streams from food and beverage processing. Along with diluted salts, it can contain residues of pretreatment and cleaning chemicals, their reaction byproducts and heavy metals due to corrosion.
Wastewater brine can pose a significant environmental hazard, both due to corrosive and sediment-forming effects of salts and toxicity of other chemicals diluted in it.
Unpolluted brine from desalination plants and cooling towers can be returned to the ocean. The reject brine produced by desalination poses potential harm to marine life and habitats. To limit the environmental impact, it can be diluted with another stream of water, such as the outfall of a wastewater treatment or power plant. Since brine is denser than seawater and would otherwise accumulate on the ocean bottom, methods are required to ensure proper diffusion, such as installing underwater diffusers on the outfall. Other disposal methods include drying in evaporation ponds, injection into deep wells, and storing and reusing the brine for irrigation, de-icing or dust control purposes.
Technologies for treatment of polluted brine include: membrane filtration processes, such as reverse osmosis and forward osmosis; ion exchange processes such as electrodialysis or weak acid cation exchange; or evaporation processes, such as thermal brine concentrators and crystallizers employing mechanical vapour recompression and steam. New methods for membrane brine concentration, employing osmotically assisted reverse osmosis and related processes, are beginning to gain ground as part of zero liquid discharge systems (ZLD).
Composition and purification
Brine consists of a concentrated solution of Na+ and Cl− ions; sodium chloride as such does not exist in water, being fully ionized. Other cations found in various brines include K+, Mg2+, Ca2+, and Sr2+. The latter three are problematic because they form scale and they react with soaps. Aside from chloride, brines sometimes contain Br− and I− and, most problematically, sulfate (SO42−). Purification steps often include the addition of calcium oxide to precipitate solid magnesium hydroxide together with gypsum (CaSO4), which can be removed by filtration. Further purification is achieved by fractional crystallization. The resulting purified salt is called evaporated salt or vacuum salt.
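The lime step can be illustrated with simple stoichiometry: each mole of CaO (as Ca(OH)₂ after slaking) supplies the two hydroxide ions needed to precipitate one mole of Mg²⁺ as Mg(OH)₂. The molar masses below are standard values; the magnesium loading in the example is a made-up figure for illustration.

```python
# Stoichiometric lime demand for precipitating magnesium from brine:
#   CaO + H2O -> Ca(OH)2 ;  Mg2+ + Ca(OH)2 -> Mg(OH)2(s) + Ca2+
# i.e. one mole of CaO per mole of Mg2+ removed.

M_CAO = 56.08  # g/mol, molar mass of CaO
M_MG = 24.31   # g/mol, molar mass of Mg

def lime_demand_g_per_l(mg_g_per_l):
    """Grams of CaO per litre of brine to precipitate the given Mg2+ load."""
    return mg_g_per_l / M_MG * M_CAO

# A hypothetical brine carrying 1.3 g/L of Mg2+:
dose = lime_demand_g_per_l(1.3)  # ~3 g CaO per litre
```

Because the ratio is fixed by the 1:1 stoichiometry, the dose scales linearly with the magnesium content of the brine.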
| Physical sciences | Water: General | Earth science |
70671 | https://en.wikipedia.org/wiki/Stress%E2%80%93energy%20tensor | Stress–energy tensor | The stress–energy tensor, sometimes called the stress–energy–momentum tensor or the energy–momentum tensor, is a tensor physical quantity that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. This density and flux of energy and momentum are the sources of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity.
Definition
The stress–energy tensor involves the use of superscripted variables (not exponents; see Tensor index notation and Einstein summation notation). If Cartesian coordinates in SI units are used, then the components of the position four-vector are given by (x^0, x^1, x^2, x^3). In traditional Cartesian coordinates these are instead customarily written (t, x, y, z), where t is coordinate time, and x, y, and z are coordinate distances.
The stress–energy tensor is defined as the tensor T^{αβ} of order two that gives the flux of the αth component of the momentum vector across a surface with constant x^β coordinate. In the theory of relativity, this momentum vector is taken as the four-momentum. In general relativity, the stress–energy tensor is symmetric:
T^{αβ} = T^{βα}.
In some alternative theories like Einstein–Cartan theory, the stress–energy tensor may not be perfectly symmetric because of a nonzero spin tensor, which geometrically corresponds to a nonzero torsion tensor.
Components
Because the stress–energy tensor is of order 2, its components can be displayed in matrix form:
T^{μν} =
⎛ T^{00}  T^{01}  T^{02}  T^{03} ⎞
⎜ T^{10}  T^{11}  T^{12}  T^{13} ⎟
⎜ T^{20}  T^{21}  T^{22}  T^{23} ⎟
⎝ T^{30}  T^{31}  T^{32}  T^{33} ⎠
where the indices μ and ν take on the values 0, 1, 2, 3.
In the following, i and k range from 1 through 3: the time–time component T^{00} is the energy density; T^{0i} is the flux of energy across the x^i surface; T^{i0} is the density of the ith component of momentum; and T^{ik} represents the flux of the ith component of momentum across the x^k surface, i.e. the mechanical stress, with pressure on the diagonal and shear stress off the diagonal.
In solid state physics and fluid mechanics, the stress tensor is defined to be the spatial components of the stress–energy tensor in the proper frame of reference. In other words, the stress–energy tensor in engineering differs from the relativistic stress–energy tensor by a momentum-convective term.
Covariant and mixed forms
Most of this article works with the contravariant form T^{μν} of the stress–energy tensor. However, it is often convenient to work with the covariant form,
T_{μν} = T^{αβ} g_{αμ} g_{βν},
or the mixed form,
T^μ_ν = T^{μα} g_{αν}.
This article uses the spacelike sign convention (− + + +) for the metric signature.
Conservation law
In special relativity
The stress–energy tensor is the conserved Noether current associated with spacetime translations.
The divergence of the non-gravitational stress–energy is zero. In other words, non-gravitational energy and momentum are conserved:
∇_ν T^{μν} = 0.
When gravity is negligible and a Cartesian coordinate system is used for spacetime, this may be expressed in terms of partial derivatives as
∂_ν T^{μν} = T^{μν}_{,ν} = 0.
The integral form of the non-covariant formulation is
0 = ∮_{∂N} T^{μν} d³s_ν,
where N is any compact four-dimensional region of spacetime; ∂N is its boundary, a three-dimensional hypersurface; and d³s_ν is an element of the boundary regarded as the outward-pointing normal.
In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show that angular momentum is also conserved:
0 = ∂_ν (x^α T^{βν} − x^β T^{αν}).
In general relativity
When gravity is non-negligible or when using arbitrary coordinate systems, the divergence of the stress–energy still vanishes. But in this case, a coordinate-free definition of the divergence is used which incorporates the covariant derivative:
0 = ∇_ν T^{μν} = T^{μν}_{;ν} = ∂_ν T^{μν} + Γ^μ_{σν} T^{σν} + Γ^ν_{σν} T^{μσ},
where Γ^μ_{σν} is the Christoffel symbol, which plays the role of the gravitational force field.
Consequently, if ξ^μ is any Killing vector field, then the conservation law associated with the symmetry generated by the Killing vector field may be expressed as
0 = ∇_ν (ξ_μ T^{μν}).
The integral form of this is
0 = ∮_{∂N} √(−g) ξ_μ T^{μν} d³s_ν.
In special relativity
In special relativity, the stress–energy tensor contains information about the energy and momentum densities of a given system, in addition to the momentum and energy flux densities.
Given a Lagrangian density L that is a function of a set of fields φ_α and their derivatives ∂_μ φ_α, but explicitly not of any of the spacetime coordinates, we can construct the canonical stress–energy tensor by looking at the total derivative with respect to one of the generalized coordinates of the system. So, with our condition
∂L/∂x^ν = 0.
By using the chain rule, we then have
d_ν L = dL/dx^ν = (∂L/∂φ_α) ∂_ν φ_α + (∂L/∂(∂_μ φ_α)) ∂_ν ∂_μ φ_α.
Written in useful shorthand,
d_ν L = (∂L/∂φ_α) φ_{α,ν} + (∂L/∂φ_{α,μ}) φ_{α,ν,μ}.
Then, we can use the Euler–Lagrange equation:
∂_μ (∂L/∂φ_{α,μ}) = ∂L/∂φ_α.
And then use the fact that partial derivatives commute so that we now have
d_ν L = ∂_μ (∂L/∂φ_{α,μ}) φ_{α,ν} + (∂L/∂φ_{α,μ}) φ_{α,μ,ν}.
We can recognize the right-hand side as a product rule. Writing it as the derivative of a product of functions tells us that
d_ν L = ∂_μ [ (∂L/∂φ_{α,μ}) φ_{α,ν} ].
Now, in flat space, one can write d_ν L = ∂_μ (δ^μ_ν L). Doing this and moving it to the other side of the equation tells us that
∂_μ [ (∂L/∂φ_{α,μ}) φ_{α,ν} ] − ∂_μ (δ^μ_ν L) = 0.
And upon regrouping terms,
∂_μ [ (∂L/∂φ_{α,μ}) φ_{α,ν} − δ^μ_ν L ] = 0.
This is to say that the divergence of the tensor in the brackets is 0. Indeed, with this, we define the stress–energy tensor:
T^μ_ν ≡ (∂L/∂φ_{α,μ}) φ_{α,ν} − δ^μ_ν L.
By construction it has the property that
∂_μ T^μ_ν = 0.
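As a standard illustrative example (a textbook case, not taken from this article), consider a single free real scalar field; with the spacelike (− + + +) signature used in this article and c = 1, the canonical prescription gives:

```latex
% Canonical stress-energy tensor of a free real scalar field
% (illustrative example; signature -+++, c = 1).
\mathcal{L} = -\tfrac{1}{2}\,\partial_\mu\varphi\,\partial^\mu\varphi
              - \tfrac{1}{2}m^2\varphi^2
            = \tfrac{1}{2}\dot\varphi^2 - \tfrac{1}{2}(\nabla\varphi)^2
              - \tfrac{1}{2}m^2\varphi^2,
\qquad
T^{\mu}{}_{\nu}
  = \frac{\partial\mathcal{L}}{\partial(\partial_\mu\varphi)}\,\partial_\nu\varphi
    - \delta^{\mu}_{\nu}\,\mathcal{L}
  = -\partial^\mu\varphi\,\partial_\nu\varphi - \delta^{\mu}_{\nu}\,\mathcal{L}.
% In particular, the time-time component is the Hamiltonian density:
T^{0}{}_{0} = \tfrac{1}{2}\dot\varphi^2 + \tfrac{1}{2}(\nabla\varphi)^2
            + \tfrac{1}{2}m^2\varphi^2 = \mathcal{H}.
```

The positive-definite T^0_0 obtained here is exactly the field's Hamiltonian density, illustrating the general identification of the time–time component with the energy density.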
Note that this divergenceless property of this tensor is equivalent to four continuity equations. That is, fields have at least four sets of quantities that obey the continuity equation. As an example, it can be seen that T^0_0 = (∂L/∂φ_{α,0}) φ_{α,0} − L is the energy density of the system and that it is thus possible to obtain the Hamiltonian density H from the stress–energy tensor.
Indeed, since this is the case, observing that ∂_μ T^μ_0 = 0, we then have
∂H/∂t + ∂_i [ (∂L/∂φ_{α,i}) φ_{α,0} ] = 0.
We can then conclude that the terms T^i_0 = (∂L/∂φ_{α,i}) φ_{α,0} represent the energy flux density of the system.
Trace
The trace of the stress–energy tensor is defined to be T^μ_μ. Since T^μ_μ = g_{μν} T^{μν}, the trace may be written as
T = g_{μν} T^{μν}.
In general relativity
In general relativity, the symmetric stress–energy tensor acts as the source of spacetime curvature, and is the current density associated with gauge transformations of gravity which are general curvilinear coordinate transformations. (If there is torsion, then the tensor is no longer symmetric. This corresponds to the case with a nonzero spin tensor in Einstein–Cartan gravity theory.)
In general relativity, the partial derivatives used in special relativity are replaced by covariant derivatives. What this means is that the continuity equation no longer implies that the non-gravitational energy and momentum expressed by the tensor are absolutely conserved, i.e. the gravitational field can do work on matter and vice versa. In the classical limit of Newtonian gravity, this has a simple interpretation: kinetic energy is being exchanged with gravitational potential energy, which is not included in the tensor, and momentum is being transferred through the field to other bodies. In general relativity the Landau–Lifshitz pseudotensor is a unique way to define the gravitational field energy and momentum densities. Any such stress–energy pseudotensor can be made to vanish locally by a coordinate transformation.
In curved spacetime, the spacelike integral now depends on the spacelike slice, in general. There is in fact no way to define a global energy–momentum vector in a general curved spacetime.
Einstein field equations
In general relativity, the stress–energy tensor is studied in the context of the Einstein field equations, which are often written as
G_{μν} + Λ g_{μν} = κ T_{μν},
where G_{μν} = R_{μν} − ½ R g_{μν} is the Einstein tensor, R_{μν} is the Ricci tensor, R is the scalar curvature, g_{μν} is the metric tensor, Λ is the cosmological constant (negligible at the scale of a galaxy or smaller), and κ = 8πG/c⁴ is the Einstein gravitational constant.
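A standard equivalent "trace-reversed" form (here for Λ = 0; a textbook identity rather than a statement from this article) makes explicit how the trace of the stress–energy tensor enters:

```latex
% Trace-reversed form of the Einstein field equations (Lambda = 0).
% Taking the trace of G_{\mu\nu} = \kappa T_{\mu\nu} in four dimensions
% gives R = -\kappa T, and substituting back yields:
R_{\mu\nu} = \kappa\left(T_{\mu\nu} - \tfrac{1}{2}\,T\,g_{\mu\nu}\right),
\qquad \kappa = \frac{8\pi G}{c^4}.
```

This form shows directly that in vacuum (T_{μν} = 0) the Ricci tensor vanishes, even though the full Riemann curvature need not.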
Stress–energy in special situations
Isolated particle
In special relativity, the stress–energy of a non-interacting particle with rest mass m and trajectory x_p(t) is:
T^{αβ}(x, t) = m γ(t) v^α(t) v^β(t) δ(x − x_p(t)) = (E/c²) v^α(t) v^β(t) δ(x − x_p(t)),
where v^α = (c, dx_p/dt) is the velocity vector (which should not be confused with four-velocity, since it is missing a factor of γ),
δ is the Dirac delta function and E = γmc² is the energy of the particle.
Written in the language of classical physics, the stress–energy tensor would be (relativistic mass, momentum, the dyadic product of momentum and velocity).
Stress–energy of a fluid in equilibrium
For a perfect fluid in thermodynamic equilibrium, the stress–energy tensor takes on a particularly simple form:
T^{μν} = (ρ + p/c²) u^μ u^ν + p g^{μν},
where ρ is the mass–energy density (kilograms per cubic meter), p is the hydrostatic pressure (pascals), u^μ is the fluid's four-velocity, and g^{μν} is the matrix inverse of the metric tensor. Therefore, the trace is given by
T = g_{μν} T^{μν} = 3p − ρc².
The four-velocity satisfies
u^μ u^ν g_{μν} = −c².
In an inertial frame of reference comoving with the fluid, better known as the fluid's proper frame of reference, the four-velocity is
u^μ = (c, 0, 0, 0),
the matrix inverse of the metric tensor is simply
g^{μν} = diag(−1, 1, 1, 1),
and the stress–energy tensor is a diagonal matrix
T^{μν} = diag(ρc², p, p, p).
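A quick numerical check of the trace in the fluid's rest frame, using the Minkowski metric with the (− + + +) convention; the density and pressure values are arbitrary illustrations:

```python
# Numerical check: in the rest frame, the trace of the perfect-fluid
# stress-energy tensor g_{mu nu} T^{mu nu} equals 3p - rho*c^2.
# Signature -+++; rho and p are illustrative values.

c = 3.0e8        # speed of light, m/s
rho = 1000.0     # mass-energy density, kg/m^3 (illustrative)
p = 101325.0     # pressure, Pa (illustrative)

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
T = [[rho * c**2, 0, 0, 0], [0, p, 0, 0], [0, 0, p, 0], [0, 0, 0, p]]

# Full contraction g_{mu nu} T^{mu nu}
trace = sum(eta[mu][nu] * T[mu][nu] for mu in range(4) for nu in range(4))

assert abs(trace - (3 * p - rho * c**2)) < 1e-6 * abs(trace)
```

For any everyday fluid ρc² dwarfs 3p, so the trace is large and negative, dominated by the rest-mass energy term.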
Electromagnetic stress–energy tensor
The Hilbert stress–energy tensor of a source-free electromagnetic field is
T^{μν} = (1/μ₀) [ F^{μα} g_{αβ} F^{νβ} − ¼ g^{μν} F_{αβ} F^{αβ} ],
where F_{μν} is the electromagnetic field tensor.
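The T^{00} component of this tensor should reproduce the familiar electromagnetic energy density ½ε₀E² + B²/(2μ₀). The short numerical check below builds F^{μν} for arbitrary illustrative field values and contracts it with the flat (− + + +) Minkowski metric:

```python
# Check that T^{00} of the electromagnetic stress-energy tensor equals the
# classical energy density (1/2) eps0 E^2 + B^2 / (2 mu0).
# Flat spacetime, signature -+++, x^0 = ct; field values are illustrative.

mu0 = 4e-7 * 3.141592653589793
c = 2.99792458e8
eps0 = 1.0 / (mu0 * c**2)

Ex, Ey, Ez = 3.0, -1.0, 2.0      # V/m (illustrative)
Bx, By, Bz = 1e-8, 2e-8, -5e-9   # T   (illustrative)

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Contravariant field tensor F^{mu nu}
F = [[0, -Ex / c, -Ey / c, -Ez / c],
     [Ex / c, 0, -Bz, By],
     [Ey / c, Bz, 0, -Bx],
     [Ez / c, -By, Bx, 0]]

# Invariant F_{ab} F^{ab} = eta_{am} eta_{bn} F^{mn} F^{ab} = 2(B^2 - E^2/c^2)
FF = sum(eta[a][m] * eta[b][n] * F[m][n] * F[a][b]
         for a in range(4) for b in range(4)
         for m in range(4) for n in range(4))

# T^{00} = (1/mu0) (F^{0a} eta_{ab} F^{0b} - (1/4) eta^{00} FF); eta^{00} = -1
T00 = (sum(F[0][a] * eta[a][b] * F[0][b] for a in range(4) for b in range(4))
       + 0.25 * FF) / mu0

E2 = Ex**2 + Ey**2 + Ez**2
B2 = Bx**2 + By**2 + Bz**2
assert abs(T00 - (0.5 * eps0 * E2 + B2 / (2 * mu0))) < 1e-9 * abs(T00)
```

The same contraction machinery gives the Poynting flux from T^{0i} and the Maxwell stress from T^{ik}, which is why the tensor packages all of classical electromagnetic energy–momentum bookkeeping in one object.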
Scalar field
The stress–energy tensor for a complex scalar field that satisfies the Klein–Gordon equation is
and when the metric is flat (Minkowski in Cartesian coordinates) its components work out to be:
Variant definitions of stress–energy
There are a number of inequivalent definitions of non-gravitational stress–energy:
Hilbert stress–energy tensor
The Hilbert stress–energy tensor is defined as the functional derivative
T_{μν} = (−2/√(−g)) δS_matter/δg^{μν} = (−2/√(−g)) ∂(√(−g) L_matter)/∂g^{μν} = −2 ∂L_matter/∂g^{μν} + g_{μν} L_matter,
where S_matter is the nongravitational part of the action, L_matter is the nongravitational part of the Lagrangian density, and the Euler–Lagrange equation has been used. This is symmetric and gauge-invariant. See Einstein–Hilbert action for more information.
Canonical stress–energy tensor
Noether's theorem implies that there is a conserved current associated with translations through space and time; for details see the section above on the stress–energy tensor in special relativity. This is called the canonical stress–energy tensor. Generally, this is not symmetric and if we have some gauge theory, it may not be gauge invariant because space-dependent gauge transformations do not commute with spatial translations.
In general relativity, the translations are with respect to the coordinate system and as such, do not transform covariantly. See the section below on the gravitational stress–energy pseudotensor.
Belinfante–Rosenfeld stress–energy tensor
In the presence of spin or other intrinsic angular momentum, the canonical Noether stress–energy tensor fails to be symmetric. The Belinfante–Rosenfeld stress–energy tensor is constructed from the canonical stress–energy tensor and the spin current in such a way as to be symmetric and still conserved. In general relativity, this modified tensor agrees with the Hilbert stress–energy tensor.
Gravitational stress–energy
By the equivalence principle, gravitational stress–energy will always vanish locally at any chosen point in some chosen frame, therefore gravitational stress–energy cannot be expressed as a non-zero tensor; instead we have to use a pseudotensor.
In general relativity, there are many possible distinct definitions of the gravitational stress–energy–momentum pseudotensor. These include the Einstein pseudotensor and the Landau–Lifshitz pseudotensor. The Landau–Lifshitz pseudotensor can be reduced to zero at any event in spacetime by choosing an appropriate coordinate system.
| Physical sciences | Theory of relativity | Physics |
70674 | https://en.wikipedia.org/wiki/Opossum | Opossum | Opossums () are members of the marsupial order Didelphimorphia () endemic to the Americas. The largest order of marsupials in the Western Hemisphere, it comprises 126 species in 18 genera. Opossums originated in South America and entered North America in the Great American Interchange following the connection of North and South America in the late Cenozoic.
The Virginia opossum is the only species found in the United States and Canada. It is often simply referred to as an opossum, and in North America it is commonly referred to as a possum (sometimes rendered as 'possum in written form to indicate the dropped "o"). The Australasian arboreal marsupials of suborder Phalangeriformes are also called possums because of their resemblance to opossums, but they belong to a different order. The opossum is typically a nonaggressive animal and almost never carries the virus that causes rabies.
Etymology
The word opossum is derived from the Powhatan language and was first recorded between 1607 and 1611 by John Smith (as opassom) and William Strachey (as aposoum). Possum was first recorded in 1613. Both men encountered the language at the English settlement of Jamestown, Virginia, which Smith helped to found and where Strachey later served as its first secretary. Strachey's notes describe the opossum as a "beast in bigness of a pig and in taste alike," while Smith recorded it "hath an head like a swine ... tail like a rat ... of the bigness of a cat." The Powhatan word ultimately derives from a Proto-Algonquian word (*wa·p-aʔθemwa) meaning "white dog or dog-like beast."
Following the arrival of Europeans in Australia, the term possum was borrowed to describe distantly related Australian marsupials of the suborder Phalangeriformes, which are more closely related to other Australian marsupials such as kangaroos.
Didelphimorphia comes from the Ancient Greek words for "two" (di) and "wombs" (delphus).
Evolution
Opossums are often considered to be "living fossils", and as a result they are often used to approximate the ancestral therian condition in comparative studies. But this is a mistake, because the oldest opossum fossils are from a more recent epoch, the early Miocene (roughly 20 million years ago). The last common ancestor of all living opossums dates approximately to the Oligocene-Miocene boundary (23 million years ago) and is at most no older than Oligocene in age. Many extinct metatherians, such as Alphadon, Peradectes, Herpetotherium, and Pucadelphys, were once considered to be early opossums, but it has since been recognized that this was solely on the basis of plesiomorphies; they are now considered to belong to older branches of Metatheria that are only distantly related to modern opossums.
Opossums probably originated in the Amazonia region of northern South America, where they began their initial diversification. They were minor components of South American mammal faunas until the late Miocene, when they began to diversify rapidly. Before that time, the ecological niches presently occupied by opossums were occupied by other groups of metatherians such as paucituberculatans and sparassodonts.
Large opossums like Didelphis show a pattern of gradually increasing in size over geologic time as sparassodont diversity declined. Several groups of opossums, including Thylophorops, Thylatheridium, Hyperdidelphys, and sparassocynids developed carnivorous adaptations during the late Miocene-Pliocene, before the arrival of carnivorans in South America. Most of these groups, with the exception of Lutreolina, are now extinct. It has been suggested that the size and shape of the ancestral didelphid's jaw would most closely match that of the modern Marmosa genus.
Characteristics
Didelphimorphs are small to medium-sized marsupials that grow to the size of a house cat. They tend to be semi-arboreal omnivores, although there are many exceptions. Most members of this order have long snouts, a narrow braincase, and a prominent sagittal crest. The dental formula is: teeth. By mammalian standards, this is an unusually full jaw. The incisors are very small, the canines large, and the molars are tricuspid.
Didelphimorphs have a plantigrade stance (feet flat on the ground) and the hind feet have an opposable digit with no claw. Like some New World monkeys, some opossums have prehensile tails. Like most marsupials, many females have a pouch. The tail and parts of the feet bear scutes. The stomach is simple, with a small cecum. Like most marsupials, the male opossum has a forked penis bearing twin glandes.
Although all living opossums are essentially opportunistic omnivores, different species vary in the amount of meat and vegetation they include in their diet. Members of the Caluromyinae are essentially frugivorous; whereas the lutrine opossum and Patagonian opossum primarily feed on other animals. The water opossum or yapok (Chironectes minimus) is particularly unusual, as it is the only living semi-aquatic marsupial, using its webbed hindlimbs to dive in search of freshwater mollusks and crayfish. The extinct Thylophorops, the largest known opossum at , was a macropredator. Most opossums are scansorial, well-adapted to life in the trees or on the ground, but members of the Caluromyinae and Glironiinae are primarily arboreal, whereas species of Metachirus, Monodelphis, and to a lesser degree Didelphis show adaptations for life on the ground. Metachirus nudicaudatus, found in the upper Amazon basin, consumes fruit seeds, small vertebrate creatures like birds and reptiles and invertebrates like crayfish and snails, but seems to be mainly insectivorous.
Reproduction and life cycle
As marsupials, female opossums have a reproductive system that includes a bifurcated vagina and a divided uterus; many have a pouch. The average estrous cycle of the Virginia opossum is about 28 days. Opossums do possess a placenta, but it is short-lived, simple in structure, and, unlike that of placental mammals, not fully functional. The young are therefore born at a very early stage, although the gestation period is similar to that of many other small marsupials, at only 12 to 14 days. They give birth to litters of up to 20 young. Once born, the offspring must find their way into the marsupium, if present, to hold on to and nurse from a teat. Baby opossums, like their Australian cousins, are called joeys. Female opossums often give birth to very large numbers of young, most of which fail to attach to a teat, although as many as 13 young can attach, and therefore survive, depending on species. The young are weaned between 70 and 125 days, when they detach from the teat and leave the pouch. The opossum lifespan is unusually short for a mammal of its size, usually only one to two years in the wild and as long as four or more years in captivity. Senescence is rapid.
Opossums are moderately sexually dimorphic with males usually being larger, heavier, and having larger canines than females. The largest difference between the opossum and non-marsupial mammals is the bifurcated penis of the male and bifurcated vagina of the female (the source of the term didelphimorph, from the Greek didelphys, meaning "double-wombed"). Opossum spermatozoa exhibit sperm-pairing, forming conjugate pairs in the epididymis. This may ensure that flagella movement can be accurately coordinated for maximal motility. Conjugate pairs dissociate into separate spermatozoa before fertilization.
Behavior
Opossums are usually solitary and nomadic, staying in one area as long as food and water are easily available. Some families will group together in ready-made burrows or even under houses. Though they will temporarily occupy abandoned burrows, they do not dig or put much effort into building their own. As nocturnal animals, they favor dark, secure areas. These areas may be below ground or above.
When threatened or harmed, they will "play possum", mimicking the appearance and smell of a sick or dead animal. This physiological response is involuntary (like fainting), rather than a conscious act. In the case of baby opossums, however, the brain does not always react this way at the appropriate moment, and therefore they often fail to "play dead" when threatened. When an opossum is "playing possum", the animal's lips are drawn back, the teeth are bared, saliva foams around the mouth, the eyes close or half-close, and a foul-smelling fluid is secreted from the anal glands. The stiff, curled form can be prodded, turned over, and even carried away without reaction. The animal will typically regain consciousness after a period of a few minutes to four hours, a process that begins with a slight twitching of the ears.
Some species of opossums have prehensile tails, although dangling by the tail is more common among juveniles. An opossum may also use its tail as a brace and a fifth limb when climbing. The tail is occasionally used as a grip to carry bunches of leaves or bedding materials to the nest. A mother will sometimes carry her young upon her back, where they will cling tightly even when she is climbing or running.
Threatened opossums (especially males) will growl deeply, raising their pitch as the threat becomes more urgent. Males make a clicking "smack" noise out of the side of their mouths as they wander in search of a mate, and females will sometimes repeat the sound in return. When separated or distressed, baby opossums will make a sneezing noise to signal their mother. The mother in return makes a clicking sound and waits for the baby to find her. If threatened, the baby will open its mouth and quietly hiss until the threat is gone.
Diet
Opossums eat insects, rodents, birds, eggs, frogs, plants, fruits and grain. Some species may eat the skeletal remains of rodents and roadkill animals to fulfill their calcium requirements. In captivity, opossums will eat practically anything including dog and cat food, livestock fodder and discarded human food scraps and waste.
Many large opossums (Didelphini) are immune to the venom of rattlesnakes and pit vipers (Crotalinae) and regularly prey upon these snakes. This adaptation seems to be unique to the Didelphini, as their closest relative, the brown four-eyed opossum, is not immune to snake venom. Similar adaptations are seen in other small predatory mammals such as mongooses and hedgehogs. Didelphin opossums and crotaline vipers have been suggested to be in an evolutionary arms race. Some authors have suggested that this adaptation originally arose as a defense mechanism, allowing a rare reversal of an evolutionary arms race where the former prey has become the predator, whereas others have suggested it arose as a predatory adaptation given that it also occurs in other predatory mammals and does not occur in opossums that do not regularly eat other vertebrates. The fer-de-lance, one of the most venomous snakes in the New World, may have developed its highly potent venom as a means to prey on or a defense mechanism against large opossums.
Habitat
Opossums are found in North, Central, and South America. The Virginia opossum lives in regions as far north as Canada and as far south as Central America, while other types of opossums only inhabit countries south of the United States. The Virginia opossum can often be found in wooded areas, though its habitat may vary widely. Opossums are generally found in areas like forests, shrubland, mangrove swamps, rainforests and eucalyptus forests. Opossums have been found moving northward.
Hunting and foodways
Until the early 20th century, the Virginia opossum was widely hunted and consumed in the United States. Opossum farms have been operated in the United States in the past. Sweet potatoes were eaten together with the opossum in the American South. In 1909, a "Possum and 'Taters" banquet was held in Atlanta to honor President-elect William Howard Taft. South Carolina cuisine includes opossum, and President Jimmy Carter hunted opossums in addition to other small game.
In Dominica, Grenada, Trinidad, Saint Lucia and Saint Vincent and the Grenadines, the common opossum or manicou is popular and can only be hunted during certain times of the year owing to overhunting. The meat is traditionally prepared by smoking, then stewing. It is light and fine-grained, but the musk glands must be removed as part of preparation. The meat can be used in place of rabbit and chicken in recipes. Historically, hunters in the Caribbean would place a barrel with fresh or rotten fruit to attract opossums that would feed on the fruit or insects.
In northern/central Mexico, opossums are known as tlacuache or tlacuatzin. Their tails are eaten as a folk remedy to improve fertility. In the Yucatán peninsula they are known in the Yucatec Mayan language as "och" and they are not considered part of the regular diet by Mayan people, but still considered edible in times of famine.
Opossum oil (possum grease) is high in essential fatty acids and has been used as a chest rub and a carrier for arthritis remedies given as salves.
Opossum pelts have long been part of the fur trade.
Classification
Classification based on Voss (2022), species based on the American Society of Mammalogists (2023)
Family Didelphidae
Subfamily Glironiinae
Genus Glironia
Bushy-tailed opossum (Glironia venusta)
Subfamily Caluromyinae
Genus Caluromys
Subgenus Caluromys
Bare-tailed woolly opossum (Caluromys philander)
Subgenus Mallodelphys
Derby's woolly opossum (Caluromys derbianus)
Brown-eared woolly opossum (Caluromys lanatus)
Genus Caluromysiops
Black-shouldered opossum (Caluromysiops irrupta)
Subfamily Hyladelphinae
Genus Hyladelphys
Kalinowski's mouse opossum (Hyladelphys kalinowskii)
Genus †Sairadelphys Oliveira et al. 2011
†Sairadelphys tocantinensis Oliveira et al. 2011
Subfamily Didelphinae
Tribe Metachirini
Genus Metachirus
Aritana's brown four-eyed opossum (Metachirus aritanai)
Common brown four-eyed opossum (Metachirus myosuros)
Guianan brown four-eyed opossum (Metachirus nudicaudatus)
Tribe Didelphini
Genus Chironectes
Water opossum or yapok (Chironectes minimus)
Genus Lutreolina
†Lutreolina biforata (Ameghino 1904) Goin & Pardiñas 1996
Big lutrine opossum or little water opossum (Lutreolina crassicaudata)
†Lutreolina materdei Goin & De los Reyes 2011
Massoia's lutrine opossum (Lutreolina massoia)
†Lutreolina tracheia Rovereto 1914
†Genus Hyperdidelphys Ameghino 1904
†Hyperdidelphys dimartinoi Goin & Pardiñas 1996
†Hyperdidelphys inexpectata (Ameghino 1889) Marshall 1982
†Hyperdidelphys parvula Ameghino 1904
†Hyperdidelphys pattersoni (Reig 1952) Marshall 1982
Genus Didelphis
White-eared opossum (Didelphis albiventris)
Big-eared opossum (Didelphis aurita)
Guianan white-eared opossum (Didelphis imperfecta)
Common opossum (Didelphis marsupialis)
Andean white-eared opossum (Didelphis pernigra)
†Didelphis solimoensis
Virginia opossum (Didelphis virginiana)
Genus Philander
Anderson's four-eyed opossum (Philander andersoni)
Common four-eyed opossum (Philander canus)
Deltaic four-eyed opossum (Philander deltae)
Southeastern four-eyed opossum (Philander frenatus)
McIlhenny's four-eyed opossum (Philander mcilhennyi)
Dark four-eyed opossum (Philander melanurus)
Mondolfi's four-eyed opossum (Philander mondolfii)
Black four-eyed opossum (Philander nigratus)
Olrog's four-eyed opossum (Philander olrogi)
Gray four-eyed opossum (Philander opossum)
Pebas four-eyed opossum (Philander pebas)
Southern four-eyed opossum (Philander quica)
Northern four-eyed opossum (Philander vossi)
†Genus Thylophorops Reig 1952
†Thylophorops chapadmalensis Reig 1952
†Thylophorops lorenzinii Goin et al. 2009
†Thylophorops perplana (Ameghino 1904) Goin & Pardiñas 1996
Tribe Marmosini
Genus †Hesperocynus Forasiepi et al. 2009
†Hesperocynus dolgopolae (Reig 1952) Forasiepi et al. 2009
Genus Marmosa
†Marmosa contrerasi Mones 1980
Subgenus Eomarmosa
Red mouse opossum (Marmosa rubra)
Subgenus Exulomarmosa
Isthmian mouse opossum (Marmosa isthmica)
Mexican mouse opossum (Marmosa mexicana)
Robinson's mouse opossum (Marmosa robinsoni)
Simon's mouse opossum (Marmosa simonsi)
Guajira mouse opossum (Marmosa xerophila)
Zeledon's mouse opossum (Marmosa zeledoni)
Subgenus Marmosa
Quechuan mouse opossum (Marmosa macrotarsus)
Linnaeus's mouse opossum (Marmosa murina)
Tyler's mouse opossum (Marmosa tyleriana)
Waterhouse's mouse opossum (Marmosa waterhousei)
Subgenus Micoureus
Adler's mouse opossum (Marmosa adleri)
Alston's woolly mouse opossum (Marmosa alstoni)
White-bellied woolly mouse opossum (Marmosa constantiae)
Northeastern woolly mouse opossum (Marmosa demerarae)
Northwestern woolly mouse opossum (Marmosa germana)
Jansa's woolly mouse opossum (Marmosa jansae)
†Marmosa laventica Marshall 1976
Brazilian woolly mouse opossum (Marmosa limae)
Merida woolly mouse opossum (Marmosa meridae)
Nicaraguan woolly mouse opossum (Marmosa nicaraguae)
Tate's woolly mouse opossum (Marmosa paraguayana)
Peruvian woolly mouse opossum (Marmosa parda)
Anthony's woolly mouse opossum (Marmosa perplexa)
Little woolly mouse opossum (Marmosa phaea)
Bolivian woolly mouse opossum (Marmosa rapposa)
Bare-tailed woolly mouse opossum (Marmosa rutteri)
Subgenus Stegomarmosa
Heavy-browed mouse opossum (Marmosa andersoni)
Rufous mouse opossum (Marmosa lepida)
Genus Monodelphis
Subgenus Microdelphys
Northern three-striped opossum (Monodelphis americana)
Gardner's short-tailed opossum (Monodelphis gardneri)
Ihering's three-striped opossum (Monodelphis iheringi)
Chestnut-striped opossum (Monodelphis rubida)
Long-nosed short-tailed opossum (Monodelphis scalops)
Southern three-striped opossum (Monodelphis theresa)
Red three-striped opossum (Monodelphis umbristriata)
Subgenus Monodelphiops
Yellow-sided opossum (Monodelphis dimidiata)
Southern red-sided opossum (Monodelphis sorex)
One-striped opossum (Monodelphis unistriata)
Subgenus Monodelphis
Arlindo's short-tailed opossum (Monodelphis arlindoi)
Northern red-sided opossum (Monodelphis brevicaudata)
Gray short-tailed opossum (Monodelphis domestica)
Amazonian red-sided opossum (Monodelphis glirina)
Marajó short-tailed opossum (Monodelphis maraxina)
Hooded red-sided opossum (Monodelphis palliolata)
Santa Rosa short-tailed opossum (Monodelphis sanctaerosae)
Touan short-tailed opossum (Monodelphis touan)
Voss's short-tailed opossum (Monodelphis vossi)
Subgenus Mygalodelphys
Sepia short-tailed opossum (Monodelphis adusta)
Handley's short-tailed opossum (Monodelphis handleyi)
Pygmy short-tailed opossum (Monodelphis kunsi)
Osgood's short-tailed opossum (Monodelphis osgoodi)
Peruvian short-tailed opossum (Monodelphis peruviana)
Long-nosed short-tailed opossum (Monodelphis pinocchio)
Reig's opossum (Monodelphis reigi)
Ronald's opossum (Monodelphis ronaldi)
Saci short-tailed opossum (Monodelphis saci)
Subgenus Pyrodelphys
Emilia's short-tailed opossum (Monodelphis emiliae)
Genus †Sparassocynus Mercerat 1898
†Sparassocynus bahiai Mercerat 1898
†Sparassocynus derivatus Reig & Simpson 1972
†Sparassocynus maimarai Abello et al. 2015
†Sparassocynus heterotopicus Villarroel & Marshall 1983
Genus †Thylatheridium Reig 1952
†Thylatheridium cristatum Reig 1952
†Thylatheridium hudsoni Goin & Montalvo 1988
†Thylatheridium pascuali Reig 1958
Genus Tlacuatzin
Balsas gray mouse opossum (Tlacuatzin balsasensis)
Tehuantepec gray mouse opossum (Tlacuatzin canescens)
Yucatan gray mouse opossum (Tlacuatzin gaumeri)
Tres Marías gray mouse opossum (Tlacuatzin insularis)
Northern gray mouse opossum (Tlacuatzin sinaloae)
†Genus Zygolestes Ameghino 1898
†Zygolestes paramensis Ameghino 1898
†Zygolestes tatei Goin, Montalvo & Visconti 2000
Tribe Thylamyini
Genus Chacodelphys
Chacoan pygmy opossum (Chacodelphys formosa)
Genus Cryptonanus
Agricola's gracile opossum (Cryptonanus agricolai)
Chacoan gracile opossum (Cryptonanus chacoensis)
Guahiba gracile opossum (Cryptonanus guahybae)
†Red-bellied gracile opossum (Cryptonanus ignitus)
Unduavi gracile opossum (Cryptonanus unduaviensis)
Genus Gracilinanus
Aceramarca gracile opossum (Gracilinanus aceramarcae)
Agile gracile opossum (Gracilinanus agilis)
Wood sprite gracile opossum (Gracilinanus dryas)
Emilia's gracile opossum (Gracilinanus emilae)
Northern gracile opossum (Gracilinanus marica)
Brazilian gracile opossum (Gracilinanus microtarsus)
Peruvian opossum (Gracilinanus peruanus)
Genus Lestodelphys
Patagonian opossum (Lestodelphys halli)
†Lestodelphys juga (Ameghino 1889)
Genus Marmosops
Subgenus Marmosops
Andean slender mouse opossum (Marmosops caucae)
Creighton's slender opossum (Marmosops creightoni)
Dorothy's slender opossum (Marmosops dorothea)
Tschudi's slender opossum (Marmosops impavidus)
Gray slender opossum (Marmosops incanus)
Neblina slender opossum (Marmosops neblina)
White-bellied slender opossum (Marmosops noctivagus)
Spectacled slender opossum (Marmosops ocellatus)
Brazilian slender opossum (Marmosops paulensis)
Soini's slender opossum (Marmosops soinii)
Subgenus Sciophanes
Bishop's slender opossum (Marmosops bishopi)
Carr's slender opossum (Marmosops carri)
Cordillera slender opossum (Marmosops chucha)
Narrow-headed slender opossum (Marmosops cracens)
Dusky slender opossum (Marmosops fuscatus)
Handley's slender opossum (Marmosops handleyi)
Panama slender opossum (Marmosops invictus)
Junin slender opossum (Marmosops juninensis)
Río Magdalena slender opossum (Marmosops magdalenae)
Silva's slender opossum (Marmosops marina)
Ojasti's slender opossum (Marmosops ojastii)
Pantepui slender opossum (Marmosops pakaraimae)
Delicate slender opossum (Marmosops parvidens)
Pinheiro's slender opossum (Marmosops pinheiroi)
Woodall's slender opossum (Marmosops woodalli)
Genus Thylamys
Subgenus Thylamys
Cinderella fat-tailed mouse opossum (Thylamys cinderella)
Mesopotamian fat-tailed mouse opossum (Thylamys citellus)
Elegant fat-tailed mouse opossum (Thylamys elegans)
Paraguayan fat-tailed mouse opossum (Thylamys macrurus)
White-bellied fat-tailed mouse opossum (Thylamys pallidior)
Dry Chaco fat-tailed mouse opossum (Thylamys pulchellus)
Chacoan fat-tailed mouse opossum (Thylamys pusillus)
Argentine fat-tailed mouse opossum (Thylamys sponsorius)
Tate's fat-tailed mouse opossum (Thylamys tatei)
Buff-bellied fat-tailed mouse opossum (Thylamys venustus)
Subgenus Xerodelpys
Karimi's fat-tailed mouse opossum (Thylamys karimii)
Dwarf fat-tailed mouse opossum (Thylamys velutinus)
†Thylamys colombianus Goin 1997
†Thylamys minutus Goin 1997
†Thylamys pinei Goin, Montalvo & Visconti 2000
†Thylamys zettii Goin 1997
Thunderstorm
A thunderstorm, also known as an electrical storm or a lightning storm, is a storm characterized by the presence of lightning and its acoustic effect on the Earth's atmosphere, known as thunder. Relatively weak thunderstorms are sometimes called thundershowers. Thunderstorms occur in a type of cloud known as a cumulonimbus. They are usually accompanied by strong winds and often produce heavy rain and sometimes snow, sleet, or hail, but some thunderstorms produce little precipitation or no precipitation at all. Thunderstorms may line up in a series or become a rainband, known as a squall line. Strong or severe thunderstorms include some of the most dangerous weather phenomena, including large hail, strong winds, and tornadoes. Some of the most persistent severe thunderstorms, known as supercells, rotate as do cyclones. While most thunderstorms move with the mean wind flow through the layer of the troposphere that they occupy, vertical wind shear sometimes causes a deviation in their course at a right angle to the wind shear direction.
Thunderstorms result from the rapid upward movement of warm, moist air, sometimes along a front. However, some kind of cloud forcing, whether it is a front, shortwave trough, or another system is needed for the air to rapidly accelerate upward. As the warm, moist air moves upward, it cools, condenses, and forms a cumulonimbus cloud that can reach heights of over . As the rising air reaches its dew point temperature, water vapor condenses into water droplets or ice, reducing pressure locally within the thunderstorm cell. Any precipitation falls the long distance through the clouds towards the Earth's surface. As the droplets fall, they collide with other droplets and become larger. The falling droplets create a downdraft as they pull cold air with them, and this cold air spreads out at the Earth's surface, occasionally causing strong winds that are commonly associated with thunderstorms.
Thunderstorms can form and develop in any geographic location but most frequently within the mid-latitudes, where warm, moist air from tropical latitudes collides with cooler air from polar latitudes. Thunderstorms are responsible for the development and formation of many severe weather phenomena, which can be potentially hazardous. Damage that results from thunderstorms is mainly inflicted by downburst winds, large hailstones, and flash flooding caused by heavy precipitation. Stronger thunderstorm cells are capable of producing tornadoes and waterspouts.
There are three types of thunderstorms: single-cell, multi-cell, and supercell. Supercell thunderstorms are the strongest and most severe. Mesoscale convective systems formed by favorable vertical wind shear within the tropics and subtropics can be responsible for the development of hurricanes. Dry thunderstorms, with no precipitation, can cause the outbreak of wildfires from the heat generated from the cloud-to-ground lightning that accompanies them. Several means are used to study thunderstorms: weather radar, weather stations, and video photography. Past civilizations held various myths concerning thunderstorms and their development as late as the 18th century. Beyond the Earth's atmosphere, thunderstorms have also been observed on the planets of Jupiter, Saturn, Neptune, and, probably, Venus.
Life cycle
Warm air has a lower density than cool air, so warmer air rises upwards and cooler air will settle at the bottom (this effect can be seen with a hot air balloon). Clouds form as relatively warmer air, carrying moisture, rises within cooler air. The moist air rises, and, as it does so, it cools and some of the water vapor in that rising air condenses. When the moisture condenses, it releases energy known as the latent heat of condensation, which allows the rising parcel of air to cool more slowly than the surrounding air, continuing the cloud's ascent. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form and produce lightning and thunder. Meteorological indices such as convective available potential energy (CAPE) and the lifted index can be used to assist in determining potential upward vertical development of clouds. Generally, thunderstorms require three conditions in order to form:
Moisture
An unstable airmass
A lifting force (heat)
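The CAPE index mentioned above measures the buoyant energy available to a rising air parcel: it integrates the parcel's temperature excess over the environment, scaled by gravity, through the depth where the parcel is warmer. The following is a minimal numerical sketch of that integral; the temperature profiles are made-up illustrative values, not real sounding data.

```python
# Minimal sketch of CAPE = g * integral of (T_parcel - T_env) / T_env dz,
# summed only over layers where the lifted parcel is warmer (positively
# buoyant). The profiles below are hypothetical, for illustration only.
G = 9.81  # gravitational acceleration, m/s^2

def cape(heights_m, t_parcel_k, t_env_k):
    """Trapezoidal integration of positive parcel buoyancy (J/kg)."""
    total = 0.0
    for i in range(len(heights_m) - 1):
        # buoyancy term B = g * (Tp - Te) / Te at the layer's two bounds
        b0 = G * (t_parcel_k[i] - t_env_k[i]) / t_env_k[i]
        b1 = G * (t_parcel_k[i + 1] - t_env_k[i + 1]) / t_env_k[i + 1]
        dz = heights_m[i + 1] - heights_m[i]
        total += 0.5 * (max(b0, 0.0) + max(b1, 0.0)) * dz
    return total

# Hypothetical sounding: parcel stays ~2 K warmer than its environment
# through a 10 km deep layer.
z  = [0, 2000, 4000, 6000, 8000, 10000]          # height, m
tp = [300.0, 288.0, 276.0, 264.0, 252.0, 240.0]  # lifted parcel, K
te = [298.0, 286.0, 274.0, 262.0, 250.0, 238.0]  # environment, K
print(round(cape(z, tp, te)))  # -> 735 (J/kg) for this illustrative profile
```

A value of this size would represent only modest instability; real soundings use virtual temperature and much finer vertical resolution.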
All thunderstorms, regardless of type, go through three stages: the developing stage, the mature stage, and the dissipation stage. The average thunderstorm has a diameter. Depending on the conditions present in the atmosphere, each of these three stages takes an average of 30 minutes.
Developing stage
The first stage of a thunderstorm is the cumulus stage or developing stage. During this stage, masses of moisture are lifted upwards into the atmosphere. The trigger for this lift can be solar illumination, where the heating of the ground produces thermals, or where two winds converge forcing air upwards, or where winds blow over terrain of increasing elevation. The moisture carried upward cools into liquid drops of water due to lower temperatures at high altitude, which appear as cumulus clouds. As the water vapor condenses into liquid, latent heat is released, which warms the air, causing it to become less dense than the surrounding, drier air. The air tends to rise in an updraft through the process of convection (hence the term convective precipitation). This process creates a low-pressure zone within and beneath the forming thunderstorm. In a typical thunderstorm, approximately 500 million kilograms of water vapor are lifted into the Earth's atmosphere.
Mature stage
In the mature stage of a thunderstorm, the warmed air continues to rise until it reaches an area of warmer air and can rise no farther. Often this 'cap' is the tropopause. The air is instead forced to spread out, giving the storm a characteristic anvil shape. The resulting cloud is called cumulonimbus incus. The water droplets coalesce into larger and heavier droplets and freeze to become ice particles. As these fall, they melt to become rain. If the updraft is strong enough, the droplets are held aloft long enough to become so large that they do not melt completely but fall as hail. While updrafts are still present, the falling rain drags the surrounding air with it, creating downdrafts as well. The simultaneous presence of both an updraft and a downdraft marks the mature stage of the storm and produces cumulonimbus clouds. During this stage, considerable internal turbulence can occur, which manifests as strong winds, severe lightning, and even tornadoes.
Typically, if there is little wind shear, the storm will rapidly enter the dissipating stage and 'rain itself out', but, if there is sufficient change in wind speed or direction, the downdraft will be separated from the updraft, and the storm may become a supercell, where the mature stage can sustain itself for several hours.
Dissipating stage
In the dissipation stage, the thunderstorm is dominated by the downdraft. If atmospheric conditions do not support supercellular development, this stage occurs rather quickly, approximately 20–30 minutes into the life of the thunderstorm. The downdraft pushes down out of the thunderstorm, hits the ground and spreads out; this phenomenon is known as a downburst. The cool air carried to the ground by the downdraft cuts off the inflow of the thunderstorm, the updraft disappears and the thunderstorm dissipates. Thunderstorms in an atmosphere with virtually no vertical wind shear weaken as soon as they send out an outflow boundary in all directions, which quickly cuts off the inflow of relatively warm, moist air and ends the thunderstorm's growth. The downdraft hitting the ground creates an outflow boundary. This can cause downbursts, a potentially hazardous condition for aircraft to fly through, as the substantial change in wind speed and direction reduces the aircraft's airspeed and, consequently, its lift. The stronger the outflow boundary is, the stronger the resultant vertical wind shear becomes.
Classification
There are four main types of thunderstorms: single-cell, multi-cell, squall line (also called multi-cell line) and supercell. Which type forms depends on the instability and relative wind conditions at different layers of the atmosphere ("wind shear"). Single-cell thunderstorms form in environments of low vertical wind shear and last only 20–30 minutes.
Organized thunderstorms and thunderstorm clusters/lines can have longer life cycles as they form in environments of significant vertical wind shear, normally greater than in the lowest of the troposphere, which aids the development of stronger updrafts as well as various forms of severe weather. The supercell is the strongest of the thunderstorms, most commonly associated with large hail, high winds, and tornado formation. Precipitable water values of greater than favor the development of organized thunderstorm complexes. Those with heavy rainfall normally have precipitable water values greater than . Upstream values of CAPE of greater than 800 J/kg are usually required for the development of organized convection.
Single-cell
This term technically applies to a single thunderstorm with one main updraft. Also known as air-mass thunderstorms, these are the typical summer thunderstorms in many temperate locales. They also occur in the cool unstable air that often follows the passage of a cold front from the sea during winter. Within a cluster of thunderstorms, the term "cell" refers to each separate principal updraft. Thunderstorm cells occasionally form in isolation, as the occurrence of one thunderstorm can develop an outflow boundary that sets up new thunderstorm development. Such storms are rarely severe and are a result of local atmospheric instability; hence the term "air mass thunderstorm". When such storms have a brief period of severe weather associated with them, it is known as a pulse severe storm. Pulse severe storms are poorly organized and occur randomly in time and space, making them difficult to forecast. Single-cell thunderstorms normally last 20–30 minutes.
Multi-cell clusters
This is the most common type of thunderstorm development. Mature thunderstorms are found near the center of the cluster, while dissipating thunderstorms exist on their downwind side. Multicell storms form as clusters of storms but may then evolve into one or more squall lines. While each cell of the cluster may only last 20 minutes, the cluster itself may persist for hours at a time. They often arise from convective updrafts in or near mountain ranges and linear weather boundaries, such as strong cold fronts or troughs of low pressure. These types of storms are stronger than the single-cell storm, yet much weaker than the supercell storm. Hazards with the multicell cluster include moderate-sized hail, flash flooding, and weak tornadoes.
Multicell lines
A squall line is an elongated line of severe thunderstorms that can form along or ahead of a cold front. In the early 20th century, the term was used as a synonym for cold front. The squall line contains heavy precipitation, hail, frequent lightning, strong straight line winds, and possibly tornadoes and waterspouts. Severe weather in the form of strong straight-line winds can be expected in areas where the squall line itself is in the shape of a bow echo, within the portion of the line that bows out the most. Tornadoes can be found along waves within a line echo wave pattern, or LEWP, where mesoscale low pressure areas are present. Some bow echoes in the summer are called derechos, and move quite fast through large sections of territory. On the back edge of the rain shield associated with mature squall lines, a wake low can form, which is a mesoscale low pressure area that forms behind the mesoscale high pressure system normally present under the rain canopy, which are sometimes associated with a heat burst. This kind of storm is also known as "Wind of the Stony Lake" (shi2 hu2 feng1) in southern China.
Supercells
Supercell storms are large, usually severe, quasi-steady-state storms that form in an environment where wind speed or wind direction varies with height ("wind shear"), and they have separate downdrafts and updrafts (i.e., where its associated precipitation is not falling through the updraft) with a strong, rotating updraft (a "mesocyclone"). These storms normally have such powerful updrafts that the top of the supercell storm cloud (or anvil) can break through the troposphere and reach into the lower levels of the stratosphere. Supercell storms can be wide. Research has shown that at least 90 percent of supercells cause severe weather. These storms can produce destructive tornadoes, extremely large hailstones ( diameter), straight-line winds in excess of , and flash floods. In fact, research has shown that most tornadoes occur from this type of thunderstorm. Supercells are generally the strongest type of thunderstorm.
Severe thunderstorms
In the United States, a thunderstorm is classed as severe if winds reach at least , hail is in diameter or larger, or if funnel clouds or tornadoes are reported. Although a funnel cloud or tornado indicates a severe thunderstorm, a tornado warning is issued in place of a severe thunderstorm warning. A severe thunderstorm warning is issued if a thunderstorm becomes severe, or will soon turn severe. In Canada, a rainfall rate greater than in one hour, or in three hours, is also used to indicate severe thunderstorms. Severe thunderstorms can occur from any type of storm cell. However, multicell, supercell, and squall lines represent the most common forms of thunderstorms that produce severe weather.
Mesoscale convective systems
A mesoscale convective system (MCS) is a complex of thunderstorms that becomes organized on a scale larger than the individual thunderstorms but smaller than extratropical cyclones, and normally persists for several hours or more. A mesoscale convective system's overall cloud and precipitation pattern may be round or linear in shape, and include weather systems such as tropical cyclones, squall lines, lake-effect snow events, polar lows, and mesoscale convective complexes (MCCs), and they generally form near weather fronts. Most mesoscale convective systems develop overnight and continue their lifespan through the next day. They tend to form when the surface temperature varies by more than between day and night. The type that forms during the warm season over land has been noted across North America, Europe, and Asia, with a maximum in activity noted during the late afternoon and evening hours.
Forms of MCS that develop within the tropics use either the Intertropical Convergence Zone or monsoon troughs as a focus for their development, generally within the warm season between spring and fall. More intense systems form over land than over water. One exception is that of lake-effect snow bands, which form due to cold air moving across relatively warm bodies of water, and occur from fall through spring. Polar lows are a second special class of MCS. They form at high latitudes during the cold season. Once the parent MCS dies, later thunderstorm development can occur in connection with its remnant mesoscale convective vortex (MCV). Mesoscale convective systems are important to the United States rainfall climatology over the Great Plains since they bring the region about half of their annual warm season rainfall.
Motion
The two major ways thunderstorms move are via advection of the wind and propagation along outflow boundaries towards sources of greater heat and moisture. Many thunderstorms move with the mean wind speed through the Earth's troposphere, the lowest of the Earth's atmosphere. Weaker thunderstorms are steered by winds closer to the Earth's surface than stronger thunderstorms, as the weaker thunderstorms are not as tall. Organized, long-lived thunderstorm cells and complexes move at a right angle to the direction of the vertical wind shear vector. If the gust front, or leading edge of the outflow boundary, races ahead of the thunderstorm, its motion will accelerate in tandem. This is more of a factor with thunderstorms with heavy precipitation (HP) than with thunderstorms with low precipitation (LP). When thunderstorms merge, which is most likely when numerous thunderstorms exist in proximity to each other, the motion of the stronger thunderstorm normally dictates the future motion of the merged cell. The stronger the mean wind, the less likely other processes will be involved in storm motion. On weather radar, storms are tracked by using a prominent feature and tracking it from scan to scan.
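The right-angle relationship between organized storm motion and the vertical wind shear vector can be sketched numerically. One common empirical scheme (the Bunkers "internal dynamics" method, a technique not detailed in this article) estimates a right-moving supercell's motion as the mean wind plus a fixed-magnitude deviation perpendicular to the shear vector; all wind values below are hypothetical.

```python
# Illustrative 2D sketch of deviant (right-moving) storm motion:
# mean wind plus a fixed deviation at a right angle to the vertical
# wind-shear vector. Numbers are made up for illustration.
import math

D = 7.5  # empirical deviation magnitude, m/s (Bunkers method value)

def right_mover(mean_u, mean_v, shear_u, shear_v, d=D):
    """Mean wind plus d m/s perpendicular (to the right) of the shear vector."""
    mag = math.hypot(shear_u, shear_v)
    # unit vector rotated 90 degrees clockwise from the shear vector
    perp_u, perp_v = shear_v / mag, -shear_u / mag
    return mean_u + d * perp_u, mean_v + d * perp_v

# Westerly mean wind of 15 m/s, westerly shear vector of 10 m/s:
u, v = right_mover(15.0, 0.0, 10.0, 0.0)
print(round(u, 1), round(v, 1))  # -> 15.0 -7.5 (storm deviates southward, to the right)
```

The deviation term is why a right-moving supercell in westerly flow tracks to the south of the cells around it, even though all are embedded in the same mean wind.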
Back-building thunderstorm
A back-building thunderstorm, commonly referred to as a training thunderstorm, is a thunderstorm in which new development takes place on the upwind side (usually the west or southwest side in the Northern Hemisphere), such that the storm seems to remain stationary or propagate in a backward direction. Though the storm often appears stationary on radar, or even moving upwind, this is an illusion. The storm is really a multi-cell storm with new, more vigorous cells that form on the upwind side, replacing older cells that continue to drift downwind. When this happens, catastrophic flooding is possible. In Rapid City, South Dakota, in 1972, an unusual alignment of winds at various levels of the atmosphere combined to produce a continuously training set of cells that dropped an enormous quantity of rain upon the same area, resulting in devastating flash flooding. A similar event occurred in Boscastle, England, on 16 August 2004, and over Chennai on 1 December 2015.
Hazards
Each year, many people are killed or seriously injured by severe thunderstorms despite the advance warning. While severe thunderstorms are most common in the spring and summer, they can occur at just about any time of the year.
Cloud-to-ground lightning
Cloud-to-ground lightning frequently occurs during thunderstorms and poses numerous hazards to landscapes and populations. One of the more significant hazards lightning can pose is the wildfires it is capable of igniting. Under a regime of low-precipitation (LP) thunderstorms, where little precipitation is present, rainfall cannot prevent fires from starting when vegetation is dry, as lightning produces a concentrated amount of extreme heat. Direct damage caused by lightning strikes occurs on occasion. In areas with a high frequency of cloud-to-ground lightning, like Florida, lightning causes several fatalities per year, most commonly to people working outside.
Acid rain is also a frequent risk produced by lightning. Distilled water has a neutral pH of 7. "Clean" or unpolluted rain has a slightly acidic pH of about 5.2, because carbon dioxide and water in the air react together to form carbonic acid, a weak acid (pH 5.6 in distilled water), but unpolluted rain also contains other chemicals. Nitric oxide, produced during thunderstorms by the oxidation of atmospheric nitrogen, can form compounds with the water molecules in precipitation, creating acid rain. Acid rain can damage infrastructure containing calcite or certain other solid chemical compounds. In ecosystems, acid rain can dissolve plant tissues and accelerate acidification in bodies of water and in soil, resulting in the deaths of marine and terrestrial organisms.
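Because pH is a logarithmic scale (pH = −log₁₀ of the hydrogen-ion concentration in mol/L), small pH differences correspond to large concentration differences. The short arithmetic check below reproduces the figures quoted above; it is not a rain-chemistry model.

```python
# pH arithmetic: pH = -log10([H+]), [H+] in mol/L. Values below simply
# reproduce the pH figures quoted in the text (7, 5.6, 5.2).
import math

def ph_from_conc(h_conc_mol_per_l):
    return -math.log10(h_conc_mol_per_l)

def conc_from_ph(ph):
    return 10 ** (-ph)

print(ph_from_conc(1e-7))   # neutral distilled water: pH 7
print(conc_from_ph(5.6))    # carbonic-acid-saturated water: ~2.5e-6 mol/L
# Each pH unit is a tenfold concentration change, so pH 5.2 rain carries
# about 10**(7 - 5.2) times the H+ of neutral water:
print(round(conc_from_ph(5.2) / conc_from_ph(7.0)))  # -> 63
```

This is why even "clean" rain at pH 5.2 is tens of times more acidic than neutral water, before any nitric-oxide contribution from lightning.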
Hail
Any thunderstorm that produces hail that reaches the ground is known as a hailstorm. Thunderclouds that are capable of producing hailstones often exhibit a green coloration. Hail is more common along mountain ranges because mountains force horizontal winds upwards (known as orographic lifting), thereby intensifying the updrafts within thunderstorms and making hail more likely. One of the more common regions for large hail is across mountainous northern India, which reported one of the highest hail-related death tolls on record in 1888. China also experiences significant hailstorms. Across Europe, Croatia experiences frequent occurrences of hail.
In North America, hail is most common in the area where Colorado, Nebraska, and Wyoming meet, known as "Hail Alley". Hail in this region occurs between the months of March and October during the afternoon and evening hours, with the bulk of the occurrences from May through September. Cheyenne, Wyoming, is North America's most hail-prone city with an average of nine to ten hailstorms per season. In South America, areas prone to hail are cities like Bogotá, Colombia.
Hail can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and most commonly, farmers' crops. Hail is one of the most significant thunderstorm hazards to aircraft. When hailstones exceed in diameter, planes can be seriously damaged within seconds. The hailstones accumulating on the ground can also be hazardous to landing aircraft. Wheat, corn, soybeans, and tobacco are the most sensitive crops to hail damage. Hail is one of Canada's most costly hazards. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest recorded incidents occurred around the 9th century in Roopkund, Uttarakhand, India. The largest hailstone in terms of maximum circumference and length ever recorded in the United States fell in 2003 in Aurora, Nebraska.
Tornadoes and waterspouts
A tornado is a violent, rotating column of air in contact with both the surface of the earth and a cumulonimbus cloud (otherwise known as a thundercloud) or, in rare cases, the base of a cumulus cloud. Tornadoes come in many sizes but are typically in the form of a visible condensation funnel, whose narrow end touches the earth and is often encircled by a cloud of debris and dust. Most tornadoes have wind speeds between , are approximately across, and travel several kilometers (a few miles) before dissipating. Some attain wind speeds of more than , stretch more than across, and stay on the ground for more than 100 kilometres (dozens of miles).
The Fujita scale and the Enhanced Fujita Scale rate tornadoes by damage caused. An EF0 tornado, the weakest category, damages trees but does not cause significant damage to structures. An EF5 tornado, the strongest category, rips buildings off their foundations and can deform large skyscrapers. The similar TORRO scale ranges from a T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes. Doppler radar data, photogrammetry, and ground swirl patterns (cycloidal marks) may also be analyzed to determine intensity and award a rating.
Waterspouts have similar characteristics as tornadoes, characterized by a spiraling funnel-shaped wind current that form over bodies of water, connecting to large cumulonimbus clouds. Waterspouts are generally classified as forms of tornadoes, or more specifically, non-supercelled tornadoes that develop over large bodies of water. These spiralling columns of air frequently develop within tropical areas close to the equator, but are less common within areas of high latitude.
Flash flood
Flash flooding is the process where a landscape, most notably an urban environment, is subjected to rapid floods. These rapid floods occur more quickly and are more localized than seasonal river flooding or areal flooding and are frequently (though not always) associated with intense rainfall. Flash flooding can frequently occur in slow-moving thunderstorms and is usually caused by the heavy liquid precipitation that accompanies it. Flash floods are most common in arid regions as well as densely populated urban environments, where few plants, and bodies of water are present to absorb and contain the extra water. Flash flooding can be hazardous to small infrastructure, such as bridges, and weakly constructed buildings. Plants and crops in agricultural areas can be destroyed and devastated by the force of raging water. Automobiles parked within affected areas can also be displaced. Soil erosion can occur as well, exposing risks of landslide phenomena.
Downburst
Downburst winds can produce numerous hazards to landscapes experiencing thunderstorms. Downburst winds are generally very powerful, and are often mistaken for wind speeds produced by tornadoes, due to the concentrated amount of force exerted by their straight-horizontal characteristic. Downburst winds can be hazardous to unstable, incomplete, or weakly constructed infrastructures and buildings. Agricultural crops, and other plants in nearby environments can be uprooted and damaged. Aircraft engaged in takeoff or landing can crash. Automobiles can be displaced by the force exerted by downburst winds. Downburst winds are usually formed in areas when high pressure air systems of downdrafts begin to sink and displace the air masses below it, due to their higher density. When these downdrafts reach the surface, they spread out and turn into the destructive straight-horizontal winds.
Thunderstorm asthma
Thunderstorm asthma is the triggering of an asthma attack by environmental conditions directly caused by a local thunderstorm. During a thunderstorm, pollen grains can absorb moisture and then burst into much smaller fragments with these fragments being easily dispersed by wind. While larger pollen grains are usually filtered by hairs in the nose, the smaller pollen fragments are able to pass through and enter the lungs, triggering the asthma attack.
Safety precautions
Most thunderstorms come and go fairly uneventfully; however, any thunderstorm can become severe, and all thunderstorms, by definition, present the danger of lightning. Thunderstorm preparedness and safety refers to taking steps before, during, and after a thunderstorm to minimize injury and damage.
Preparedness
Preparedness refers to precautions that should be taken before a thunderstorm. Some preparedness takes the form of general readiness (as a thunderstorm can occur at any time of the day or year). Preparing a family emergency plan, for example, can save valuable time if a storm arises quickly and unexpectedly. Preparing the home by removing dead or rotting limbs and trees, which can be blown over in high winds, can also significantly reduce the risk of property damage and personal injury.
The National Weather Service (NWS) in the United States recommends several precautions that people should take if thunderstorms are likely to occur:
Know the names of local counties, cities, and towns, as these are how warnings are described.
Monitor forecasts and weather conditions and know whether thunderstorms are likely in the area.
Be alert for natural signs of an approaching storm.
Cancel or reschedule outdoor events (to avoid being caught outdoors when a storm hits).
Take action early so you have time to get to a safe place.
Get inside a substantial building or hard-topped metal vehicle before threatening weather arrives.
If you hear thunder, get to the safe place immediately.
Avoid open areas like hilltops, fields, and beaches, and do not be or be near the tallest objects in an area when thunderstorms are occurring.
Do not shelter under tall or isolated trees during thunderstorms.
If in the woods, put as much distance as possible between you and any trees during thunderstorms.
If in a group, spread out to increase the chances of survivors who could come to the aid of any victims from a lightning strike.
Safety
While safety and preparedness often overlap, "thunderstorm safety" generally refers to what people should do during and after a storm. The American Red Cross recommends that people follow these precautions if a storm is imminent or in progress:
Take action immediately upon hearing thunder. Anyone close enough to the storm to hear thunder can be struck by lightning.
Avoid electrical appliances, including corded telephones. Cordless and wireless telephones are safe to use during a thunderstorm.
Close and stay away from windows and doors, as glass can become a serious hazard in high wind.
Do not bathe or shower, as plumbing conducts electricity.
If driving, safely exit the roadway, turn on hazard lights, and park. Remain in the vehicle and avoid touching metal.
The NWS stopped recommending the "lightning crouch" in 2008 as it does not provide a significant level of protection and will not significantly lower the risk of being killed or injured from a nearby lightning strike.
Frequent occurrences
Thunderstorms occur throughout the world, even in the polar regions, with the greatest frequency in tropical rainforest areas, where they may occur nearly daily. At any given time, approximately 2,000 thunderstorms are occurring on Earth. Kampala and Tororo in Uganda have each been mentioned as the most thunderous places on Earth, a claim also made for Singapore and Bogor on the Indonesian island of Java. Other cities known for frequent storm activity include Darwin, Caracas, Manila and Mumbai. Thunderstorms are associated with the various monsoon seasons around the globe, and they populate the rainbands of tropical cyclones. In temperate regions, they are most frequent in spring and summer, although they can occur along or ahead of cold fronts at any time of year. They may also occur within a cooler air mass following the passage of a cold front over a relatively warmer body of water. Thunderstorms are rare in polar regions because of cold surface temperatures.
Some of the most powerful thunderstorms over the United States occur in the Midwest and the Southern states. These storms can produce large hail and powerful tornadoes. Thunderstorms are relatively uncommon along much of the West Coast of the United States, but they occur with greater frequency in the inland areas, particularly the Sacramento and San Joaquin Valleys of California. In spring and summer, they occur nearly daily in certain areas of the Rocky Mountains as part of the North American Monsoon regime. In the Northeast, storms take on similar characteristics and patterns as the Midwest, but with less frequency and severity. During the summer, air-mass thunderstorms are an almost daily occurrence over central and southern parts of Florida.
Energy
If the quantity of water that is condensed in and subsequently precipitated from a cloud is known, then the total energy of a thunderstorm can be calculated. In a typical thunderstorm, approximately 5×108 kg of water vapor are lifted, and the amount of energy released when this condenses is 1015 joules. This is on the same order of magnitude of energy released within a tropical cyclone, and more energy than that released during the atomic bomb blast at Hiroshima, Japan in 1945.
The Fermi Gamma-ray Burst Monitor results show that gamma rays and antimatter particles (positrons) can be generated in powerful thunderstorms. It is suggested that the antimatter positrons are formed in terrestrial gamma-ray flashes (TGF). TGFs are brief bursts occurring inside thunderstorms and associated with lightning. The streams of positrons and electrons collide higher in the atmosphere to generate more gamma rays. About 500 TGFs may occur every day worldwide, but mostly go undetected.
Studies
In more contemporary times, thunderstorms have taken on the role of a scientific curiosity. Every spring, storm chasers head to the Great Plains of the United States and the Canadian Prairies to explore the scientific aspects of storms and tornadoes through use of videotaping. Radio pulses produced by cosmic rays are being used to study how electric charges develop within thunderstorms. More organized meteorological projects such as VORTEX2 use an array of sensors, such as the Doppler on Wheels, vehicles with mounted automated weather stations, weather balloons, and unmanned aircraft to investigate thunderstorms expected to produce severe weather. Lightning is detected remotely using sensors that detect cloud-to-ground lightning strokes with 95 percent accuracy in detection and within of their point of origin.
Mythology and religion
Thunderstorms strongly influenced many early civilizations. Greeks believed that they were battles waged by Zeus, who hurled lightning bolts forged by Hephaestus. Some American Indian tribes associated thunderstorms with the Thunderbird, who they believed was a servant of the Great Spirit. The Norse considered thunderstorms to occur when Thor went to fight Jötnar, with the thunder and lightning being the effect of his strikes with the hammer Mjölnir. Hinduism recognizes Indra as the god of rain and thunderstorms. Christian doctrine accepts that fierce storms are the work of God. These ideas were still within the mainstream as late as the 18th century.
Martin Luther was out walking when a thunderstorm began, causing him to pray to God for being saved and promising to become a monk.
Outside of Earth
Thunderstorms, evidenced by flashes of lightning, on Jupiter have been detected and are associated with clouds where water may exist as both a liquid and ice, suggesting a mechanism similar to that on Earth. (Water is a polar molecule that can carry a charge, so it is capable of creating the charge separation needed to produce lightning). These electrical discharges can be up to a thousand times more powerful than lightning on the Earth. The water clouds can form thunderstorms driven by the heat rising from the interior. The clouds of Venus may also be capable of producing lightning; some observations suggest that the lightning rate is at least half of that on Earth.
| Physical sciences | Storms | null |
70866 | https://en.wikipedia.org/wiki/Bombyx%20mori | Bombyx mori | Bombyx mori, commonly known as the domestic silk moth, is a moth species belonging to the family Bombycidae. It is the closest relative of Bombyx mandarina, the wild silk moth. Silkworms are the larvae of silk moths. The silkworm is of particular economic value, being a primary producer of silk. The silkworm's preferred food are the leaves of white mulberry, though they may eat other species of mulberry, and even leaves of other plants like the Osage orange. Domestic silk moths are entirely dependent on humans for reproduction, as a result of millennia of selective breeding. Wild silk moths, which are other species of Bombyx, are not as commercially viable in the production of silk.
Sericulture, the practice of breeding silkworms for the production of raw silk, has existed for at least 5,000 years in China, whence it spread to India, Korea, Nepal, Japan, and then the West. The conventional process of sericulture kills the silkworm in the pupal stage. The domestic silk moth was domesticated from the wild silk moth Bombyx mandarina, which has a range from northern India to northern China, Korea, Japan, and the far eastern regions of Russia. The domestic silk moth derives from Chinese rather than Japanese or Korean stock.
Silk moths were unlikely to have been domestically bred before the Neolithic period. Before then, the tools to manufacture quantities of silk thread had not been developed. The domesticated Bombyx mori and the wild Bombyx mandarina can still breed and sometimes produce hybrids. It is unknown if B. mori can hybridize with other Bombyx species. Compared to most members in the genus Bombyx, domestic silk moths have lost their coloration as well as their ability to fly.
Types
Mulberry silkworms can be divided into three major categories based on seasonal brood frequency. Univoltine silkworms produce only one brood a season, and they are generally found in and around Europe. Univoltine eggs must hibernate through the winter, ultimately cross-fertilizing in spring. Bivoltine varieties are normally found in East Asia, and their accelerated breeding process is made possible by slightly warmer climates. In addition, there are polyvoltine silkworms found only in the tropics. Their eggs typically hatch within 9 to 12 days, meaning there can be up to eight generations of larvae throughout the year.
Description and life cycle
Larvae
Eggs take about 14 days to hatch into larvae, which eat continuously. They have a preference for white mulberry, having an attraction to the mulberry odorant cis-jasmone. They are not monophagous, since they can eat other species of Morus, as well as some other Moraceae, mostly Osage orange. They are covered with tiny black hairs. When the color of their heads turns darker, it indicates they are about to molt. After molting, the larval phase of the silkworms emerge white, naked, and with little horns on their backs.
Pupae (cocoon)
After they have molted four times, their bodies become slightly yellow, and the skin becomes tighter. The larvae then prepare to enter the pupal phase of their life cycle, and enclose themselves in a cocoon made up of raw silk produced by the salivary glands. The final molt from larva to pupa takes place within the cocoon, which provides a layer of protection during the vulnerable, almost motionless pupal state. Many other Lepidoptera produce cocoons, but only a few — the Bombycidae, in particular the genus Bombyx, and the Saturniidae, in particular the genus Antheraea — have been exploited for fabric production.
The cocoon is made of a thread of raw silk from long. The fibers are fine and lustrous, about in diameter. About 2,000 to 3,000 cocoons are required to make . At least of raw silk are produced each year, requiring nearly 10 billion cocoons.
If the animal survives through the pupal phase of its life cycle, it releases proteolytic enzymes to make a hole in the cocoon so it can emerge as an adult moth. These enzymes are destructive to the silk and can cause the silk fibers to break down from over a mile in length to segments of random length, which reduces the value of the silk threads, although these damaged silk cocoons are still used as "stuffing" available in China and elsewhere in the production of duvets, jackets, and other purposes. To prevent this, silkworm cocoons are boiled in water. The heat kills the silkworms, and the water makes the cocoons easier to unravel. Often, the silkworm is eaten.
As the process of harvesting the silk from the cocoon kills the pupa, sericulture has been criticized by animal welfare and rights activists. Mahatma Gandhi was critical of silk production based on the ahimsa philosophy "not to hurt any living thing". This led to Gandhi's promotion of cotton spinning machines, an example of which can be seen at the Gandhi Institute, and an extension of this principle has led to the modern production practice known as Ahimsa silk, which is wild silk (from wild and semiwild silk moths) made from the cocoons of moths that are allowed to emerge before the silk is harvested.
Moth
The moth is the adult phase of the silk worm's life cycle. Silk moths have a wingspan of and a white, hairy body. Females are about two to three times bulkier than males (due to carrying many eggs). All adult Bombycidae moths have reduced mouthparts and do not feed.
The wings of the silk moth develop from larval imaginal disks. The moth is not capable of functional flight, in contrast to the wild B. mandarina and other Bombyx species, whose males fly to meet females. Some may emerge with the ability to lift off and stay airborne, but sustained flight cannot be achieved as their bodies are too big and heavy for their small wings.
The legs of the silk moth develop from the silkworm's larval (thoracic) legs. Developmental genes like Distalless and extradenticle have been used to mark leg development. In addition, removing specific segments of the thoracic legs at different ages of the larva resulted in the adult silk moth not developing the corresponding adult leg segments.
Research
Due to its small size and ease of culture, the silkworm has become a model organism in the study of lepidopteran and general arthropod biology. Fundamental findings on genetics, pheromones, hormones, brain structures, and physiology have been made with the silkworm. One example of this was the molecular identification of the first known pheromone, bombykol, which required extracts from 500,000 individuals, due to the small quantities of pheromone produced by any individual silkworm.
Many research works have focused on the genetics of silkworms and the possibility of genetic engineering. Many hundreds of strains are maintained, and over 400 Mendelian mutations have been described. Another source suggests 1,000 inbred domesticated strains are kept worldwide. One useful development for the silk industry is silkworms that can feed on food other than mulberry leaves, including an artificial diet. Research on the genome also raises the possibility of genetically engineering silkworms to produce proteins, including pharmacological drugs, in the place of silk proteins. Bombyx mori females are also one of the few organisms with homologous chromosomes held together only by the synaptonemal complex (and not crossovers) during meiosis. In the oocytes of B. mori, meiosis is completely achiasmate (lacking crossovers). Even though synaptonemal complexes are formed during the pachytene stage of meiosis in B. mori, crossing-over homologous recombination does not occur between the paired chromosomes.
Kraig Biocraft Laboratories
has used research from the Universities of Wyoming and Notre Dame in a collaborative effort to create a silkworm that is genetically altered to produce spider silk. In September 2010, the effort was announced as successful.
Researchers at Tufts developed scaffolds made of spongy silk that feel and look similar to human tissue. They are implanted during reconstructive surgery to support or restructure damaged ligaments, tendons, and other tissue. They also created implants made of silk and drug compounds which can be implanted under the skin for steady and gradual time release of medications.
Researchers at the MIT Media Lab experimented with silkworms to see what they would weave when left on surfaces with different curvatures. They found that on particularly straight webs of lines, the silkworms would connect neighboring lines with silk, weaving directly onto the given shape. Using this knowledge they built a silk pavilion with 6,500 silkworms over a number of days.
Silkworms have been used in antibiotic discovery, as they have several advantageous traits compared to other invertebrate models. Antibiotics such as lysocin E, a non-ribosomal peptide synthesized by Lysobacter sp. RH2180-5 and GPI0363 are among the notable antibiotics discovered using silkworms.
In addition, antibiotics with appropriate pharmacokinetic parameters were selected that correlated with therapeutic activity in the silkworm infection model.
Silkworms have also been used for the identification of novel virulence factors of pathogenic microorganisms. A first large-scale screening using transposon mutant library of Staphylococcus aureus USA300 strain was performed which identified 8 new genes with roles in full virulence of S. aureus. Another study by the same team of researchers revealed, for the first time, the role of YjbH in virulence and oxidative stress tolerance in vivo.
Domestication
The domestic species B. mori, compared to the wild species (e.g., B. mandarina), has increased cocoon size, body size, growth rate, and efficiency of its digestion. It has gained tolerance to human presence and handling, and also to living in crowded conditions. The domestic silk moths cannot fly, so the males need human assistance in finding a mate, and it lacks fear of potential predators. The native color pigments have also been lost, so the domestic silk moths are leucistic, since camouflage is not useful when they only live in captivity. These changes have made B. mori entirely dependent upon humans for survival, and it does not exist in the wild. The eggs are kept in incubators to aid in their hatching.
Breeding
Silkworms were first domesticated in China more than 5,000 years ago.
Silkworm breeding is aimed at the overall improvement of silkworms from a commercial point of view. The major objectives are improving fecundity, the health of larvae, quantity of cocoon and silk production, and disease resistance. Healthy larvae lead to a healthy cocoon crop. Health is dependent on factors such as better pupation rate, fewer dead larvae in the mountage, shorter larval duration (this lessens the chance of infection) and bluish-tinged fifth-instar larvae (which are healthier than the reddish-brown ones). Quantity of cocoon and silk produced are directly related to the pupation rate and larval weight. Healthier larvae have greater pupation rates and cocoon weights. Quality of cocoon and silk depends on a number of factors, including genetics.
Hobby raising and school projects
In the U.S., teachers may sometimes introduce the insect life cycle to their students by raising domestic silk moths in the classroom as a science project. Students have a chance to observe complete life cycles of insects from eggs to larvae to pupae to moths.
The domestic silk moth has been raised as a hobby in countries such as China, South Africa, Zimbabwe, and Iran. Children often pass on the eggs to the next generation, creating a non-commercial population. The experience provides children with the opportunity to witness the life cycle of silk moths.
Genome
The full genome of the domestic silk moth was published in 2008 by the International Silkworm Genome Consortium. Draft sequences were published in 2004.
The genome of the domestic silk moth is mid-range with a genome size around 432 million base pairs. A notable feature is that 43.6% of the genome are repetitive sequences, most of which are transposable elements. At least 3,000 silkworm genes are unique, and have no homologous equivalents in other genomes. The silkworm's ability to produce large amounts of silk correlates with the presence of specific tRNA clusters, as well as some clustered sericin genes. Additionally, the silkworm's ability to consume toxic mulberry leaves is linked to specialized sucrase genes, which appear to have been acquired from bacterial genes.
In 2018, Illumina's short reads for 137 strain genomes were published. In 2022, Nanopore's long reads for 545 strain genomes were published.
As food
Silk moth pupae are edible insects and are eaten in some cultures:
In Assam, India, they are boiled for extracting silk and the boiled pupae are eaten directly with salt or fried with chili pepper or herbs as a snack or dish. Live pupae may be eaten raw, boiled or fried.
In Korea, they are boiled and seasoned to make a popular snack food known as beondegi (번데기).
In China, street vendors sell roasted silk moth pupae.
In Japan, silkworms are usually served as a tsukudani (佃煮), i.e., boiled in a sweet-sour sauce made with soy sauce and sugar.
In Vietnam, this is known as , usually boiled, seasoned with fish sauce, then stir-fried and eaten as main dish with rice.
In Thailand, roasted silkworm is often sold at open markets. They are also sold as packaged snacks.
Silkworms have also been proposed for cultivation by astronauts as space food on long-term missions.
In culture
China
In China, a legend indicates the discovery of the silkworm's silk was by an ancient empress named Leizu, the wife of the Yellow Emperor, also known as Xi Lingshi. She was drinking tea under a tree when a silk cocoon fell into her tea. As she picked it out and started to wrap the silk thread around her finger, she slowly felt a warm sensation. When the silk ran out, she saw a small larva. In an instant, she realized this caterpillar larva was the source of the silk. She taught this to the people and it became widespread. Many more legends about the silkworm are told.
The Chinese guarded their knowledge of silk, but, according to one story, a Chinese princess given in marriage to a Khotan prince brought to the oasis the secret of silk manufacture, "hiding silkworms in her hair as part of her dowry", probably in the first half of the first century AD. About AD 550, Christian monks are said to have smuggled silkworms hidden in a hollow stick out of China, selling the secret to the eastern Romans.
Vietnam
According to a Vietnamese folk tale, silkworms were originally a beautiful housemaid running away from her gruesome masters and living in the mountain, where she was protected by the mountain god. One day, a lecherous god from the heaven came down to Earth to seduce women. When he saw her, he tried to rape her but she was able to escape and was hidden by the mountain god. The lecherous god then tried to find and capture her by setting a net trap around the mountain. With the blessing of Guanyin, the girl was able to safely swallow that net into her stomach. Finally, the evil god summons his fellow thunder and rain gods to attack and burn away her clothes, forcing her to hide in a cave. Naked and cold, she spit out the net and used it as a blanket to sleep. The girl died in her sleep, and as she wished to continue to help other people, her soul turned into silkworms.
Feeding
Bombyx mori is essentially monophagous, exclusively eating mulberry leaves (Morus spp.). By developing techniques for using artificial diets, the amino acids needed for development are known. The various amino acids can be classified into five categories:
Those which, when removed, cause larval development to stop entirely: lysine, leucine, isoleucine, histidine, arginine, valine, tryptophan, threonine, phenylalanine, methionine
Those which, when removed, impede later stages of larval development: glutamate and aspartate
Semi-essential amino acids, with negative effects that can be eliminated by supplementing with other amino acids: proline (ornithine can be substituted)
Non-essential amino acids that can by replaced through biosynthesis by the larvae: alanine, glycine, serine
Non-essential amino acids that can be removed with no effect at all: tyrosine
Diseases
Beauveria bassiana, a fungus, destroys the entire silkworm body. This fungus usually appears when silkworms are raised under cold conditions with high humidity. This disease is not passed on to the eggs from moths, as the infected silkworms cannot survive to the moth stage. This fungus, however, can spread to other insects.
Grasserie, also known as nuclear polyhedrosis, milky disease, or hanging disease, is caused by infection with the Bombyx mori nucleopolyhedrovirus (aka Bombyx mori nuclear polyhedrosis virus, genus Alphabaculovirus). If grasserie is observed in the chawkie stage, then the chawkie larvae must have been infected while hatching or during chawkie rearing. Infected eggs can be disinfected by cleaning their surfaces prior to hatching. Infections can occur as a result of improper hygiene in the chawkie rearing house. This disease develops faster in early instar rearing.
Pébrine is a disease caused by a parasitic microsporidian, Nosema bombycis. Diseased larvae show slow growth, undersized, pale and flaccid bodies, and poor appetite. Tiny black spots appear on larval integument. Additionally, dead larvae remain rubbery and do not undergo putrefaction after death. N. bombycis kills 100% of silkworms hatched from infected eggs. This disease can be carried over from worms to moths, then to eggs and worms again. This microsporidium comes from the food that the silkworms eat. Female moths pass the disease to the eggs, and 100% of silkworms hatching from the diseased eggs die in their worm stage. To prevent this disease, eggs from infected moths are ruled out by checking the moth's body fluid under a microscope.
Flacherie infected silkworms look weak and are colored dark brown before they die. The disease destroys the larva's gut and is caused by viruses or poisonous food.
Several diseases caused by a variety of funguses are collectively named Muscardine.
| Biology and health sciences | Lepidoptera | null |
70868 | https://en.wikipedia.org/wiki/Flamethrower | Flamethrower | A flamethrower is a ranged incendiary device designed to project a controllable jet of fire. First deployed by the Byzantine Empire in the 7th century AD, flamethrowers saw use in modern times during World War I, and more widely in World War II as a tactical weapon against fortifications.
Most military flamethrowers use liquid fuel, typically either heated oil or diesel, but commercial flamethrowers are generally blowtorches using gaseous fuels such as propane. Gases are safer in peacetime applications because their flames have less mass flow rate and dissipate faster, and often are easier to extinguish.
Apart from the military applications, flamethrowers have peacetime applications where there is a need for controlled burning, such as in sugarcane harvesting and other land-management tasks. Various forms are designed for an operator to carry, while others are mounted on vehicles.
Military use
Modern flamethrowers were first used during the trench warfare conditions of World War I and their use greatly increased in World War II. They can be vehicle-mounted, as on a tank, or man-portable.
The man-portable flamethrower consists of two elements—the backpack and the gun. The backpack element usually consists of two or three cylinders. In a two-cylinder system, one cylinder holds compressed, inert propellant gas (usually nitrogen), and the other holds flammable liquid, typically some form of petrochemical. A three-cylinder system often has two outer cylinders of flammable liquid and a central cylinder of propellant gas to maintain the balance of the soldier carrying it. The gas propels the liquid fuel out of the cylinder through a flexible pipe and then into the gun element of the flamethrower system. The gun consists of a small reservoir, a spring-loaded valve, and an ignition system; depressing a trigger opens the valve, allowing pressurized flammable liquid to flow and pass over the igniter and out of the gun nozzle. The igniter can be one of several ignition systems: A simple type is an electrically heated wire coil; another used a small pilot flame, fueled with pressurized gas from the system.
Flamethrowers were primarily used against battlefield fortifications, bunkers, and other protected emplacements. A flamethrower projects a stream of flammable liquid, rather than flame, which allows bouncing the stream off walls and ceilings to project the fire into unseen spaces, such as inside bunkers or pillboxes. Typically, popular visual media depict the flamethrower as short-ranged and only effective for a few metres (due to the common use of propane gas as the fuel in flamethrowers in movies, for the safety of the actors). Contemporary flamethrowers can incinerate a target some from the operator; moreover, an unignited stream of flammable liquid can be fired and afterwards ignited, possibly by a lamp or other flame inside the bunker.
Flamethrowers pose many risks to the operator. The first disadvantage is the weapon's weight and length, which impair the soldier's mobility. The weapon is limited to only a few seconds of burn time, since it uses fuel very quickly, requiring the operator to be precise and conservative. Flamethrowers using a fougasse-style explosive propellant system also have a limited number of shots. The weapon is very visible on the battlefield, which immediately singles operators out as prominent targets, especially for snipers and designated marksmen. Flamethrower operators were rarely taken prisoner, especially when their target survived an attack by the weapon; captured flamethrower users were in some cases summarily executed.
The flamethrower's effective range is short in comparison with that of other battlefield weapons of similar size. To be effective, flamethrower soldiers must approach their target, risking exposure to enemy fire. Vehicular flamethrowers also have this problem; they may have considerably greater range than a man-portable flamethrower, but their range is still short compared with that of other infantry weapons.
The risk of a flamethrower operator being caught in the explosion of their weapon due to enemy hits on the tanks is exaggerated in films. In some cases, the pressure tanks have exploded and killed the operator when hit by bullets or grenade shrapnel. In the documentary Vietnam in HD, platoon sergeant Charles Brown tells of how one of his men was killed when his flamethrower was hit by grenade shrapnel during the battle for Hill 875.
The pressurizer is filled with a non-flammable gas under high pressure. If this tank ruptures, it might knock the operator forward as the gas is expended, much as a pressurized aerosol can bursts outward when punctured. The fuel mixture in the containers is difficult to light, which is why magnesium-filled igniters are required when the weapon is fired. When pierced by a bullet, a metal can filled with diesel or napalm will merely leak, unless the round is an incendiary type that may ignite the mixture inside.
The best way to minimize the disadvantages of flame weapons was to mount them on armoured vehicles. The Commonwealth and the United States were the most prolific users of vehicle-mounted flame weapons; the British and Canadians fielded "Wasps" (Universal Carriers fitted with flamethrowers) at infantry battalion level beginning in mid-1944. Early tank-mounted flamethrower vehicles included the "Badger" (a converted Ram tank) and the "Oke", used first at Dieppe.
Operation
A propane-operated flamethrower is a straightforward device. The gas is expelled through the gun assembly by its own pressure and is ignited at the exit of the barrel through piezo ignition.
Liquid-operated flamethrowers use a smaller tank with a pressurized gas to expel the flammable liquid fuel. The propellant gas is fed to two tubes. The first opens in the fuel tanks providing the pressure necessary for expelling the liquid. The other tube leads to an ignition chamber behind the exit of the gun assembly, where it is mixed with air and ignited through piezo ignition. This pre-ignition line is the source of the flame seen in front of the gun assembly in movies and documentaries. As the fuel passes through the flame, it is ignited and propelled towards the target.
History
Ancient Greece
The concept of projecting fire as a weapon has existed since ancient times. During the Peloponnesian War, the Boeotians used a kind of flamethrower in an attempt to destroy the fortification walls of the Athenians during the Battle of Delium.
Roman Empire
In 107 AD the Romans used a flamethrower against the Dacians; the device was similar to the one used at Delium.
Later, during the Byzantine era, sailors used rudimentary hand-pumped flamethrowers on board their naval ships. Greek fire, extensively used by the Byzantine Empire, is said to have been invented by Kallinikos of Heliopolis, probably about 673 AD. Byzantine texts described weapons used by Byzantine land forces which shot Greek fire and were called cheirosiphona (χειροσίφωνα, meaning hand-held siphons; singular χειροσίφωνο). The flamethrower found its origins in a device consisting of a hand-held pump that shot bursts of Greek fire via a siphon-hose and a piston, igniting it with a match, similar to modern versions, as it was ejected. An illustration in the Poliorcetica of Hero of Byzantium displays a soldier with a portable flamethrower. The Byzantines also used ceramic hand grenades filled with Greek fire.
Greek fire, used primarily at sea, gave the Byzantines a substantial military advantage against enemies such as members of the Arab Empire (who later adopted the use of Greek fire). An 11th-century illustration of its use survives in the John Skylitzes manuscript.
China
The Pen Huo Qi ("fire spraying device") was a Chinese piston flamethrower that used a substance similar to petrol or naphtha, invented around 919 AD during the Five Dynasties and Ten Kingdoms period. The earliest Chinese reference to Greek fire was made in 917, written by Wu Renchen in his Spring and Autumn Annals of the Ten Kingdoms. In 919, the siphon projector-pump was used to spread the 'fierce fire oil' that could not be doused with water, as recorded by Lin Yu (林禹) in his Wu-Yue Beishi (吳越備史); this is the first credible Chinese reference to a flamethrower employing the chemical solution of Greek fire. Lin Yu also mentioned that the 'fierce fire oil' derived ultimately from China's contacts in the 'southern seas', with Arabia (大食國 Dashiguo). In the Battle of Langshan Jiang (Wolf Mountain River) in 919, the naval fleet of the Wenmu King of Wuyue defeated the fleet of the Kingdom of Wu through the use of 'fire oil' to burn the enemy fleet; this signified the first Chinese use of gunpowder in warfare, since a slow-burning match fuse was required to ignite the flames. The Chinese applied the use of double-piston bellows to pump petrol out of a single cylinder (with an upstroke and a downstroke), lit at the end by a slow-burning gunpowder match to fire a continuous stream of flame (as described in the Wujing Zongyao manuscript of 1044). In the suppression of the Southern Tang state, completed by 976 AD, early Song naval forces confronted Southern Tang forces on the Yangtze River in 975. The Southern Tang forces attempted to use flamethrowers against the Song navy, but were accidentally consumed by their own fire when violent winds swept in their direction. Illustrations and descriptions of mobile flamethrowers on four-wheel push carts also appear in later Chinese publications, notably the Wujing Zongyao, written in 1044 (its illustration redrawn in 1601 as well).
Advances in military technology aided the Song dynasty in its defense against hostile neighbours to the north, including the Mongols.
Islamic World
Abū ʿAbdallāh al-Khwārazmī, in Mafātīḥ al-ʿUlūm ("Keys to the Sciences") of ca. 976 AD, mentions the bāb al-midfa and the bāb al-mustaq, which he said were parts of naphtha-throwers and projectors (al-naffātāt wa al-zarāqāt). The Book of Ingenious Mechanical Devices (Kitāb fī ma 'rifat al-ḥiyal al-handasiyya) of 1206 by Ibn al-Razzaz al-Jazari mentions ejectors of naphtha (zarāqāt al-naft).
Vietnam
The book Hồ trường khu cơ by Dao Duy Tu describes a flamethrowing weapon called the fire tiger: "The fire-thrower is also called fire-tiger, has a large bulb about one meter long, when in battle it spits fire, the tube spits out pine resin, if it hits something, it immediately catches fire... Because the fire burns fiercely, it is called fire-tiger." These weapons were later used by the Tay Son Army.
18th century
In 1702, the Prussian Army tested P. Lange's "serpent-fire-spray" (Schlangen-Brand-Spritze), which produced a jet of fire wide and long; two years later it was rejected as useless.
Peter the Great's chief engineer Vasily Dmitrievich Korchmin designed various incendiary weapons, such as incendiary rockets and furnaces for heating cannonballs; two Russian ships, the "Svyatoy Yakov" and "Landsou", were armed with flamethrower tubes designed by him. He also developed instructions for their use together with the Tsar.
In the 1750s a French engineer named Dupre developed a new flammable mixture; it was tested in Le Havre and set fire to a sloop. During the British shelling of Le Havre in 1759, the French War Minister tried to obtain authorization to use this fuel.
19th century
Although flamethrowers were never used in the American Civil War, "Greek Fire" shells were produced and used by Union troops during the Second Battle of Charleston Harbor.
During the 1871 siege of Paris, French chemist Marcellin Berthelot suggested pumping flaming petroleum at Prussian troops.
In 1898 Russian captain Sigern-Korn experimented with burning jets of kerosene for defensive use; in theory they would be fired from the parapets of fortifications. The idea was abandoned due to technical issues.
Early 20th century
During the siege of Port Arthur, Japanese combat engineers used hand pumps to spray kerosene into Russian trenches. Once the Russians were covered with the flammable liquid, the Japanese would throw bundles of burning rags at them.
Before WW1, German pioneers used the Brandröhre M.95, a weapon consisting of a sheet metal tube ( wide and long) filled with an incendiary mixture, and a friction igniter activated by a lanyard. The Brandröhre was designed to be used against enemy casemates; a long pole was used to reach the target and the lanyard was pulled to ignite the fuel, producing a long stream of fire. These weapons were deployed in six-man teams and were limited by their short range. In theory the Brandröhre was replaced by the flamethrower in 1909, but it was still in use in WW1; it was used during the assaults on Fort du Camp-des-Romains in 1914 and Fort Vaux in 1916.
Bernhard Reddemann, a German military officer and former fireman, converted steam-powered fire engines into flamethrowers; his design was demonstrated in 1907.
The English word flamethrower is a loan-translation of the German word Flammenwerfer, since the modern flamethrower was invented in Germany. The first flamethrower, in the modern sense, is usually credited to Richard Fiedler. He submitted evaluation models of his Flammenwerfer to the German Army in 1901. The most significant model submitted was a portable device, consisting of a vertical single cylinder long, divided horizontally in two, with pressurized gas in the lower section and flammable oil in the upper section. On depressing a lever, the propellant gas forced the flammable oil into and through a rubber tube and over a simple igniting wick device in a steel nozzle. The weapon projected a jet of fire and enormous clouds of smoke some . It was a single-shot weapon—for burst firing, a new igniter section was attached each time. In 1905 Fiedler's flamethrower was demonstrated to the Prussian Committee of Engineers. In 1908 Fiedler started working with Reddemann and made some adjustments to the design; an experimental pioneer company was created to further test the weapon.
It was not until 1911 that the German Army accepted its first real flamethrowing device, creating a specialist regiment of twelve companies equipped with Flammenwerferapparaten. Despite this, the use of fire in a World War I battle predated flamethrower use, with a petrol spray being ignited by an incendiary bomb in the Argonne-Meuse sector in October 1914.
The flamethrower was first used in World War I on 26 February 1915, when it was briefly used against the French outside Verdun. On 30 July 1915 it was first used in a concerted action, against British trenches at Hooge, where the lines were apart—even there, the casualties were caused mainly by soldiers being flushed into the open and then shot, rather than by the fire itself. After two days of fighting the British had suffered casualties of 31 officers and 751 other ranks.
The success of the attack prompted the German Army to adopt the device on all fronts. Flamethrowers were used in squads of six during battles, clearing enemy defenders at the start of an attack and preceding the infantry advance.
The flamethrower was useful at short distances but had other limitations: it was cumbersome and difficult to operate and could only be safely fired from a trench, which limited its use to areas where the opposing trenches were less than the maximum range of the weapon, namely apart—which was not a common situation; the fuel would also only last for about a minute of continuous firing.
The Germans deployed flamethrowers in more than 650 attacks during the war.
The Ottoman Empire received 30 flamethrowers from Germany during the war.
German flamethrowers were also used by Bulgarian forces.
Austria-Hungary adopted German designs but also developed its own flamethrowers in 1915. These included the M.15 Flammenwerfer, which required a crew of three and was too unwieldy for offensive use; a defensive model and a more portable model were also produced. Austro-Hungarian flamethrowers were unreliable, and long hoses were used to prevent the shooter from igniting the fuel tank.
The British experimented with flamethrowers in the Battle of the Somme, during which they used experimental weapons called "Livens Large Gallery Flame Projectors", named for their inventor, William Howard Livens, a Royal Engineers officer. This weapon was enormous and completely non-portable. It had an effective range of , and proved effective at clearing trenches, but offered no other benefit and the project was abandoned.
Two Morriss static flamethrowers were mounted in HMS Vindictive and several Hay portable flamethrowers were deployed by the Royal Navy during the Zeebrugge Raid on 23 April 1918. A British newspaper report of the action referred to the British flamethrowers only as flammenwerfer, using the German word.
The French Army deployed the Schilt family of flamethrowers, which were also used by the Italian Army.
In 1931 the São Paulo Public Force created an assault car section. The first vehicle to be incorporated was a tank built from a Caterpillar Twenty Two tractor, featuring a turret-mounted flamethrower and four Hotchkiss machine guns on the hull. It was used in combat during the Constitutionalist Revolution, routing federal troops from a bridge in an engagement in Cruzeiro.
In the interwar period, at least four flamethrowers were used in the Chaco War by the Bolivian Army, during the unsuccessful assault on the Paraguayan stronghold of Nanawa in 1933. During the battle of Kilometer 7, on the road to Saavedra, Major Walther Kohn rode in a flamethrower-equipped tankette; due to the heat he exited the tank to fight on foot and was killed in combat.
World War II
The flamethrower was used extensively during World War II. In 1939, the Wehrmacht first deployed man-portable flamethrowers against the Polish Post Office in Danzig. Subsequently, in 1942, the U.S. Army introduced its own man-portable flamethrower. The vulnerability of infantry carrying backpack flamethrowers and the weapon's short range led to experiments with tank-mounted flamethrowers (flame tanks), which were used by many countries.
Axis use
Germany
The Germans made considerable use of the weapon (Flammenwerfer 35) during their invasion of the Netherlands and France, against fixed fortifications. World War II German army flamethrowers tended to have one large fuel tank with the pressurizer tank fastened to its back or side. Some German army flamethrowers occupied only the lower part of the wearer's back, leaving the upper part free for an ordinary rucksack.
Man-portable flamethrowers nonetheless soon fell into disfavour for open battlefield use. They were, however, extensively used by German units in urban fighting in Poland, both in 1943 in the Warsaw Ghetto Uprising and in 1944 in the Warsaw Uprising (see the Stroop Report and the article on the 1943 Warsaw Ghetto Uprising). With the contraction of the Third Reich during the latter half of World War II, a smaller, more compact flamethrower known as the Einstossflammenwerfer 46 was produced.
Germany also used flamethrower vehicles, most of them based on the chassis of the Sd.Kfz. 251 half track and the Panzer II and Panzer III tanks, generally known as Flammpanzers.
The Germans also produced the Abwehrflammenwerfer 42, a flame-mine or flame fougasse, based on a Soviet version of the weapon. This was essentially a disposable, single-use flamethrower that was buried alongside conventional land mines at key defensive points and triggered by either a trip-wire or a command wire. The weapon contained around of fuel, which was discharged within one to one and a half seconds, producing a flame with a range. One defensive installation found in Italy included seven of the weapons, carefully concealed and wired to a central control point.
Finland
During the Winter War Finland adopted the Italian Lanciafiamme Modello 35 as the Liekinheitin M/40; 176 flamethrowers were ordered but only 28 arrived before the end of the war.
These flamethrowers were not used in the Winter War, but were issued to engineers during the Continuation War along with captured ROKS-2 flamethrowers.
OT-130 and OT-133 flame tanks were captured from the Soviet Union and issued at the start of the Continuation War; they were considered impractical and later retrofitted with cannons.
In 1944 Finland developed and adopted the Liekinheitin M/44.
Italy
Italy employed man-portable flamethrowers and L3 Lf flame tanks during the Second Italo-Abyssinian War of 1935 to 1936, during the Spanish Civil War, and during World War II. The L3 Lf flame tank was a CV-33 or CV-35 tankette with a flamethrower operating from the machine gun mount. In the North African theatre, the L3 Lf flame tank found little to no success. An L6 Lf flame tank was also developed using the L6/40 light tank platform.
Japan
Japan used man-portable flamethrowers to clear fortified positions in the battles of Wake Island, Corregidor, the Tenaru on Guadalcanal, and Milne Bay.
Romania
Flamethrowers were also used by the Royal Romanian Army. They were also planned to become self-propelled; the Mareșal tank destroyer was planned to have a command vehicle version armed with machine guns and a flamethrower.
Allies
Britain and the Commonwealth
The British World War II army flamethrowers, "Ack Packs", had a doughnut-shaped fuel tank with a small spherical pressurizer gas tank in the middle. As a result, some troops nicknamed them "lifebuoys". It was officially known as Flamethrower, Portable, No 2.
Extensive plans were made in 1940–1941 by the Petroleum Warfare Department to use flame fougasse static flame projectors in the event of an invasion, with around 50,000 barrel-based incendiary mines being deployed in 7,000 batteries throughout Southern England.
The British hardly used their man-portable systems, relying on Churchill Crocodile tanks in the European theatre. These tanks proved very effective against German defensive positions, and caused official Axis protests against their use. This flamethrower could produce a jet of flame exceeding . There are documented instances of German units summarily executing any captured British flame-tank crews.
In the Pacific theatre, Australian forces used converted Matilda tanks, known as Matilda Frogs.
United States
In the Pacific theatre, the U.S. Army used M-1 and M-2 flamethrowers to clear stubborn Japanese resistance from prepared defenses, caves, and trenches. Starting in New Guinea, through the closing stages on Guadalcanal and during the approach to and reconquest of the Philippines and then through the Okinawa campaign, the Army deployed hand-held, man-portable units.
Flamethrower teams were often drawn from combat engineer units, and later from troops of the Chemical Warfare Service. The Army fielded more flamethrower units than the Marine Corps, and the Army's Chemical Warfare Service pioneered tank-mounted flamethrowers on Sherman tanks (CWS-POA H-4). All the flamethrower tanks on Okinawa belonged to the 713th Provisional Tank Battalion, which was tasked with supporting all U.S. Army and Marine infantry. All Pacific mechanized flamethrower units were trained by Seabee specialists with Col. Unmacht's CWS Flamethrower Group in Hawaii.
The U.S. Army used flamethrowers in Europe in much smaller numbers, though they were available for special employments. Flamethrowers were deployed during the Normandy landings in order to clear Axis fortifications. Also, most boat teams on Omaha Beach included a two-man flamethrower team.
The Marine Corps used the backpack-type M2A1-7 and M2-2 flamethrowers, finding them useful in clearing Japanese trench and bunker complexes. The first known USMC use of the man portable flamethrower was against the formidable defenses at Tarawa in November 1943. The Marines pioneered the use of Ronson-equipped M-3 Stuart tanks in the Marianas. These were known as SATAN flame tanks. Though effective, they lacked the armour to safely engage fortifications and were phased out in favour of the better-armoured M4 Sherman tanks. USMC Flamethrower Shermans were produced at Schofield Barracks by Seabees attached to the Chemical Warfare Service under Col. Unmacht. CWS designated M4s with "CWS-POA-H" for "Chemical Warfare Service Pacific Ocean Area, Hawaii" plus a flamethrower number. The Marines had previously deployed large Navy flamethrowers mounted on LVT-4 AMTRACs at Peleliu. Late in the war, both services operated LVT-4 and −5 amphibious flametanks in limited numbers. Both the Army and the Marines still used their infantry-portable systems, despite the arrival of adapted Sherman tanks with the Ronson system (cf. flame tanks).
In cases where the Japanese were entrenched in deep caves, the flames often consumed the available oxygen, suffocating the occupants. Many Japanese troops interviewed after the war said they were more terrified of flamethrowers than of any other American weapon. Flamethrower operators were often the first U.S. troops targeted.
Soviet Union
The FOG-1 and -2 flamethrowers were stationary devices used in defense; they could also be categorized as projecting incendiary mines. The FOG had only one cylinder of fuel, which was compressed by an explosive charge and projected through a nozzle. The November 1944 issue of the US War Department Intelligence Bulletin refers to these "fougasse flame throwers" being used in the Soviet defense of Stalingrad. The FOG-1 was directly copied by the Germans as the Abwehrflammenwerfer 42.
Unlike the flamethrowers of the other powers during World War II, the Soviets were the only ones to consciously attempt to camouflage their infantry flamethrowers. With the ROKS-2 flamethrower this was done by disguising the flame projector as a standard-issue rifle, such as the Mosin–Nagant, and the fuel tanks as a standard infantryman's rucksack. This was to try to stop the flamethrower operator from being specifically targeted by enemy fire. This "rifle" had a working action which was used to cycle blank igniter cartridges.
1945–1980
US military
The United States Marines used flamethrowers in the Korean and Vietnam Wars. The M132 armored flamethrower, an M113 armored personnel carrier with a mounted flamethrower, was successfully used in the Vietnam War.
Flamethrowers have not been in the U.S. arsenal since 1978, when the Department of Defense unilaterally stopped using them — the last American infantry flamethrower was the Vietnam-era M9-7. They have been deemed of questionable effectiveness in modern combat, though some have made the case for their tactical employment.
U.S. Army flamethrower development culminated in the M9 model, in which the propellant tank is a sphere below the left fuel tank and does not project backwards.
Israel
Crude homemade flamethrowers were built by Irgun in the late 1940s.
China
The PLA adopted the Type 74 flamethrower, a copy of the Soviet LPO-50. It was later used in the Sino-Vietnamese War.
Vietnam
LPO-50 and Type 74 flamethrowers were used by NVA forces during the Vietnam War.
Iraq
The Iraqi army used LPO-50 flamethrowers during the Iran-Iraq War.
Post-1980s
Non-flamethrower incendiary weapons remain in modern military arsenals. Thermobaric weapons may have been fielded in Afghanistan by the United States (2008) and have been used by Russia in Ukraine (2022). The U.S. and the U.S.S.R. each developed a rocket launcher specifically for the deployment of incendiary munitions: respectively, the M202 FLASH and the RPO "Rys", ancestor of the RPO-A Shmel.
Vietnam
The Type 74 is still being used by the Vietnamese Army.
French assault on Ouvéa cave (1988)
On 22 April 1988, Kanak rebels took 36 French hostages on Ouvéa island, New Caledonia, most of them gendarmes and military personnel. On 5 May, after weeks of fruitless negotiations, a team of gendarmes and paratroopers from the French Army launched a rescue operation. During the assault, a rebel machine gun position was neutralized using a flamethrower. All hostages were eventually set free.
Provisional IRA
In 1981 the FBI foiled an attempt by New York-based gun runner George Harrison to smuggle an M2 flamethrower to Ireland for the IRA. In the last stages of the Troubles, during the mid-1980s, the IRA smuggled Soviet LPO-50 military flamethrowers (supplied to them by the Libyan government) into Northern Ireland. An IRA team riding on an improvised armoured truck used one of these flamethrowers, among other weapons, to storm a British Army permanent checkpoint in Derryard, near Rosslea, on 13 December 1989. Some months later, on 4 March 1990, the IRA attacked an RUC station in Stewartstown, County Tyrone, using an improvised flamethrower consisting of a manure-spreader towed by a tractor to spray a petrol/diesel mix and engulf the base in flames, and then opened fire with rifles and an anti-tank rocket launcher. Another IRA unit carried out two attacks in less than a year, also in the early 1990s, with another improvised flamethrower towed by a tractor against a British Army watchtower, the Borucki sangar, in Crossmaglen, County Armagh. The first incident occurred on 12 December 1992, when the bunker was manned by Scots Guards, and the second on 12 November 1993. The launcher was again a manure spreader, which doused the facility with fuel, ignited a few seconds later by a small explosion. In the 1993 action, a nine-metre-high fireball engulfed the tower for seven minutes. The four Grenadier Guards inside the outpost were rescued by a Saxon armoured vehicle. Improvised incendiary devices were also employed by the republican paramilitaries, such as in an IRA grenade attack on a British Army patrol on 4 April 1993 in Carrickmore, County Tyrone; the device consisted of Semtex and petrol; the bomb exploded, but the fuel failed to ignite. A soldier was thrown several metres across the road by the blast.
Brazil
As of 2003 the locally made Hydroar T1M1 flamethrower was still being used by the 1º Batalhão de Forças Especiais.
China
The Chinese Army still issues the Type 74 flamethrower.
During an operation to hunt down the militant group responsible for the 2015 Aksu colliery attack, after using tear gas and flash grenades to no avail, Chinese paramilitary forces resorted to flamethrowers to root out suspected militants who were hiding in a cave.
Iraq conflict
Captain Shannon Johnson requested that Colonel John A. Toolan supply his company with flamethrowers during the Battle of Fallujah; however, no flamethrowers were issued.
The People's Mujahedin of Iran claimed flamethrowers were used in the 2013 Camp Ashraf massacre.
Italy
As of 2012 the locally made T-148/B flamethrower was still being used by the Italian Army.
Myanmar
Flamethrowers have been used by the Tatmadaw during attacks on Rohingya villages during the Rohingya genocide.
Russo-Ukrainian War
On 8 February 2017, separatist leader Mikhail 'Givi' Tolstykh was killed when an RPO-A Shmel rocket-assisted flamethrower was fired at his office in Donetsk.
On 21 November 2022, nine months into the Russian invasion of Ukraine, Russian sources claimed that artillery and "heavy flamethrowers" had been employed against a Ukrainian concentration of troops near Kupyansk, Kharkiv Oblast. Russian sources use the term "heavy flamethrowers" to describe TOS-1 multiple thermobaric rocket launchers.
International law
Despite some assertions, flamethrowers are not generally banned. However the United Nations Protocol on Incendiary Weapons forbids the use of incendiary weapons (including flamethrowers) against civilians. It also forbids their use against forests unless they are used to conceal combatants or other military objectives.
Owning a personal flamethrower
In the United States, private ownership of a flamethrower is not restricted by federal law, because a flamethrower is a tool, not a firearm. Flamethrowers are legal in 48 states and restricted in California and Maryland.
In California, unlicensed possession of a flame-throwing device—statutorily defined as "any non-stationary and transportable device designed or intended to emit or propel a burning stream of combustible or flammable liquid a distance of at least " (H&W 12750(a))—is a misdemeanor punishable by a county jail term not exceeding one year or a fine not exceeding $10,000 (CA H&W 12761). Licenses to use flamethrowers are issued by the state fire marshal, who may use any criteria deemed fit for issuing or denying a license, but must publish those criteria in the California Code of Regulations, Title 11, Section 970 et seq.
In the United Kingdom, flamethrowers are "prohibited weapons" under section 5(1)(b) of the Firearms Act 1968 and article 45(1)(f) of the Firearms (Northern Ireland) Order 2004 and possession of a flamethrower would carry a sentence of up to ten years' imprisonment. On 16 June 1994, a man attacked school pupils at Sullivan Upper School, just outside Belfast, with a home-made flamethrower.
A South African inventor brought the Blaster car-mounted flamethrower to market in 1998 as a security device to defend against carjackers. It has since been discontinued, with the inventor moving on to pocket-sized self-defence flamethrowers.
Elon Musk, CEO of Tesla, Inc. and owner of SpaceX, developed a "not a flamethrower" for public sale through his business, The Boring Company, selling 20,000 units. This device uses liquid propane gas rather than a stream of gasoline, making it more akin to a torch, like those commonly available at home and garden centers.
Other uses
Flamethrowers are occasionally used for igniting controlled burns for land management and agriculture, for example in sugar cane production, where canebrakes are burned to get rid of the dry dead leaves that clog harvesters and, incidentally, to kill any lurking venomous snakes. More commonly, a driptorch or a flare (fusee) is used.
U.S. troops allegedly used flamethrowers on the streets of Washington, D.C. (mentioned in a December 1998 article in the San Francisco Flier), as one of several clearance methods used for the surprisingly large amount of snow that fell before the presidential inauguration of John F. Kennedy. A history article on the U.S. Army Corps of Engineers notes, "In the end, the task force employed hundreds of dump trucks, front-end loaders, sanders, plows, rotaries, and allegedly flamethrowers to clear the way".
Flamethrowers were employed by U.S. combat engineers during the Iraq War; they were used to clear brush and eliminate hiding spots for insurgents.
A squad armed with backpack flamethrowers played an important part in the 2012 Summer Paralympics closing ceremony. Each operator carried one large tank and could produce a flame about long.
In April 2014 it was reported by South Korea's Chosun Ilbo newspaper without confirmation that a North Korean government official, O Sang-Hon, Deputy Minister at the Ministry of Public Security, was executed by flamethrower.
In August 2016 it was reported that the Islamic State used flamethrowers to execute six of its commanders in Tal Afar as punishment for their attempt to escape to Syria.
It has been known for police to fill a "flamethrower", not with flammable liquid, but rather with tear gas dissolved in water as a riot-control device; see Converted Flamethrower 40.
Clove

Cloves are the aromatic flower buds of a tree in the family Myrtaceae, Syzygium aromaticum. They are native to the Maluku Islands, or Moluccas, in Indonesia, and are commonly used as a spice, flavoring, or fragrance in consumer products, such as toothpaste, soaps, or cosmetics. Cloves are available throughout the year owing to different harvest seasons across various countries.
Etymology
The word clove, first used in English in the 15th century, derives via Middle English , Anglo-French clowes de gilofre and Old French , from the Latin word "nail". The related English word gillyflower, originally meaning "clove", derives via said Old French and Latin , from the Greek "clove", literally "nut leaf".
Description
The clove tree is an evergreen that grows up to tall, with large leaves and crimson flowers grouped in terminal clusters. The flower buds initially have a pale hue, gradually turn green, then transition to a bright red when ready for harvest. Cloves are harvested at long, and consist of a long calyx that terminates in four spreading sepals, and four unopened petals that form a small central ball.
Clove stalks are slender stems of the inflorescence axis that show opposite decussate branching. Externally, they are brownish, rough, and irregularly wrinkled longitudinally with short fracture and dry, woody texture. Mother cloves (anthophylli) are the ripe fruits of cloves that are ovoid, brown berries, unilocular and one-seeded.
Blown cloves are expanded flowers from which both corollae and stamens have been detached. Exhausted cloves have most or all the oil removed by distillation. They yield no oil and are darker in color.
Uses
Cloves are used in Asian, African, and Mediterranean cuisines, as well as those of the Near and Middle East, lending flavor to meats (such as baked ham), curries, and marinades, as well as fruit (such as apples, pears, and rhubarb). Cloves may be used to give aromatic and flavor qualities to hot beverages, often combined with other ingredients such as lemon and sugar. They are a common element in spice blends, including pumpkin pie spice, speculaas spices, and the Malay rempah empat beradik ("four sibling spices"), alongside cinnamon, cardamom, and star anise.
In Mexican cuisine, cloves are best known as clavos de olor, and often accompany cumin and cinnamon. They are also used in Peruvian cuisine, in a wide variety of dishes such as carapulcra and arroz con leche.
A major component of clove's taste is imparted by the chemical eugenol, and the quantity of the spice required is typically small. It pairs well with cinnamon, allspice, vanilla, red wine, basil, onion, citrus peel, star anise, and peppercorns.
Non-culinary uses
Cloves are often added to betel quids to enhance aroma while chewing. The spice is used in a type of cigarette called kretek in Indonesia. Clove cigarettes were smoked throughout Europe, Asia, and the United States; they are currently classified in the United States as cigars, the result of a ban on flavored cigarettes in September 2009.
Clove essential oil may be used to inhibit mold growth on various types of food. It has also been tested for protecting wood in cultural heritage conservation, where one study found its efficacy to be higher than that of a boron-based wood preservative. Cloves combined with an orange make a fragrant pomander; when given as a gift in Victorian England, such a pomander indicated warmth of feeling.
Adverse effects and potential uses
The use of clove for any medicinal purpose has not been approved by the US Food and Drug Administration, and its use may cause adverse effects if taken orally by people with liver disease, blood clotting and immune system disorders, or food allergies.
Cloves are used in traditional medicine as an essential oil, which is intended to be an anodyne (analgesic) mainly for dental emergencies. There is evidence that clove oil containing eugenol is effective for toothache pain and other types of pain. Clove essential oil may prevent the growth of Enterococcus faecalis bacteria which may be present in an unsuccessful root canal treatment.
One review reported the efficacy of eugenol combined with zinc oxide as an analgesic for alveolar osteitis. Studies to determine its effectiveness for fever reduction, as a mosquito repellent, and to prevent premature ejaculation have been inconclusive. It remains unproven whether blood sugar levels are reduced by cloves or clove oil. The essential oil may be used in aromatherapy.
History
Until the colonial era, cloves only grew on a few islands in the Moluccas (historically called the Spice Islands), including Bacan, Makian, Moti, Ternate, and Tidore.
Cloves were first traded by the Austronesian peoples in the Austronesian maritime trade network, which began around 1500 BC and later became the Maritime Silk Road and part of the spice trade. The first notable example of modern clove farming developed on the east coast of Madagascar, where cloves are cultivated in three distinct systems: monoculture, agricultural parklands, and agroforestry.
Archaeologist Giorgio Buccellati found cloves in Terqa, Syria, in a burned-down house which was dated to 1720 BC during the kingdom of Khana. This was the first evidence of cloves being used in the west before Roman times. The discovery was first reported in 1978. They reached Rome by the first century AD.
Other archaeological finds of cloves include the following. At the Batujaya site, a single clove was found in a waterlogged layer dating to between the 100s and 200s BC, corresponding to the Buni culture phase of the site. A study at the site of Óc Eo in the Mekong Delta of Vietnam found starch grains of cloves on stone implements used in food processing; this site was occupied from the first to the eighth century AD and was a trading center for the kingdom of Funan. Two cloves were found during archaeological excavations at the Sri Lankan city of Mantai, dated to around 900–1100 AD.
Cloves are mentioned in the Ramayana and in the Charaka Samhita. One of the earliest examples of literary evidence of cloves in China is the book Han Guan Yi (Etiquettes of the Officialdom of the Han Dynasty, dating to around 200 BC), which states a rule that ministers should suck cloves to sweeten their breath before speaking to the emperor. Chinese records from the Song Dynasty (960 to 1279 AD) show that cloves were primarily imported by private ventures, called Merchant Shipping Offices, which bought goods from middlemen in the Austronesian polities of Java, Srivijaya, Champa, and Butuan. During the Yuan dynasty (1271 to 1368 AD), Chinese merchants began sending ships directly to the Moluccas to trade for cloves and other spices.
In Western classical literature, cloves are mentioned in Pliny the Elder's Natural History, and Dioscorides mentions them in his De materia medica. The Liber Pontificalis records an endowment made by Passinopolis under Pope Sylvester I; it included an Egyptian estate, its annual revenues, 150 libra (around 50 kg or 108 lb) of cloves, and other quantities of spices and papyrus. Cosmas Indicopleustes, in his Topographia Christiana, outlined his travels to Sri Lanka and recounted that the Indians said cloves, among other products, came in from unspecified places along sea trade routes.
Cloves were also present in records in China, Sri Lanka, southern India, Persia, and Oman by around the third to second century BC. These mentions of "cloves" in China, South Asia, and the Middle East predate the establishment of Southeast Asian maritime trade with those regions; all of them, however, are either misidentifications of other plants (such as cassia buds, cinnamon, or nutmeg) or imports from Maritime Southeast Asia mistakenly identified as natively produced in these regions.
Archaeologists recovered the earliest known example of macro-botanical cloves in northwest Europe from the wreck of the Danish-Norwegian flagship Gribshunden. The ship sank near Ronneby, Sweden, in June 1495 while King Hans was sailing to a political summit at Kalmar. Exotic luxuries including cloves, ginger, peppercorns, and saffron would have impressed the noblemen and high church officials at the summit.
Cloves have been documented in European burial practices from the late Middle Ages into the early modern period. During renovations of the Grote Kerk of Breda, a tomb was rediscovered that had been used between 1475 and 1526 AD by eight members of the House of Nassau. These burials had to be moved, and before re-interment they were studied for botanical remains; the burial of Cimberga van Baden contained pollen from cloves. The Dutch physician Pieter van Foreest wrote down multiple recipes for embalming, some of which included cloves; one of these was the recipe used by his fellow physicians Spierinck and Goethals. An embalming jar associated with Vittoria della Rovere also contained clove pollen, probably from her ingestion of clove oil as a medicine in her final days. When burials had to be moved from the church of Saint Germain in Flers, France, they too were studied for botanical remains; the body and coffin of Philippe René de la Motte Ango, count of Flers, buried in 1737 AD, contained whole cloves.
During the colonial era, cloves were traded like oil, with an enforced limit on exportation. As the Dutch East India Company consolidated its control of the spice trade in the 17th century, it sought to gain a monopoly in cloves as it had in nutmeg. However, "unlike nutmeg and mace, which were limited to the minute Bandas, clove trees grew all over the Moluccas, and the trade in cloves was beyond the limited policing powers of the corporation". One clove tree on Ternate, named Afo, which experts believe to be the oldest in the world, may be 350–400 years old. Tourists are told that seedlings from this very tree were stolen by a Frenchman named Pierre Poivre in 1770, transferred to the Isle de France (Mauritius), and later to Zanzibar, which was once the world's largest producer of cloves.
Current leaders in clove production are Indonesia, Madagascar, Tanzania, Sri Lanka, and Comoros. Indonesia is the largest producer, but only about 10–15% of its clove production is exported, and domestic shortfalls must sometimes be filled with imports from Madagascar. The modern province of Maluku remains the largest source of cloves in Indonesia, with around 15% of national production, although the provinces comprising the island of Sulawesi collectively produce over 40%.
Phytochemicals
Eugenol comprises 72–90% of the essential oil extracted from cloves, and is the compound most responsible for clove aroma. Complete extraction occurs at 80 minutes in pressurized water at . Ultrasound-assisted and microwave-assisted extraction methods provide more rapid extraction rates with lower energy costs.
Other phytochemicals of clove oil include acetyl eugenol; beta-caryophyllene; vanillin; crategolic acid; tannins such as bicornin and gallotannic acid; methyl salicylate; the flavonoids eugenin, kaempferol, rhamnetin, and eugenitin; triterpenoids such as oleanolic acid, stigmasterol, and campesterol; and several sesquiterpenes. Although eugenol has not been classified for its potential toxicity, it was shown to be toxic to test organisms at concentrations of 50, 75, and 100 mg per liter.
Appendicitis

Appendicitis is inflammation of the appendix. Symptoms commonly include right lower abdominal pain, nausea, vomiting, and decreased appetite. However, approximately 40% of people do not have these typical symptoms. Severe complications of a ruptured appendix include widespread, painful inflammation of the inner lining of the abdominal wall and sepsis.
Appendicitis is primarily caused by a blockage of the hollow portion in the appendix. This blockage typically results from a faecolith, a calcified "stone" made of feces. Some studies show a correlation between appendicoliths and disease severity. Other factors such as inflamed lymphoid tissue from a viral infection, intestinal parasites, gallstone, or tumors may also lead to this blockage. When the appendix becomes blocked, it experiences increased pressure, reduced blood flow, and bacterial growth, resulting in inflammation. This combination of factors causes tissue injury and, ultimately, tissue death. If this process is left untreated, it can lead to the appendix rupturing, which releases bacteria into the abdominal cavity, potentially leading to severe complications.
The diagnosis of appendicitis is largely based on the person's signs and symptoms. In cases where the diagnosis is unclear, close observation, medical imaging, and laboratory tests can be helpful. The two most commonly used imaging tests for diagnosing appendicitis are ultrasound and computed tomography (CT scan). CT scan is more accurate than ultrasound in detecting acute appendicitis. However, ultrasound may be preferred as the first imaging test in children and pregnant women because of the risks associated with radiation exposure from CT scans. Although ultrasound may aid in diagnosis, its main role is in identifying important differentials, such as ovarian pathology in females or mesenteric adenitis in children.
The standard treatment for acute appendicitis involves the surgical removal of the inflamed appendix. This procedure can be performed either through an open incision in the abdomen (laparotomy) or using minimally invasive techniques with small incisions and cameras (laparoscopy). Surgery is essential to reduce the risk of complications or potential death associated with the rupture of the appendix. Antibiotics may be equally effective in certain cases of non-ruptured appendicitis, but 31% will undergo appendectomy within one year. It is one of the most common and significant causes of sudden abdominal pain. In 2015, approximately 11.6 million cases of appendicitis were reported, resulting in around 50,100 deaths worldwide. In the United States, appendicitis is one of the most common causes of sudden abdominal pain requiring surgery. Annually, more than 300,000 individuals in the United States undergo surgical removal of their appendix.
Signs and symptoms
The presentation of acute appendicitis includes acute abdominal pain, nausea, vomiting, and fever. Pain from appendicitis may begin as a dull pain around the navel. As the appendix becomes more swollen and inflamed, it begins to irritate the adjoining abdominal wall, and after several hours the pain usually migrates to the right lower quadrant, where it becomes localized. This classic migration of pain may not appear in children under three years. Findings localize to the right iliac fossa: the abdominal wall becomes very sensitive to gentle pressure (palpation), and there is pain on the sudden release of deep pressure in the lower abdomen (Blumberg's sign). If the appendix is retrocecal (located behind the cecum), even deep pressure in the right lower quadrant may fail to elicit tenderness (silent appendix), because the cecum, distended with gas, protects the inflamed appendix from pressure. Similarly, if the appendix lies entirely within the pelvis, abdominal rigidity is typically absent; in such cases, a digital rectal examination elicits tenderness in the rectovesical pouch. Coughing causes point tenderness at McBurney's point (Dunphy's sign).
Causes
Acute appendicitis seems to be the result of a primary obstruction of the appendix. Once this obstruction occurs, the appendix becomes filled with mucus and swells. This continued production of mucus leads to increased pressures within the lumen and the walls of the appendix. The increased pressure results in thrombosis and occlusion of the small vessels, and stasis of lymphatic flow. At this point, spontaneous recovery rarely occurs. As the occlusion of blood vessels progresses, the appendix becomes ischemic and then necrotic. As bacteria begin to leak out through the dying walls, pus forms within and around the appendix (suppuration). The result is appendiceal rupture (a 'burst appendix') causing peritonitis, which may lead to sepsis and in rare cases, death. These events are responsible for the slowly evolving abdominal pain and other commonly associated symptoms.
The causative agents include bezoars, foreign bodies, trauma, lymphadenitis and, most commonly, calcified fecal deposits that are known as appendicoliths or fecaliths. The occurrence of obstructing fecaliths has attracted attention since their presence in people with appendicitis is higher in developed than in developing countries. In addition, an appendiceal fecalith is commonly associated with complicated appendicitis. Fecal stasis and arrest may play a role, as demonstrated by people with acute appendicitis having fewer bowel movements per week compared with healthy controls.
The occurrence of a fecalith in the appendix was thought to be attributed to a right-sided fecal retention reservoir in the colon and a prolonged transit time. However, a prolonged transit time was not observed in subsequent studies. Diverticular disease and adenomatous polyps were historically unknown, and colon cancer was exceedingly rare, in communities where appendicitis itself was rare or absent, such as various African communities. Studies have implicated the transition to a Western diet lower in fiber in the rising frequency of appendicitis, as well as of the other aforementioned colonic diseases, in these communities. Acute appendicitis has also been shown to occur antecedent to cancer of the colon and rectum. Several studies offer evidence that low fiber intake is involved in the pathogenesis of appendicitis; this accords with the occurrence of a right-sided fecal reservoir and the fact that dietary fiber reduces transit time.
Diagnosis
The physician asks questions to obtain the health history, assesses the patient's symptoms, performs a complete physical exam, and orders laboratory and imaging tests. Appendicitis symptoms fall into two categories: typical and atypical.
Typical appendicitis is characterized by migratory right iliac fossa pain associated with nausea and anorexia, which can occur with or without vomiting, and by localized muscle stiffness or generalized guarding. The pain may instead localize to the left lower quadrant in people with situs inversus totalis. The combination of umbilical pain migrating to the right lower quadrant, loss of appetite, nausea, unsustained vomiting, and mild fever is classic.
Atypical histories lack this typical progression and may include pain in the right lower quadrant as an initial symptom. Irritation of the peritoneum (inside lining of the abdominal wall) can lead to increased pain on movement, or jolting, for example going over speed bumps. Atypical histories often require imaging with ultrasound or CT scanning.
Signs
During the early stages of appendicitis diagnosis, it is common for physical exams to present inconspicuous findings. Signs of inflammation become noticeable as the disease progresses. These signs may include
Aure-Rozanova's sign: Increased pain on palpation with a finger in the right inferior lumbar triangle (can be a positive Blumberg's sign).
Bartomier-Michelson's sign: Increased pain on palpation at the right iliac region as the person being examined lies on their left side compared to when they lie on their back.
Dunphy's sign: Increased pain in the right lower quadrant by coughing.
Hamburger sign: The patient refuses to eat (anorexia is 80% sensitive for appendicitis).
Kocher's sign (Kosher's sign): From the person's medical history, the start of pain in the umbilical region with a subsequent shift to the right iliac region.
Massouh's sign: Developed in, and popular in, southwest England, this test involves the examiner performing a firm swish with their index and middle finger across the abdomen from the xiphoid process to the left and then the right iliac fossa. A positive sign is a grimace on the right-sided (but not the left-sided) sweep.
Obturator sign: The person being evaluated lies on her or his back with the hip and knee both flexed at ninety degrees. The examiner holds the person's ankle with one hand and knee with the other hand. The examiner rotates the hip by moving the person's ankle away from their body while allowing the knee to move only inward. A positive test is pain with internal rotation of the hip.
Psoas sign, also known as "Obraztsova's sign", is right lower-quadrant pain that is produced with either the passive extension of the right hip or by the active flexion of the person's right hip while supine. The pain that is elicited is due to inflammation of the peritoneum overlying the iliopsoas muscles and inflammation of the psoas muscles themselves. Straightening out the leg causes pain because it stretches these muscles while flexing the hip activates the iliopsoas and causes pain.
Rovsing's sign: Pain in the lower right abdominal quadrant with continuous deep palpation starting from the left iliac fossa upwards (counterclockwise along the colon). The thought is there will be increased pressure around the appendix by pushing bowel contents and air toward the ileocaecal valve provoking right-sided abdominal pain.
Rosenstein's sign (Sitkovsky's sign): Increased pain in the right iliac region as the person is being examined lies on their left side.
Perman's sign: In acute appendicitis palpation in the left iliac fossa may produce pain in the right iliac fossa.
Laboratory tests
While there is no laboratory test specific for appendicitis, a complete blood count (CBC) is done to check for signs of infection or inflammation. Although 70–90 percent of people with appendicitis may have an elevated white blood cell (WBC) count, many other abdominal and pelvic conditions can also elevate the WBC count. A high WBC count alone is therefore a weak indicator of appendicitis, signaling inflammation generally; the neutrophil ratio is more sensitive and specific for acute appendicitis.
In children, the neutrophil–lymphocyte ratio (NLR) demonstrates a high degree of accuracy in the diagnosis of acute appendicitis and distinguishes complicated from simple appendicitis. 75–78 percent of patients have neutrophilia. The delta-neutrophil index (DNI) is a valuable parameter that helps in diagnosing histologically normal appendicitis and in distinguishing between simple and complicated appendicitis.
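The NLR itself is simple arithmetic: the absolute neutrophil count divided by the absolute lymphocyte count from the CBC. As a minimal sketch (the counts shown are hypothetical, and any diagnostic cutoff would come from the clinical literature, not from this article):

```python
def neutrophil_lymphocyte_ratio(neutrophils: float, lymphocytes: float) -> float:
    """NLR = absolute neutrophil count / absolute lymphocyte count.

    Both counts must be in the same units (e.g. 10^9 cells per liter).
    """
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Hypothetical CBC values in 10^9 cells/L.
print(neutrophil_lymphocyte_ratio(9.0, 1.5))  # 6.0
```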
A C-reactive protein (CRP) blood test will be ordered by the doctor to find out if there are any further causes of inflammation. The C-reactive protein/albumin (CRP/ALB) ratio can be a reliable predictor of complicated appendicitis.
The urinalysis is important for ruling out a urinary tract infection as the cause of abdominal pain. The presence of more than 20 WBC per high-power field in the urine is more suggestive of a urinary tract disorder.
If the patient is female, a pregnancy test will be ordered.
Imaging
In children, the clinical examination is important to determine which children with abdominal pain should receive immediate surgical consultation and which should receive diagnostic imaging. Because of the health risks of exposing children to radiation, ultrasound is the preferred first choice with CT scan being a legitimate follow-up if the ultrasound is inconclusive. CT scan is more accurate than ultrasound for the diagnosis of appendicitis in adults and adolescents. CT scan has a sensitivity of 94%, specificity of 95%. Ultrasonography had an overall sensitivity of 86%, a specificity of 81%.
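Sensitivity and specificity alone do not tell a clinician how likely appendicitis is after a positive scan; that also depends on the pretest prevalence. As a hedged illustration, the sketch below applies Bayes' rule to the figures quoted above (the 30% pretest prevalence is an assumed value for illustration only):

```python
def post_test_probabilities(sensitivity, specificity, prevalence):
    """Return (P(disease | positive test), P(no disease | negative test))."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Sensitivity/specificity figures quoted in the text;
# the 30% pretest prevalence is an illustrative assumption.
for name, sens, spec in [("CT", 0.94, 0.95), ("Ultrasound", 0.86, 0.81)]:
    ppv, npv = post_test_probabilities(sens, spec, 0.30)
    print(f"{name}: PPV={ppv:.2f}, NPV={npv:.2f}")
# CT: PPV=0.89, NPV=0.97
# Ultrasound: PPV=0.66, NPV=0.93
```

Even at identical prevalence, CT's higher specificity gives it a markedly better positive predictive value than ultrasound, which is consistent with the accuracy comparison above.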
Ultrasound
Abdominal ultrasonography, preferably with doppler sonography, is useful to detect appendicitis, especially in children. Ultrasound can show the free fluid collection in the right iliac fossa, along with a visible appendix with increased blood flow when using color Doppler, and noncompressibility of the appendix, as it is essentially a walled-off abscess. Other secondary sonographic signs of acute appendicitis include the presence of echogenic mesenteric fat surrounding the appendix and the acoustic shadowing of an appendicolith. In some cases (approximately 5%), ultrasonography of the iliac fossa does not reveal any abnormalities despite the presence of appendicitis. This false-negative finding is especially true of early appendicitis before the appendix has become significantly distended. Also, false-negative findings are more common in adults where larger amounts of fat and bowel gas make visualizing the appendix technically difficult. Despite these limitations, sonographic imaging with experienced hands can often distinguish between appendicitis and other diseases with similar symptoms. Some of these conditions include inflammation of lymph nodes near the appendix or pain originating from other pelvic organs such as the ovaries or Fallopian tubes. Ultrasounds may be either done by the radiology department or by the emergency physician.
Computed tomography
Where it is readily available, computed tomography (CT) has become frequently used, especially in people whose diagnosis is not obvious on history and physical examination. Although some concerns about interpretation are identified, a 2019 Cochrane review found that the sensitivity and specificity of CT for the diagnosis of acute appendicitis in adults was high. Concerns about radiation tend to limit use of CT in pregnant women and in children, especially with the increasingly widespread usage of MRI.
The accurate diagnosis of appendicitis is multi-tiered, with the size of the appendix having the strongest positive predictive value, while indirect features can either increase or decrease sensitivity and specificity. A size of over 6 mm is both 95% sensitive and specific for appendicitis.
However, because the appendix can be filled with fecal material, causing intraluminal distention, this criterion has shown limited utility in more recent meta-analyses. This is as opposed to ultrasound, in which the wall of the appendix can be more easily distinguished from intraluminal feces. In such scenarios, ancillary features such as increased wall enhancement as compared to adjacent bowel and inflammation of the surrounding fat, or fat stranding, can be supportive of the diagnosis. However, their absence does not preclude it. In severe cases with perforation, an adjacent phlegmon or abscess can be seen. Dense fluid layering in the pelvis can also result, related to either pus or enteric spillage. When patients are thin or younger, the relative absence of fat can make the appendix and surrounding fat stranding difficult to see.
Magnetic resonance imaging
Magnetic resonance imaging (MRI) use has become increasingly common for diagnosis of appendicitis in children and pregnant patients due to the radiation dosage that, while of nearly negligible risk in healthy adults, can be harmful to children or the developing baby. In pregnancy, it is more useful during the second and third trimesters, particularly as the enlarging uterus displaces the appendix, making it difficult to find by ultrasound. The periappendiceal stranding that is reflected on CT by fat stranding on MRI appears as an increased fluid signal on T2 weighted sequences. First-trimester pregnancies are usually not candidates for MRI, as the fetus is still undergoing organogenesis, and there are no long-term studies to date regarding its potential risks or side effects.
X-ray
In general, plain abdominal radiography (PAR) is not useful in making the diagnosis of appendicitis and should not be routinely obtained from a person being evaluated for appendicitis. Plain abdominal films may be useful for the detection of ureteral calculi, small bowel obstruction, or perforated ulcer, but these conditions are rarely confused with appendicitis. An opaque fecalith can be identified in the right lower quadrant in fewer than 5% of people being evaluated for appendicitis. A barium enema has proven to be a poor diagnostic tool for appendicitis. While failure of the appendix to fill during a barium enema has been associated with appendicitis, up to 20% of normal appendices do not fill.
Scoring systems
Several scoring systems have been developed to try to identify people who are likely to have appendicitis. The performance of scores such as the Alvarado score and the Pediatric Appendicitis Score, however, is variable.
The Alvarado score is the most known scoring system. A score below 5 suggests against a diagnosis of appendicitis, whereas a score of 7 or more is predictive of acute appendicitis. In a person with an equivocal score of 5 or 6, a CT scan or ultrasound exam may be used to reduce the rate of negative appendectomy.
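As an illustration of how such a clinical score is tallied, the sketch below uses the commonly published Alvarado component weights (the MANTRELS items); these weights are drawn from general knowledge of the score, not from this article, and the code is a teaching sketch rather than a clinical tool:

```python
# Commonly published Alvarado (MANTRELS) weights -- an assumption,
# since this article only gives the interpretation bands.
ALVARADO_WEIGHTS = {
    "migration_of_pain": 1,
    "anorexia": 1,
    "nausea_or_vomiting": 1,
    "rlq_tenderness": 2,        # tenderness in the right lower quadrant
    "rebound_pain": 1,
    "elevated_temperature": 1,
    "leukocytosis": 2,
    "neutrophil_left_shift": 1,
}

def alvarado_score(findings: dict) -> int:
    """Sum the weights of the findings that are present (maximum 10)."""
    return sum(w for name, w in ALVARADO_WEIGHTS.items() if findings.get(name))

def interpret(score: int) -> str:
    """Map a score to the bands described in the text."""
    if score < 5:
        return "appendicitis unlikely"
    if score <= 6:
        return "equivocal - consider CT or ultrasound"
    return "predictive of acute appendicitis"

findings = {"migration_of_pain": True, "rlq_tenderness": True,
            "nausea_or_vomiting": True, "leukocytosis": True}
s = alvarado_score(findings)   # 1 + 2 + 1 + 2 = 6
print(s, interpret(s))         # 6 equivocal - consider CT or ultrasound
```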
Pathology
Even for clinically certain appendicitis, routine histopathology examination of appendectomy specimens is of value for identifying unsuspected pathologies requiring further postoperative management. Notably, appendix cancer is found incidentally in about 1% of appendectomy specimens.
Pathology diagnosis of appendicitis can be made by detecting a neutrophilic infiltrate of the muscularis propria.
Periappendicitis (inflammation of tissues around the appendix) is often found in conjunction with other abdominal pathology.
Differential diagnosis
Children: Gastroenteritis, mesenteric adenitis, Meckel's diverticulitis, intussusception, Henoch–Schönlein purpura, lobar pneumonia, urinary tract infection (abdominal pain in the absence of other symptoms can occur in children with UTI), new-onset Crohn's disease or ulcerative colitis, pancreatitis, and abdominal trauma from child abuse; distal intestinal obstruction syndrome in children with cystic fibrosis; typhlitis in children with leukemia.
Women: A pregnancy test is important for all women of childbearing age since an ectopic pregnancy can have signs and symptoms similar to those of appendicitis. Other obstetrical/gynecological causes of similar abdominal pain in women include pelvic inflammatory disease, ovarian torsion, menarche, dysmenorrhea, endometriosis, and mittelschmerz (ovulation pain, occurring approximately two weeks before menstruation).
Men: testicular torsion
Adults: new-onset Crohn disease, ulcerative colitis, regional enteritis, cholecystitis, renal colic, perforated peptic ulcer, pancreatitis, rectus sheath hematoma and epiploic appendagitis.
Elderly: diverticulitis, intestinal obstruction, colonic carcinoma, mesenteric ischemia, leaking aortic aneurysm.
The term "" is used to describe a condition mimicking appendicitis. It can be associated with Yersinia enterocolitica.
Management
Acute appendicitis is typically managed by surgery. While antibiotics are safe and effective for treating uncomplicated appendicitis, 31% of people had a recurrence within a year and required an eventual appendectomy. Antibiotics are less effective if an appendicolith is present. While 51% of patients treated with antibiotics did not need an appendectomy three years after treatment, the cost-effectiveness of surgery versus antibiotics is unclear.
Using antibiotics to prevent potential postoperative complications in emergency appendectomy procedures is recommended, and the antibiotics are effective when given to a person before, during, or after surgery.
Pain
Pain medications (such as morphine) do not appear to affect the accuracy of the clinical diagnosis of appendicitis and therefore should be given early in the patient's care. Historically there were concerns among some general surgeons that analgesics would affect the clinical exam in children, and some recommended that they not be given until the surgeon was able to examine the person.
Surgery
The surgical procedure for the removal of the appendix is called an appendectomy. Appendectomy can be performed through open or laparoscopic surgery. Laparoscopic appendectomy has several advantages over open appendectomy as an intervention for acute appendicitis.
Open appendectomy
For over a century, laparotomy (open appendectomy) was the standard treatment for acute appendicitis. This procedure consists of the removal of the infected appendix through a single large incision in the lower right area of the abdomen, usually two to three inches (about 76 mm) long.
During an open appendectomy, the person with suspected appendicitis is placed under general anesthesia to keep the muscles completely relaxed and the person unconscious. The incision, two to three inches (about 76 mm) long, is made in the right lower abdomen, several inches above the hip bone. Once the incision opens the abdominal cavity and the appendix is identified, the surgeon removes the infected tissue and cuts the appendix away from the surrounding tissue, then carefully inspects the infected area to ensure there are no signs that surrounding tissues are damaged or infected. In complicated appendicitis managed by an emergency open appendectomy, an abdominal drain (a temporary tube from the abdomen to the outside to prevent abscess formation) may be inserted, though this may lengthen the hospital stay. The surgeon then closes the incision by sewing the muscles and using surgical staples or stitches to close the skin. To prevent infection, the incision is covered with a sterile bandage or surgical adhesive.
Laparoscopic appendectomy
Laparoscopic appendectomy was introduced in 1983 and has become an increasingly prevalent intervention for acute appendicitis. This surgical procedure consists of making three to four small incisions in the abdomen. The appendectomy is performed by inserting a special surgical tool called a laparoscope into one of the incisions. The laparoscope is connected to a monitor outside the person's body and is designed to help the surgeon inspect the infected area in the abdomen. The other incisions are used for the surgical instruments that remove the appendix. Laparoscopic surgery requires general anesthesia, and it can last up to two hours. Laparoscopic appendectomy has several advantages over open appendectomy, including a shorter post-operative recovery, less post-operative pain, and a lower superficial surgical site infection rate. However, intra-abdominal abscess is almost three times more prevalent after laparoscopic appendectomy than after open appendectomy.
Laparoscopic-assisted transumbilical appendectomy
In pediatric patients, the high mobility of the cecum allows externalization of the appendix through the umbilicus, and the entire procedure can be performed with a single incision. Laparoscopic-assisted transumbilical appendectomy is a relatively recent technique but with a long published series and very good surgical and aesthetic results.
Pre-surgery
The treatment begins by keeping the person who will be having surgery from eating or drinking for a given period, usually overnight. An intravenous drip is used to hydrate the person who will be having surgery. Antibiotics given intravenously such as cefuroxime and metronidazole may be administered early to help kill bacteria and thus reduce the spread of infection in the abdomen and postoperative complications in the abdomen or wound. Equivocal cases may become more difficult to assess with antibiotic treatment and benefit from serial examinations. If the stomach is empty (no food in the past six hours), general anaesthesia is usually used. Otherwise, spinal anaesthesia may be used.
Once the decision to perform an appendectomy has been made, the preparation procedure takes approximately one to two hours. Meanwhile, the surgeon will explain the surgery procedure and will present the risks that must be considered when performing an appendectomy. (With all surgeries there are risks that must be evaluated before performing the procedures.) The risks are different depending on the state of the appendix. If the appendix has not ruptured, the complication rate is only about 3% but if the appendix has ruptured, the complication rate rises to almost 59%. The most usual complications that can occur are pneumonia, hernia of the incision, thrombophlebitis, bleeding and adhesions. Evidence indicates that a delay in obtaining surgery after admission results in no measurable difference in outcomes to the person with appendicitis.
The surgeon will explain how long the recovery process should take. Abdomen hair is usually removed to avoid complications that may appear regarding the incision.
In most cases, patients going in for surgery experience nausea or vomiting that requires medication before surgery. Antibiotics, along with pain medication, may be administered before appendectomies.
After surgery
Hospital lengths of stay typically range from a few hours to a few days but can extend to a few weeks if complications occur. The recovery process varies with the severity of the condition, in particular whether the appendix ruptured before surgery; recovery is generally much faster if the appendix did not rupture. It is important that people undergoing surgery follow their doctor's advice and limit their physical activity so the tissues can heal. Recovery after an appendectomy may not require changes to diet or lifestyle.
The length of hospital stays for appendicitis varies on the severity of the condition. A study from the United States found that in 2010, the average appendicitis hospital stay was 1.8 days. For stays where the person's appendix had ruptured, the average length of stay was 5.2 days.
After surgery, the patient will be transferred to a postanesthesia care unit, so their vital signs can be closely monitored to detect anesthesia- or surgery-related complications. Pain medication may be administered if necessary. After patients are completely awake, they are moved to a hospital room to recover. Most individuals will be offered clear liquids the day after the surgery, then progress to a regular diet when the intestines start to function correctly. Patients are recommended to sit on the edge of the bed and walk short distances several times a day. Moving is mandatory, and pain medication may be given if necessary. Full recovery from appendectomies takes about four to six weeks but can be prolonged to up to eight weeks if the appendix has ruptured.
Prognosis
Most people with appendicitis recover quickly after surgical treatment, but complications can occur if treatment is delayed or if peritonitis occurs. Recovery time depends on age, condition, complications, and other circumstances, including the amount of alcohol consumption, but usually is between 10 and 28 days. For young children (around ten years old), the recovery takes three weeks.
The possibility of peritonitis is the reason why acute appendicitis warrants rapid evaluation and treatment. People with suspected appendicitis may have to undergo a medical evacuation. Appendectomies have occasionally been performed in emergency conditions (i.e., not in a proper hospital) when a timely medical evacuation was impossible.
Typical acute appendicitis responds quickly to appendectomy and occasionally will resolve spontaneously. If appendicitis resolves spontaneously, it remains controversial whether an elective interval appendectomy should be performed to prevent a recurrent episode of appendicitis. Atypical appendicitis (associated with suppurative appendicitis) is more challenging to diagnose and is more apt to be complicated even when operated early. In either condition, prompt diagnosis and appendectomy yield the best results with full recovery in two to four weeks usually. Mortality and severe complications are unusual but do occur, especially if peritonitis persists and is untreated.
Another recognized entity is the appendicular lump (appendiceal mass). It forms when the appendix is not removed early during the infection and the omentum and intestine adhere to it, producing a palpable lump. During this period, surgery is risky and is avoided unless there is evidence of pus formation, indicated by fever and toxicity or seen on ultrasound; the condition is otherwise managed medically.
An unusual complication of an appendectomy is "stump appendicitis": inflammation occurs in the remnant appendiceal stump left after a prior incomplete appendectomy. Stump appendicitis can occur months to years after initial appendectomy and can be identified with imaging modalities such as ultrasound.
History
The history of appendicitis traces back to ancient medical texts, though its clear clinical understanding emerged in the 19th century. Berengario da Carpi provided the first recorded description of the appendix in the 16th century, followed by Andreas Vesalius and Gabriele Falloppio. Clinical understanding progressed in the 18th and 19th centuries, marked by Lorenz Heister's autopsy findings, Claudius Aymand's surgical intervention, and J. Mestivier's operation for appendicitis. Prior to the use of the term appendicitis, it was described with terms including perityphlitis, typhlitis, paratyphlitis, and extra-peritoneal abscess of the right iliac fossa. The term "appendicitis" was coined by the American physician Reginald Heber Fitz in 1886, leading to standardized diagnosis and treatment, including Charles McBurney's identification of McBurney's point. Modern appendectomy techniques evolved in the early 20th century, coinciding with advancements in pathology, notably demonstrated by Ludwig Aschoff in 1908.
Epidemiology
Appendicitis is most common between the ages of 5 and 40. In 2013, it resulted in 72,000 deaths globally, down from 88,000 in 1990.
In the United States, there were nearly 293,000 hospitalizations involving appendicitis in 2010. Appendicitis is one of the most frequent diagnoses for emergency department visits resulting in hospitalization among children ages 5–17 years in the United States.
Adults presenting to the emergency department with a known family history of appendicitis are more likely to have this disease than those without.
| Biology and health sciences | Specific diseases | Health |
71020 | https://en.wikipedia.org/wiki/Ultraviolet%E2%80%93visible%20spectroscopy | Ultraviolet–visible spectroscopy | Ultraviolet–visible spectrophotometry (UV–Vis or UV-VIS) refers to absorption spectroscopy or reflectance spectroscopy in part of the ultraviolet and the full, adjacent visible regions of the electromagnetic spectrum. Being relatively inexpensive and easily implemented, this methodology is widely used in diverse applied and fundamental applications. The only requirement is that the sample absorb in the UV-Vis region, i.e. be a chromophore. Absorption spectroscopy is complementary to fluorescence spectroscopy. Parameters of interest, besides the wavelength of measurement, are absorbance (A) or transmittance (%T) or reflectance (%R), and its change with time.
A UV-Vis spectrophotometer is an analytical instrument that measures the amount of ultraviolet (UV) and visible light that is absorbed by a sample. It is a widely used technique in chemistry, biochemistry, and other fields, to identify and quantify compounds in a variety of samples.
UV-Vis spectrophotometers work by passing a beam of light through the sample and measuring the amount of light that is absorbed at each wavelength. The amount of light absorbed is proportional to the concentration of the absorbing compound in the sample.
Optical transitions
Most molecules and ions absorb energy in the ultraviolet or visible range, i.e., they are chromophores. The absorbed photon excites an electron in the chromophore to a higher-energy molecular orbital, giving rise to an excited state. For organic chromophores, four types of transitions are possible: π–π*, n–π*, σ–σ*, and n–σ*. Transition metal complexes are often colored (i.e., absorb visible light) owing to the presence of multiple electronic states associated with incompletely filled d orbitals.
Applications
UV-Vis can be used to monitor structural changes in DNA.
UV-Vis spectroscopy is routinely used in analytical chemistry for the quantitative determination of diverse analytes and samples, such as transition metal ions, highly conjugated organic compounds, and biological macromolecules. Spectroscopic analysis is commonly carried out in solutions, but solids and gases may also be studied.
Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maxima and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases.
While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement.
The Beer–Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV-Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or more accurately, determined from a calibration curve.
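The calibration-curve approach can be sketched numerically. The following is a minimal illustration assuming ideal Beer–Lambert (linear) behavior; the standards, the unknown's absorbance, and the helper name `fit_calibration` are invented for the example:

```python
# Sketch: determining an unknown concentration from a calibration curve,
# assuming the linear Beer-Lambert relationship A = slope * c + intercept.

def fit_calibration(concs, absorbances):
    """Least-squares fit of absorbance vs concentration from standards."""
    n = len(concs)
    mean_c = sum(concs) / n
    mean_a = sum(absorbances) / n
    sxx = sum((c - mean_c) ** 2 for c in concs)
    sxy = sum((c - mean_c) * (a - mean_a) for c, a in zip(concs, absorbances))
    slope = sxy / sxx
    intercept = mean_a - slope * mean_c
    return slope, intercept

# Hypothetical standards: concentration (mM) vs measured absorbance
standards_c = [0.0, 0.1, 0.2, 0.4, 0.8]
standards_a = [0.002, 0.101, 0.198, 0.403, 0.799]

slope, intercept = fit_calibration(standards_c, standards_a)

# Concentration of an unknown read back from its measured absorbance
a_unknown = 0.300
c_unknown = (a_unknown - intercept) / slope
```

In practice a calibration determined this way is only valid over the concentration range of the standards and under the same instrumental conditions.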
A UV-Vis spectrophotometer may be used as a detector for HPLC. The presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor.
The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward–Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV-Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV-Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present.
The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer–Lambert law:

A = log10(I0/I) = εcL,

where A is the measured absorbance (formally dimensionless but generally reported in absorbance units (AU)), I0 is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L the path length through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of L·mol−1·cm−1.
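As a minimal numeric sketch of the law (the molar absorptivity and concentration below are assumed for illustration, not taken from any real measurement):

```python
# Beer-Lambert law sketch: A = log10(I0/I) = epsilon * c * L.
# All values are illustrative.

epsilon = 15000.0   # molar absorptivity, L mol^-1 cm^-1 (assumed)
L = 1.0             # path length, cm (standard cuvette)
c = 2.0e-5          # concentration, mol/L

A = epsilon * c * L          # predicted absorbance
T = 10 ** (-A)               # corresponding transmittance I/I0
c_back = A / (epsilon * L)   # recovering the concentration from absorbance
```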
The absorbance and extinction ε are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm.
The Beer–Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A second-order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (xylenol orange or neutral red, for example).
UV–Vis spectroscopy is also used in the semiconductor industry to measure the thickness and optical properties of thin films on a wafer. UV–Vis spectrometers are used to measure the reflectance of light, and can be analyzed via the Forouhi–Bloomer dispersion equations to determine the index of refraction (n) and the extinction coefficient (k) of a given film across the measured spectral range.
Practical considerations
The Beer–Lambert law has implicit assumptions that must be met experimentally for it to apply; otherwise there is a possibility of deviations from the law. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid. Worldwide, pharmacopoeias such as the American (USP) and European (Ph. Eur.) pharmacopeias demand that spectrophotometers perform according to strict regulatory requirements encompassing factors such as stray light and wavelength accuracy.
Spectral bandwidth
Spectral bandwidth of a spectrophotometer is the range of wavelengths that the instrument transmits through a sample at a given time. It is determined by the light source, the monochromator, its physical slit-width and optical dispersion and the detector of the spectrophotometer. The spectral bandwidth affects the resolution and accuracy of the measurement. A narrower spectral bandwidth provides higher resolution and accuracy, but also requires more time and energy to scan the entire spectrum. A wider spectral bandwidth allows for faster and easier scanning, but may result in lower resolution and accuracy, especially for samples with overlapping absorption peaks. Therefore, choosing an appropriate spectral bandwidth is important for obtaining reliable and precise results.
It is important to have a monochromatic source of radiation for the light incident on the sample cell to enhance the linearity of the response: the closer the incident light is to truly monochromatic, the more linear the response will be. The spectral bandwidth is measured as the range of wavelengths transmitted at half the maximum intensity of the light leaving the monochromator.
The best spectral bandwidth achievable is a specification of the UV spectrophotometer, and it characterizes how monochromatic the incident light can be. If this bandwidth is comparable to (or more than) the width of the absorption peak of the sample component, then the measured extinction coefficient will not be accurate. In reference measurements, the instrument bandwidth (bandwidth of the incident light) is kept below the width of the spectral peaks. When a test material is being measured, the bandwidth of the incident light should also be sufficiently narrow. Reducing the spectral bandwidth reduces the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal to noise ratio.
Wavelength error
The extinction coefficient of an analyte in solution changes gradually with wavelength. A peak (a wavelength where the absorbance reaches a maximum) in the absorbance curve vs wavelength, i.e. the UV-VIS spectrum, is where the rate of change of absorbance with wavelength is the lowest. Therefore, quantitative measurements of a solute are usually conducted, using a wavelength around the absorbance peak, to minimize inaccuracies produced by errors in wavelength, due to the change of extinction coefficient with wavelength.
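This effect can be illustrated with a model absorption band. The sketch below assumes a Gaussian band shape with invented parameters; it shows that a 1 nm wavelength error changes the measured absorbance far less at the peak than on the steep side of the band:

```python
import math

# Model absorbance vs wavelength (nm) as a Gaussian band.
# Peak position, width, and peak absorbance are assumed for illustration.

def absorbance(wl, peak=500.0, width=20.0, a_max=1.0):
    return a_max * math.exp(-((wl - peak) ** 2) / (2 * width ** 2))

dwl = 1.0  # a 1 nm wavelength error

# Absorbance error caused by the same wavelength error, at the peak
# vs on the steep flank of the band
err_at_peak = abs(absorbance(500.0 + dwl) - absorbance(500.0))
err_on_slope = abs(absorbance(480.0 + dwl) - absorbance(480.0))
```

Because the slope of the band is zero at its maximum, the error at the peak is more than an order of magnitude smaller than on the flank in this model.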
Stray light
Stray light in a UV spectrophotometer is any light that reaches its detector that is not of the wavelength selected by the monochromator. This can be caused, for instance, by scattering of light within the instrument, or by reflections from optical surfaces.
Stray light can cause significant errors in absorbance measurements, especially at high absorbances, because the stray light will be added to the signal detected by the detector, even though it is not part of the actually selected wavelength. The result is that the measured and reported absorbance will be lower than the actual absorbance of the sample.
The stray light is an important factor, as it determines the purity of the light used for the analysis. The most important factor affecting it is the stray light level of the monochromator.
Typically a detector used in a UV-VIS spectrophotometer is broadband; it responds to all the light that reaches it. If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear.
As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 Absorbance Units (AU), which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range.
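The compression of high absorbance readings by stray light can be sketched with a simple model in which a fixed stray-light fraction s always reaches the detector, so the measured transmittance becomes (T + s)/(1 + s). The numbers below are illustrative only:

```python
import math

# Model: a fixed stray-light fraction s adds to the transmitted light,
# so the measured transmittance is (T_true + s) / (1 + s).
# Values are illustrative.

def measured_absorbance(a_true, stray_fraction):
    t_true = 10 ** (-a_true)
    t_meas = (t_true + stray_fraction) / (1 + stray_fraction)
    return -math.log10(t_meas)

s = 1e-3  # stray light equivalent to 3 AU (single-monochromator rough guide)

low = measured_absorbance(1.0, s)   # nearly unaffected reading
high = measured_absorbance(4.0, s)  # compressed toward -log10(s) = 3
```

In this model no true absorbance, however large, can produce a reading above about 3 AU, matching the rough guide above.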
Deviations from the Beer–Lambert law
At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test that can be used to test for this effect is to vary the path length of the measurement. In the Beer–Lambert law, varying concentration and path length has an equivalent effect—diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing if this relationship holds true is one way to judge if absorption flattening is occurring.
Solutions that are not homogeneous can show deviations from the Beer–Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles. The deviations will be most noticeable under conditions of low concentration and high absorbance. The last reference describes a way to correct for this deviation.
Some solutions, like copper(II) chloride in water, change visually at a certain concentration because of changed conditions around the coloured ion (the divalent copper ion). For copper(II) chloride it means a shift from blue to green, which would mean that monochromatic measurements would deviate from the Beer–Lambert law.
Measurement uncertainty sources
The above factors contribute to the measurement uncertainty of the results obtained with UV-Vis spectrophotometry. If UV-Vis spectrophotometry is used in quantitative chemical analysis then the results are additionally affected by uncertainty sources arising from the nature of the compounds and/or solutions that are measured. These include spectral interferences caused by absorption band overlap, fading of the color of the absorbing species (caused by decomposition or reaction) and possible composition mismatch between the sample and the calibration solution.
Ultraviolet–visible spectrophotometer
The instrument used in ultraviolet–visible spectroscopy is called a UV-Vis spectrophotometer. It measures the intensity of light after passing through a sample (I), and compares it to the intensity of light before it passes through the sample (I0). The ratio I/I0 is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is based on the transmittance:

A = -log10(%T / 100)
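The conversion between percent transmittance and absorbance can be sketched as follows (a minimal illustration; the function names are invented):

```python
import math

# Converting between percent transmittance and absorbance,
# using A = -log10(%T / 100). Values are illustrative.

def absorbance_from_percent_t(percent_t):
    return -math.log10(percent_t / 100.0)

def percent_t_from_absorbance(a):
    return 100.0 * 10 ** (-a)

a = absorbance_from_percent_t(10.0)   # 10 %T corresponds to A = 1
t = percent_t_from_absorbance(2.0)    # A = 2 corresponds to 1 %T
```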
The UV–visible spectrophotometer can also be configured to measure reflectance. In this case, the spectrophotometer measures the intensity of light reflected from a sample (I), and compares it to the intensity of light reflected from a reference material (I0), such as a white tile. The ratio I/I0 is called the reflectance, and is usually expressed as a percentage (%R).
The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or a prism as a monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300–2500 nm), a deuterium arc lamp, which is continuous over the ultraviolet region (190–400 nm), a xenon arc lamp, which is continuous from 160 to 2,000 nm; or more recently, light emitting diodes (LED) for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step-through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one or two dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously.
A spectrophotometer can be either single beam or double beam. In a single beam instrument (such as the Spectronic 20), all of the light passes through the sample cell; the reference intensity I0 must be measured by removing the sample. This was the earliest design and is still in common use in both teaching and industrial labs.
In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% Transmission (or 0 Absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case, the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.
In a single-beam instrument, the cuvette containing only the solvent has to be measured first. Mettler Toledo developed a single-beam array spectrophotometer that allows fast and accurate measurements over the UV-Vis range. The light source consists of a xenon flash lamp for the ultraviolet (UV), visible (VIS), and near-infrared wavelength regions, covering a spectral range from 190 up to 1100 nm. The lamp flashes are focused on a glass fiber that directs the beam of light onto a cuvette containing the sample solution. The beam passes through the sample, and specific wavelengths are absorbed by the sample components. The remaining light is collected after the cuvette by a glass fiber and directed into a spectrograph. The spectrograph consists of a diffraction grating that separates the light into its different wavelengths and a CCD sensor to record the data. The whole spectrum is thus measured simultaneously, allowing for fast recording.
Samples for UV-Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer–Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high quality fused silica or quartz glass because these are transparent throughout the UV, visible and near infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.
Specialized instruments have also been made. These include attaching spectrophotometers to telescopes to measure the spectra of astronomical features. UV–visible microspectrophotometers consist of a UV–visible microscope integrated with a UV–visible spectrophotometer.
A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. By removing the concentration dependence, the extinction coefficient (ε) can be determined as a function of wavelength.
Microspectrophotometry
UV–visible spectroscopy of microscopic samples is done by integrating an optical microscope with UV–visible optics, white light sources, a monochromator, and a sensitive detector such as a charge-coupled device (CCD) or photomultiplier tube (PMT). As only a single optical path is available, these are single beam instruments. Modern instruments are capable of measuring UV–visible spectra in both reflectance and transmission of micron-scale sampling areas. The advantages of using such instruments is that they are able to measure microscopic samples but are also able to measure the spectra of larger samples with high spatial resolution. As such, they are used in the forensic laboratory to analyze the dyes and pigments in individual textile fibers, microscopic paint chips and the color of glass fragments. They are also used in materials science and biological research and for determining the energy content of coal and petroleum source rock by measuring the vitrinite reflectance. Microspectrophotometers are used in the semiconductor and micro-optics industries for monitoring the thickness of thin films after they have been deposited. In the semiconductor industry, they are used because the critical dimensions of circuitry is microscopic. A typical test of a semiconductor wafer would entail the acquisition of spectra from many points on a patterned or unpatterned wafer. The thickness of the deposited films may be calculated from the interference pattern of the spectra. In addition, ultraviolet–visible spectrophotometry can be used to determine the thickness, along with the refractive index and extinction coefficient of thin films. A map of the film thickness across the entire wafer can then be generated and used for quality control purposes.
Additional applications
UV-Vis can be applied to characterize the rate of a chemical reaction. Illustrative is the conversion of the yellow-orange and blue isomers of mercury dithizonate. This method of analysis relies on the fact that absorbance is linearly proportional to concentration. The same approach allows determination of equilibria between chromophores.
From the spectrum of burning gases, it is possible to determine a chemical composition of a fuel, temperature of gases, and air-fuel ratio.
| Physical sciences | Spectroscopy | Chemistry |
71025 | https://en.wikipedia.org/wiki/Exotic%20Shorthair | Exotic Shorthair | The Exotic Shorthair is a breed of cat developed as a short-haired version of the Persian. The Exotic is similar to the Persian in appearance with the exception of the short dense coat.
History
In the late 1950s, some American Shorthair breeders secretly used the Persian as an outcross to improve their cats' body type, and crosses were also made with the Russian Blue and the Burmese. The crossbred look gained recognition in the show ring, but unhappy American Shorthair breeders successfully pushed through a new breed standard that would disqualify American Shorthairs showing signs of crossbreeding. One American Shorthair breeder who saw the potential of the Persian/American Shorthair cross proposed recognizing it as a new breed and eventually persuaded Cat Fanciers' Association judge and American Shorthair breeder Jane Martinke to do so; it was recognized in 1966 under the name Exotic Shorthair. In 1987, the Cat Fanciers' Association closed the Exotic to shorthair outcrosses, leaving the Persian as the only allowable outcross breed.
Description
Appearance
The Exotic Shorthair is a medium-to-large-sized breed, like the Persian. The head is round and large, the cheeks are full and rounded, and the eyes are large and round. The ears are small, with well-rounded tips, and are set low on the head. The tail is short relative to the length of the body. Like the British Shorthair and the Persian, the Exotic Shorthair comes in a wide range of colour variations.
Longhair Exotics
Because of the regular use of Persian as outcrosses, some Exotics may carry a copy of the recessive longhair gene. When two such cats mate, there is a 1 in 4 chance of each offspring being longhaired. Longhaired Exotics are not considered Persians by the Cat Fanciers' Association, although The International Cat Association accepts them as Persians. Other associations like the American Cat Fanciers Association register them as a separate Exotic Longhair breed.
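The 1-in-4 figure follows from a simple Mendelian cross between two carriers; a minimal sketch (the allele symbols "L"/"l" are illustrative labels, not official genetic nomenclature):

```python
# Hypothetical sketch of the cross described above: two shorthaired
# Exotics each carrying one recessive longhair allele ("l") alongside
# the dominant shorthair allele ("L"). Enumerating the four equally
# likely allele combinations shows the 1-in-4 chance of a longhaired
# (l/l) kitten.
from itertools import product

sire = ("L", "l")   # carrier male
dam = ("L", "l")    # carrier female

offspring = list(product(sire, dam))      # four equally likely genotypes
longhaired = [g for g in offspring if g == ("l", "l")]
probability = len(longhaired) / len(offspring)
print(f"Chance of a longhaired kitten: {probability:.2f}")  # 0.25
```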
Health
Like the Persian, the Exotic Shorthair is a brachycephalic breed, meaning that its nose and eyes are in close proximity to each other, giving the face a pushed-in appearance and causing associated health problems. Some conditions common in the Exotic Shorthair are listed below.
Brachycephalic airway obstructive syndrome. Also referred to as brachycephalic respiratory syndrome or congenital obstructive upper airway disease, this causes upper airway abnormalities ranging in severity. The syndrome can cause increased airway resistance, inflammation of structures in the airways, and increased strain on the heart. Treatment includes weight loss, surgery, and avoiding humid or hot conditions.
Corneal sequestrum. A necrosis of the cornea of unknown origin.
Dystocia. An abnormal labor due to large-domed skulls.
Feline polycystic kidney disease (PKD). Exotic Shorthairs, as well as Persians and other Persian-derived cats, have a high chance of inheriting PKD, a disease that can lead to kidney failure. Several studies using ultrasound scan screening have shown that the prevalence of PKD in Exotics is between 40 and 50% in developed nations. DNA screening for PKD is recommended for all Exotic cats used in breeding programs to reduce the incidence of kidney disease by spaying and neutering PKD positive cats.
In a review of over 5,000 cases of urate urolithiasis the Exotic Shorthair was significantly under-represented, with only one of the recorded cases belonging to an Exotic Shorthair.
Recognition
The Exotic has steadily gained popularity among cat fanciers with the help of the devoted advocates of the breed who saw the value in a Persian and Shorthair crossbreed.
In 1967, the Exotic Shorthair was first accepted for Championship status by the Cat Fanciers' Association.
In 1971, the first Exotic Shorthair achieved the status of Grand Champion.
In 1986, the Fédération Internationale Féline recognized the Exotic Shorthair.
In 1991, an Exotic was the Cat Fanciers' Association's Cat of the Year.
In 1992, the Cat Fanciers' Association's Best Kitten was an Exotic.
| Biology and health sciences | Cats | Animals |
71088 | https://en.wikipedia.org/wiki/Hibernation | Hibernation | Hibernation is a state of minimal activity and metabolic depression undergone by some animal species. Hibernation is a seasonal heterothermy characterized by low body-temperature, slow breathing and heart-rate, and low metabolic rate. It is most commonly used to pass through winter months – called overwintering.
Although traditionally reserved for "deep" hibernators such as rodents, the term has been redefined to include animals such as bears and is now applied based on active metabolic suppression rather than any absolute decline in body temperature. Many experts believe that the processes of daily torpor and hibernation form a continuum and utilise similar mechanisms. The equivalent during the summer months is aestivation.
Hibernation functions to conserve energy when sufficient food is not available. To achieve this energy saving, an endothermic animal decreases its metabolic rate and thereby its body temperature. Hibernation may last days, weeks, or months, depending on the species, ambient temperature, time of year, and the individual's body condition. Before entering hibernation, animals need to store enough energy to last through the duration of their dormant period, possibly as long as an entire winter. Larger species become hyperphagic, eating large amounts of food and storing the energy in their bodies as fat deposits. In many small species, food caching takes the place of eating large amounts and fattening.
Some species of mammals hibernate while gestating young, which are born either while the mother hibernates or shortly afterwards. For example, female black bears go into hibernation during the winter months in order to give birth to their offspring. The pregnant mothers significantly increase their body mass prior to hibernation, and this increase is further reflected in the weight of the offspring. The fat accumulation enables them to provide a sufficiently warm and nurturing environment for their newborns. During hibernation, they subsequently lose 15–27% of their pre-hibernation weight by using their stored fats for energy.
Ectothermic animals also undergo periods of metabolic suppression and dormancy, which in many invertebrates is referred to as diapause. Some researchers and members of the public use the term brumate to describe winter dormancy of reptiles, but the more general term hibernation is believed adequate to refer to any winter dormancy. Many insects, such as the wasp Polistes exclamans and the beetle Bolitotherus, exhibit periods of dormancy which have often been referred to as hibernation, despite their ectothermy. Botanists may use the term "seed hibernation" to refer to a form of seed dormancy.
Mammals
There is a variety of definitions for terms that describe hibernation in mammals, and different mammal clades hibernate differently. The following subsections discuss the terms obligate and facultative hibernation. The last two sections point out in particular primates, none of whom were thought to hibernate until recently, and bears, whose winter torpor had been contested as not being "true hibernation" during the late 20th century, since it is dissimilar from hibernation seen in rodents.
Obligate hibernation
Obligate hibernators are animals that spontaneously, and annually, enter hibernation regardless of ambient temperature and access to food. Obligate hibernators include many species of ground squirrels, other rodents, European hedgehogs and other insectivores, monotremes, and marsupials. These species undergo what has been traditionally called "hibernation": a physiological state wherein the body temperature drops to near ambient temperature, and heart and respiration rates slow drastically.
The typical winter season for obligate hibernators is characterized by periods of torpor interrupted by periodic, euthermic arousals, during which body temperatures and heart rates are restored to more typical levels. The cause and purpose of these arousals are still not clear; the question of why hibernators may return periodically to normal body temperatures has plagued researchers for decades, and while there is still no clear-cut explanation, there are multiple hypotheses on the topic. One favored hypothesis is that hibernators build a "sleep debt" during hibernation, and so must occasionally warm up to sleep. This has been supported by evidence in the Arctic ground squirrel. Other theories postulate that brief periods of high body temperature during hibernation allow the animal to restore its available energy sources or to initiate an immune response.
Hibernating Arctic ground squirrels may maintain sub-zero abdominal temperatures for more than three weeks at a time, although the temperatures at the head and neck remain at or above freezing.
Facultative hibernation
Facultative hibernators enter hibernation only when either cold-stressed, food-deprived, or both, unlike obligate hibernators, who enter hibernation based on seasonal timing cues rather than as a response to stressors from the environment.
A good example of the differences between these two types of hibernation can be seen in prairie dogs. The white-tailed prairie dog is an obligate hibernator, while the closely related black-tailed prairie dog is a facultative hibernator.
Primates
While hibernation has long been studied in rodents (namely ground squirrels), no primate or tropical mammal was known to hibernate until the discovery of hibernation in the fat-tailed dwarf lemur of Madagascar, which hibernates in tree holes for seven months of the year. Since Malagasy winter temperatures can nonetheless be quite warm, hibernation is not exclusively an adaptation to low ambient temperatures.
The hibernation of this lemur is strongly dependent on the thermal behaviour of its tree hole: If the hole is poorly insulated, the lemur's body temperature fluctuates widely, passively following the ambient temperature; if well insulated, the body temperature stays fairly constant and the animal undergoes regular spells of arousal. Dausmann found that hypometabolism in hibernating animals is not necessarily coupled with low body temperature.
Bears
Historically it was unclear whether or not bears truly hibernate, since they experience only a modest decline in body temperature (3–5 °C) compared with the much larger decreases (often 32 °C or more) seen in other hibernators. Many researchers thought that their deep sleep was not comparable with true, deep hibernation, but this theory was refuted by research in 2011 on captive black bears and again in 2016 in a study on brown bears.
Hibernating bears are able to recycle their proteins and urine, allowing them to stop urinating for months and to avoid muscle atrophy. They stay hydrated with the metabolic water that is produced in sufficient quantities to satisfy the water needs of the bear. They also do not eat or drink while hibernating, but live off their stored fat. Despite long-term inactivity and lack of food intake, hibernating bears are believed to maintain their bone mass and do not suffer from osteoporosis. They also increase the availability of certain essential amino acids in the muscle, as well as regulate the transcription of a suite of genes that limit muscle wasting.
A study by G. Edgar Folk, Jill M. Hunt and Mary A. Folk compared EKGs of typical hibernators with those of three bear species across season, activity, and dormancy, and found that the reduced relaxation (QT) interval of small hibernators was the same for the three bear species. They also found that the QT interval changed from summer to winter in both the typical hibernators and the bears. This 1977 study provided some of the first evidence that bears are hibernators.
In a 2016 study, wildlife veterinarian and associate professor at Inland Norway University of Applied Sciences, Alina L. Evans, researched 14 brown bears over three winters. Their movement, heart rate, heart rate variability, body temperature, physical activity, ambient temperature, and snow depth were measured to identify the drivers of the start and end of hibernation for bears. This study built the first chronology of both ecological and physiological events from before the start to the end of hibernation in the field. This research found that bears would enter their den when snow arrived and ambient temperature dropped to 0 °C. However, physical activity, heart rate, and body temperature started to drop slowly even several weeks before this. Once in their dens, the bears' heart rate variability dropped dramatically, indirectly suggesting metabolic suppression is related to their hibernation. Two months before the end of hibernation, the bears' body temperature starts to rise, unrelated to heart rate variability but rather driven by the ambient temperature. The heart rate variability only increases around three weeks before arousal and the bears only leave their den once outside temperatures are at their lower critical temperature. These findings suggest that bears are thermoconforming and bear hibernation is driven by environmental cues, but arousal is driven by physiological cues.
Birds
Ancient people believed that swallows hibernated, and ornithologist Gilbert White documented anecdotal evidence in his 1789 book The Natural History of Selborne that indicated the belief was still current in his time. It is now understood that the vast majority of bird species typically do not hibernate, instead utilizing shorter periods of torpor. One known exception is the common poorwill (Phalaenoptilus nuttallii), for which hibernation was first documented by Edmund Jaeger.
Dormancy and freezing in ectotherms
Because they cannot actively down-regulate their body temperature or metabolic rate, ectothermic animals (including fish, reptiles, and amphibians) cannot engage in obligate or facultative hibernation. They can, however, experience decreased metabolic rates in colder environments or under low oxygen availability (hypoxia), and exhibit dormancy (known as brumation). It was once thought that basking sharks settled to the floor of the North Sea and became dormant, but research by David Sims in 2003 dispelled this hypothesis, showing that the sharks traveled long distances throughout the seasons, tracking the areas with the highest quantity of plankton. Epaulette sharks have been documented to survive for three hours without oxygen, even at high temperatures, as a means of surviving in their shoreline habitat, where water and oxygen levels vary with the tide. Other animals able to survive long periods with very little or no oxygen include goldfish, red-eared sliders, wood frogs, and bar-headed geese. The ability to survive hypoxic or anoxic conditions is not closely related to endotherm hibernation.
Some animals can literally survive winter by freezing. For example, some fish, amphibians, and reptiles can naturally freeze and then "wake" up in the spring. These species have evolved freeze-tolerance mechanisms such as antifreeze proteins.
Hibernation induction trigger (HIT) protein and recombinant protein technology
Hibernation induction trigger (HIT) proteins isolated from mammals have been used in the study of organ recovery rates. One study in 1997 found that delta 2 opioid and hibernation induction trigger (HIT) proteins were not able to increase the recovery rate of heart tissue during ischemia. Although unable to increase recovery rates at the time of ischemia, the protein precursors were found to play a role in the preservation of veterinary organ function.
Recent advances in recombinant protein technology make it possible for scientists to manufacture hibernation induction trigger (HIT) proteins in the laboratory without the need for animal euthanasia. Bioengineering of proteins can aid in the protection of vulnerable populations of bears and other mammals that produce valuable proteins. Protein sequencing of HIT proteins, such as the α1-glycoprotein-like 88 kDa hibernation-related protein HRP, contributes to this research pool. A 2014 study utilized recombinant technology to construct, express, purify, and isolate animal proteins (HP-20, HP-25, and HP-27) outside the animal in order to study key hibernation proteins (HPs).
In humans
Researchers have studied how to induce hibernation in humans. The ability to hibernate would be useful for a number of reasons, such as saving the lives of seriously ill or injured people by temporarily putting them in a state of hibernation until treatment can be given. For space travel, human hibernation is also under consideration, such as for missions to Mars.
Anthropologists are also studying whether hibernation was possible in early hominid species.
Evolution of hibernation
In endothermic animals
As the ancestors of birds and mammals colonized land, leaving the relatively stable marine environments, more intense terrestrial seasons began playing a larger role in animals' lives. Some marine animals do go through periods of dormancy, but the effect is stronger and more widespread in terrestrial environments. As hibernation is a seasonal response, the movement of the ancestor of birds and mammals onto land introduced them to seasonal pressures that would eventually become hibernation. This is true for all clades of animals that undergo winter dormancy; the more prominent the seasons are, the longer the dormant period tends to be on average. Hibernation of endothermic animals has likely evolved multiple times, at least once in mammals—though it is debated whether or not it evolved more than once in mammals—and at least once in birds.
In both cases, hibernation likely evolved alongside endothermy, with the earliest suggested instance of hibernation being in Thrinaxodon, an ancestor of mammals that lived roughly 252 million years ago. The evolution of endothermy allowed animals greater levels of activity and better incubation of embryos, among other benefits for animals in the Permian and Triassic periods. To conserve energy, the ancestors of birds and mammals likely experienced an early form of torpor or hibernation when they were not using their thermoregulatory abilities during the transition from ectothermy to endothermy. This contrasts with the previously dominant hypothesis that hibernation evolved after endothermy in response to the emergence of colder habitats. Body size also affected the evolution of hibernation: endotherms that grow large enough tend to lose the ability to be selectively heterothermic, with bears being one of very few exceptions. After torpor and hibernation diverged from a common proto-hibernating ancestor of birds and mammals, the ability to hibernate or enter torpor was lost in most larger mammals and birds. Hibernation is less favored in larger animals because, as animals increase in size, the surface-area-to-volume ratio decreases and less energy is needed to maintain a high internal body temperature, making hibernation unnecessary.
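The surface-area-to-volume argument can be made concrete by modelling an animal's body as a sphere, a deliberate simplification: for a sphere, SA/V = 3/r, so each doubling of radius halves the relative surface through which heat is lost. The radii below are made-up illustrative sizes.

```python
# Illustrative arithmetic for the surface-area-to-volume argument.
# For a sphere of radius r: SA = 4*pi*r^2, V = (4/3)*pi*r^3,
# so SA/V = 3/r -- larger bodies have relatively less surface,
# making it cheaper to maintain a high internal body temperature.
import math

def surface_to_volume(radius):
    surface = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface / volume

for r in (0.05, 0.10, 0.20):   # radii in metres, made-up animal sizes
    print(f"r = {r:.2f} m -> SA/V = {surface_to_volume(r):.1f} per metre")
```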
There is evidence that hibernation evolved separately in marsupials and placental mammals, though it is not settled. That evidence stems from development, where as soon as young marsupials from hibernating species are able to regulate their own heat, they have the capability to hibernate. In contrast, placental mammals that hibernate first develop homeothermy, only developing the ability to hibernate at a later point. This difference in development is evidence, though inconclusive, that they evolved by slightly different mechanisms and thus at different times.
Brumation in reptiles
As reptiles are ectothermic, having no system to deal with cold temperatures would be deadly in many environments. Reptilian winter dormancy, or brumation, likely evolved to help reptiles survive colder conditions. Reptiles that are dormant in the winter tend to have higher survival rates and slower aging. Reptiles evolved to exploit their ectothermy to deliberately cool their internal body temperatures. As opposed to mammals or birds, which will prepare for their hibernation but not directly cause it through their behavior, reptiles will trigger their own hibernation through their behavior. Reptiles seek out colder temperatures based on a periodic internal clock, which is likely triggered by cooler outside temperatures, as shown in the Texas horned lizard (Phrynosoma cornutum). One mechanism that reptiles use to survive hibernation, hypercapnic acidosis (the buildup of carbon dioxide in the blood), is also present in mammal hibernation. This is likely an example of convergent evolution. Hypercapnic acidosis evolved as a mechanism to slow metabolism and also interfere with oxygen transport so that oxygen is not used up and can still reach tissues in low oxygen periods of dormancy.
Diapause in arthropods
Seasonal diapause, or arthropod winter dormancy, appears to be plastic and quickly evolving, with large genetic variation and strong effects of natural selection, and it has evolved many times across many clades of arthropods. As such, there is very little phylogenetic conservation in the genetic mechanism for diapause. The timing and extent of seasonal diapause seem especially variable, and are currently evolving in response to climate change. As is typical of hibernation, diapause evolved after the increased influence of seasonality as arthropods colonized terrestrial environments, both as a mechanism to keep energy costs low, particularly in harsher-than-normal environments, and as a way to time the active or reproductive periods of arthropods. It is thought to have originally evolved in three stages. The first was the development of neuroendocrine control over bodily functions; the second was the pairing of that control with environmental changes, in this case metabolic rates decreasing in response to colder temperatures; and the third was the pairing of these controls with reliable seasonal indicators within the arthropod, such as biological timers. From these steps, arthropods developed a seasonal diapause in which many of their biological functions are paired with a seasonal rhythm within the organism. This is a very similar mechanism to the evolution of insect migration, where movement patterns, rather than bodily functions like metabolism, became paired with seasonal indicators.
Winter dormancy in fish
While most animals that go through winter dormancy lower their metabolic rates, some fish, such as the cunner, do not. Rather than actively depressing their basal metabolic rate, they simply reduce their activity level. Fish that undergo winter dormancy in oxygenated water survive through inactivity paired with the colder temperature, which decreases energy consumption but not the basal metabolic rate their bodies consume. The Antarctic yellowbelly rockcod (Notothenia coriiceps), however, as well as fish that undergo winter dormancy in hypoxic conditions, do suppress their metabolism like other animals that are dormant in the winter. The mechanism for the evolution of metabolic suppression in fish is unknown. Most fish that are dormant in the winter save enough energy by remaining still, so there is not a strong selective pressure to develop a metabolic suppression mechanism like that which is necessary in hypoxic conditions.
| Biology and health sciences | Ethology | null |
71100 | https://en.wikipedia.org/wiki/Red-eared%20slider | Red-eared slider | The red-eared slider or red-eared terrapin (Trachemys scripta elegans) is a subspecies of the pond slider (Trachemys scripta), a semiaquatic turtle belonging to the family Emydidae. It is the most popular pet turtle in the United States, is also popular as a pet across the rest of the world, and is the most invasive turtle. It is the most commonly traded turtle in the world.
The red-eared slider is native from the Midwestern United States to northern Mexico, but has become established in other places because of pet releases, and has become invasive in many areas where it outcompetes native species. The red-eared slider is included in the list of the world's 100 most invasive species.
Etymology
The red-eared slider gets its name from the small red stripe around its ears, or where its ears would be, and from its ability to slide quickly off rocks and logs into the water. The species was previously known as Troost's turtle in honor of the American herpetologist Gerard Troost; Trachemys scripta troostii is now the scientific name for another subspecies, the Cumberland slider.
Taxonomy
The red-eared slider belongs to the order Testudines, which contains about 250 turtle species. It is a subspecies of Trachemys scripta and was previously classified under the name Chrysemys scripta elegans. Trachemys scripta contains three subspecies: the red-eared slider, the yellow-bellied slider, and the Cumberland slider.
Description
The carapace of this species can reach more than in length, but the typical length ranges from . The females of the species are usually larger than the males. They typically live for 20–30 years, although some individuals have lived for more than 40 years. Their life expectancy is shorter when they are kept in captivity. The quality of their living environment has a strong influence on their lifespan and well-being.
The shell is divided into the upper or dorsal carapace, and the lower, ventral carapace or plastron. The upper carapace consists of the vertebral scutes, which form the central, elevated portion; pleural scutes that are located around the vertebral scutes; and then the marginal scutes around the edge of the carapace. The rear marginal scutes are notched. The scutes are bony keratinous elements. The carapace is oval and flattened (especially in the male) and has a weak keel that is more pronounced in the young. The color of the carapace changes depending on the age of the turtle. It usually has a dark green background with light and dark, highly variable markings. In young or recently hatched turtles, it is leaf green and gets slightly darker as a turtle gets older, until it is a very dark green, and then turns a shade between brown and olive green. The plastron is always a light yellow with dark, paired, irregular markings in the centre of most scutes. The plastron is highly variable in pattern. The head, legs, and tail are green with fine, irregular, yellow lines. The whole shell is covered in these stripes and markings that aid in camouflaging an individual.
These turtles also have a complete skeletal system, with partially webbed feet that help them to swim and that can be withdrawn inside the carapace along with the head and tail. The red stripe on each side of the head distinguishes the red-eared slider from all other North American species and gives this species its name, as the stripe is located behind the eyes, where their (external) ears would be. These stripes may lose their color over time. Color and vibrance of ear stripe can indicate immune health, with bright red having higher immune response than yellow stripes. Some individuals can also have a small mark of the same color on the top of their heads. The red-eared slider does not have a visible outer ear or an external auditory canal; instead, it relies on a middle ear entirely covered by a cartilaginous tympanic disc.
Like other turtles, the species is poikilotherm and thus dependent on the temperature of its environment. For this reason, it needs to sunbathe frequently to warm up and maintain body temperature.
Sexual dimorphism
Some dimorphism exists between males and females.
Red-eared slider young look practically identical regardless of their sex, making distinguishing them difficult. One useful method, however, is to inspect the markings under their carapace, which fade as the turtles age. Distinguishing the sex of adults is much easier, as the shells of mature males are smaller than those of females. Male red-eared sliders reach sexual maturity when their carapaces' diameters measure and females reach maturity when their carapaces measure about . Both males and females reach sexual maturity at 5–6 years old. Males are normally smaller than females, although this parameter is sometimes difficult to apply, as individuals being compared could be of different ages.
Males have longer claws on their front feet than the females; this helps them to hold onto a female during mating, and is used during courtship displays. The males' tails are thicker and longer. Typically, the cloacal opening of a female is at or under the rear edge of the carapace, while the male's opening occurs beyond the edge of the carapace. The male's plastron is slightly concave, while that of the female is completely flat. The male's concave plastron also helps to stabilize the male on the female's carapace during mating. Older males can sometimes have a dark greyish-olive green melanistic coloration, with very subdued markings. The red stripe on the sides of the head may be difficult to see or be absent. The female's appearance is substantially the same throughout her life.
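The dimorphism cues above amount to an informal decision key. The sketch below condenses them into a hypothetical majority-vote function; the trait names and the majority rule are illustrative assumptions, not a veterinary sexing protocol.

```python
# Hypothetical sketch: a majority-vote decision key over the
# male-typical traits described above (long front claws, thick long
# tail, cloacal opening beyond the carapace edge, concave plastron).
# This is an illustration of the cues, not a diagnostic procedure.

def likely_sex(front_claws_long, tail_thick_and_long,
               cloaca_beyond_carapace_edge, plastron_concave):
    """Count male-typical traits; three or more suggests a male."""
    male_traits = sum([front_claws_long, tail_thick_and_long,
                       cloaca_beyond_carapace_edge, plastron_concave])
    return "male" if male_traits >= 3 else "female"

print(likely_sex(True, True, True, False))     # male-typical profile
print(likely_sex(False, False, False, False))  # female-typical profile
```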
Distribution and habitat
The red-eared slider originated from the area around the Mississippi River and the Gulf of Mexico, in warm climates in the Southeastern United States. Their native areas range from the southeast of Colorado to Virginia and Florida. In nature, they inhabit areas with a source of still, warm water, such as ponds, lakes, swamps, creeks, streams, or slow-flowing rivers.
They live in areas of calm water, where they are able to leave the water easily by climbing onto rocks or tree trunks so they can warm up in the sun. Individuals are often found sunbathing in a group or even on top of each other. They also require abundant aquatic plants, as these are the adults' main food, although they are omnivores. Turtles in the wild always remain close to water unless they are searching for a new habitat or when females leave the water to lay their eggs.
Invasive species
Invasive red-eared sliders cause negative impacts in the ecosystems they are introduced to because they have certain advantages over the native populations, such as a lower age at maturity, higher fecundity rates, and larger body size, which gives them a competitive advantage at basking and nesting sites, as well as when exploiting food resources. They also transmit diseases and displace the other turtle species with which they compete for food and breeding space.
Owing to their popularity as pets, red-eared sliders have been released or escaped into the wild in many parts of the world. This turtle is considered one of the world's worst invasive species. Feral populations are now found in Bermuda, Canada, Australia, Europe, Great Britain, South Africa, the Caribbean Islands, Israel, Bahrain, the Mariana Islands, Guam, Russia, and south- and far-east Asia. Within Great Britain, red-eared sliders have a wide distribution throughout England, Scotland, and Wales.
In Australia, it is illegal for members of the public to import, keep, trade, or release red-eared sliders, as they are regarded as an invasive species – see below. Their import has also been banned by the European Union as well as by specific EU member countries. In 2015, Japan announced plans to ban the import of red-eared sliders, and the ban was officially established in June 2023. While the measure prohibits importing, trading, and releasing into the wild both red-eared sliders and red swamp crayfish, it remains legal to keep existing animals at home.
Behavior
Red-eared sliders are almost entirely aquatic, but as they are cold-blooded, they leave the water to sunbathe to regulate their temperature.
Hibernation
Red-eared sliders do not hibernate, but actually brumate; while they become less active, they do occasionally rise to the surface for food or air. Brumation can occur to varying degrees. In the wild, red-eared sliders brumate over the winter at the bottoms of ponds or shallow lakes. They generally become inactive in October, when temperatures fall below . During this time, the turtles enter a state of sopor, during which they do not eat or defecate, they remain nearly motionless, and the frequency of their breathing falls. Individuals usually brumate under water, but they have also been found under banks and rocks, and in hollow stumps. In warmer winter climates, they can become active and come to the surface for basking. When the temperature begins to drop again, however, they quickly return to a brumation state. Sliders generally come up for food in early March to as late as the end of April.
During brumation, T. s. elegans can survive anaerobically for weeks, producing ATP from glycolysis. The turtle's metabolic rate drops dramatically, with heart rate and cardiac output dropping by 80% to minimize energy requirements.
The lactic acid produced is buffered by minerals in the shell, preventing acidosis.
Red-eared sliders kept captive indoors should not brumate.
Reproduction
Courtship and mating activities for red-eared sliders usually occur between March and July, and take place under water. During courtship, the male swims around the female and flutters or vibrates the back side of his long claws on and around her face and head, possibly to direct pheromones towards her. The female swims toward the male, and if she is receptive, sinks to the bottom for mating. If the female is not receptive, she may become aggressive towards the male. Courtship can last 45 minutes, but mating takes only 10 minutes.
On occasion, a male may appear to be courting another male, and when kept in captivity may also show this behaviour towards other household pets. Between male turtles, it could be a sign of dominance and may precede a fight. Young turtles may carry out the courtship dance before they reach sexual maturity at five years of age, but they are unable to mate.

After mating, the female spends extra time basking to keep her eggs warm. She may also change her diet, eating only certain foods, or not eating as much as she normally would. A female can lay between two and 30 eggs depending on body size and other factors. One female can lay up to five clutches in the same year, and clutches are usually spaced 12–36 days apart. The time between mating and egg-laying can be days or weeks. Mating and laying can also occur in conjunction, with eggs laid immediately, depending on location and the nutrients available. The actual fertilization of the eggs takes place during laying. This process also permits the laying of fertile eggs the following season, as the sperm can remain viable and available in the female's body in the absence of mating. During the last weeks of gestation, the female spends less time in the water and smells and scratches at the ground, indicating she is searching for a suitable place to lay her eggs. The female excavates a hole using her hind legs and lays her eggs in it.
Incubation takes 59–112 days. Late-season hatchlings may spend the winter in the nest and emerge when the weather warms in the spring. Just prior to hatching, the egg contains 50% turtle and 50% egg sac. A new hatchling breaks open its egg with its egg tooth, which falls out about an hour after hatching. This egg tooth never grows back. Hatchlings may stay inside their eggshells after hatching for the first day or two. If they are forced to leave the eggshell before they are ready, they will return if possible. When a hatchling decides to leave the shell, it still has a small sac protruding from its plastron. The yolk sac is vital and provides nourishment while visible, and several days later, it will have been absorbed into the turtle's belly. The sac must be absorbed, and does not fall off. The split must heal on its own before the turtle is able to swim. The time between the egg hatching and water entry is 21 days.
Damage to or inordinate motion of the protruding egg yolk – enough to allow air into the turtle's body – results in death. This is the main reason for marking the top of turtle eggs if their relocation is required for any reason. An egg turned upside down will eventually terminate the embryo's growth by the sac smothering the embryo. If it manages to reach term, the turtle will try to flip over with the yolk sac, which would allow air into the body cavity and cause death. The other fatal danger is water getting into the body cavity before the sac is absorbed completely, and while the opening has not completely healed yet.
The sex of red-eared sliders is determined by the incubation temperature during critical phases of the embryos' development. Only males are produced when eggs are incubated at temperatures of , whereas females develop at warmer temperatures. Colder temperatures result in the death of the embryos.
As pets, invasive species, and human infection risk
Red-eared slider turtles are the world's most commonly traded reptile, due to their relatively low price, low cost of food, small size, and easy maintenance. As with other turtles, tortoises, and box turtles, individuals that survive their first year or two can generally be expected to live around 30 years. They present an infection risk, particularly of Salmonella.
Infection risks and United States federal regulations on commercial distribution
Reptiles are asymptomatic carriers (meaning they suffer no adverse side effects) of bacteria of the genus Salmonella. This has given rise to justifiable concern, given the many instances of human infection caused by the handling of turtles, and has led to restrictions on the sale of red-eared sliders in the United States. A 1975 U.S. Food and Drug Administration (FDA) regulation bans the sale (for general commercial and public use) of both turtle eggs and turtles with a carapace length less than . This regulation comes under the Public Health Service Act and is enforced by the FDA in cooperation with state and local health jurisdictions. The ban was enacted because of the public-health impact of turtle-associated salmonellosis. Turtles and turtle eggs found to be offered for sale in violation of this provision are subject to destruction in accordance with FDA procedures. A fine of up to $1,001 and/or imprisonment for up to one year is the penalty for those who refuse to comply with a valid final demand for destruction of such turtles or their eggs. Many stores and flea markets still sell small turtles due to an exception in the FDA regulation that allows turtles under to be sold "for bona fide scientific, educational, or exhibition purposes, other than use as pets." As with many other animals and inanimate objects, the risk of Salmonella exposure can be reduced by following basic rules of cleanliness. Small children must be taught to wash their hands immediately after they finish playing with the turtle, feeding it, or changing its water.
US state laws
Some states have other laws and regulations regarding possession of red-eared sliders because they can be an invasive species where they are not native and have been introduced through the pet trade. It is illegal in Florida to sell any wild-type red-eared slider, as they interbreed with the local yellow-bellied slider population, which is another subspecies of pond sliders, and hybrids typically combine the markings of the two subspecies. However, unusual color varieties such as albino and pastel red-eared sliders, which are derived from captive breeding, are still allowed for sale.
Invasive status in Australia
In Australia, breeding populations have been found in New South Wales and Queensland, and individual turtles have been found in the wild in Victoria, the Australian Capital Territory, and Western Australia.
Red-eared slider turtles are considered a significant threat to native turtle species; they mature more quickly, grow larger, produce more offspring, and are more aggressive. Numerous studies indicate that red-eared slider turtles can out-compete native turtles for food and nesting and basking sites. Unlike the general diet of pet red-eared sliders, wild red-eared sliders are usually omnivorous. Because red-eared slider turtles eat plants as well as animals, they could also have a negative impact on a range of native aquatic species, including rare frogs. Also, a significant risk exists that red-eared slider turtles can transfer diseases and parasites to native reptile species. A malaria-like parasite was spread to two wild turtle populations in Lane Cove River, Sydney.
Social and economic costs are also likely to be substantial. The Queensland government has invested close to AU$1 million in eradication programs to date. The turtle may also cause significant public-health costs due to the impacts of turtle-associated salmonella on human health. Outbreaks in multiple states and fatalities in children, associated with handling Salmonella-infected turtles, have been recorded in the US. Salmonella can also spread to humans when turtles contaminate drinking water.
The actions by state governments have varied considerably to date, ranging from ongoing eradication efforts by the Queensland government to very little action by the government of New South Wales. Experts have ranked the species as high priority for management in Australia, and are calling for a national prevention and eradication strategy, including a concerted education and compliance program to stop the illegal trade, possession, and release of slider turtles.
Invasive status in India
Red-eared slider turtles are threatening to invade the natural water bodies across northeast India, which are home to 21 out of 29 vulnerable native Indian species of freshwater turtle. Between August 2018 and June 2019, a team of herpetologists from the NGO "Help Earth" found red-eared sliders in the Deepor Beel wildlife sanctuary and Ugratara Devalaya temple pond. Further reports have been made from an unnamed stream, feeding into the Tlawng river, on a farm in the Mizoram capital, Aizawl.
In popular culture
Within the second volume of the Tales of the Teenage Mutant Ninja Turtles, the popular comic-book heroes were revealed to be specimens of the red-eared slider. The popularity of the Ninja Turtles, coupled with the release of the first live-action film, led to a craze for keeping them as pets in the United Kingdom, with subsequent ecological havoc as the turtles were accidentally or deliberately released into the wild.
Wave function collapse
In quantum mechanics, wave function collapse, also called reduction of the state vector, occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation.
While standard quantum mechanics postulates wave function collapse to connect quantum to classical models, some extension theories propose physical processes that cause collapse. In-depth study of quantum decoherence suggests that collapse is related to the interaction of a quantum system with its environment.
Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement.
Mathematical description
In quantum mechanics each measurable physical quantity of a quantum system is called an observable, which, for example, could be the position and the momentum , but also the energy , components of spin (), and so on. The observable acts as a linear function on the states of the system; its eigenvectors correspond to the possible quantum states (i.e. eigenstates) and its eigenvalues to the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable. Writing for an eigenstate and for the corresponding observed value, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation:
The kets specify the different available quantum "alternatives", i.e., particular quantum states.
The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable, though the converse is not necessarily true.
Collapse
To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation, abruptly converting an arbitrary state into a single component eigenstate of the observable:
where the arrow represents a measurement of the observable corresponding to the basis.
For any single event, only one eigenvalue is measured, chosen randomly from among the possible values.
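This postulate can be illustrated with a small numerical sketch (a toy model, not from the source; the two-component state and the eigenvalues below are arbitrary choices): one outcome is selected at random with probability given by the squared amplitude, and an immediately repeated measurement of the collapsed state returns the same result.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical state |psi> expanded in the eigenbasis of some observable.
# The coefficients c_i are probability amplitudes; chosen arbitrarily here.
c = np.array([0.6, 0.8j])            # |c1|^2 + |c2|^2 = 0.36 + 0.64 = 1
eigenvalues = np.array([+1.0, -1.0]) # e.g. spin-up / spin-down outcomes

def measure(c, eigenvalues, rng):
    """Simulate one ideal measurement: pick outcome i with probability
    |c_i|^2 (Born rule) and collapse the state onto that eigenstate."""
    probs = np.abs(c) ** 2
    i = rng.choice(len(c), p=probs)
    collapsed = np.zeros_like(c)
    collapsed[i] = 1.0               # the state is now the i-th eigenstate
    return eigenvalues[i], collapsed

value1, c_after = measure(c, eigenvalues, rng)
value2, _ = measure(c_after, eigenvalues, rng)
# The second measurement reproduces the first: the state has collapsed.
assert value1 == value2
```

The key point the sketch makes is that randomness enters only at the first measurement; once the state is an eigenstate, the outcome is certain.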
Meaning of the expansion coefficients
The complex coefficients in the expansion of a quantum state in terms of eigenstates ,
can be written as an (complex) overlap of the corresponding eigenstate and the quantum state:
They are called the probability amplitudes. The square modulus is the probability that a measurement of the observable yields the eigenstate . The sum of the probability over all possible outcomes must be one:
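As a minimal numerical sketch (assuming an arbitrary random Hermitian matrix as the observable, not any specific physical system), the amplitudes can be computed as overlaps of the state with the eigenvectors, and their squared moduli sum to one:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical 3-level system: a random Hermitian matrix as the observable.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (A + A.conj().T) / 2                   # Hermitian => real eigenvalues
eigenvalues, eigenvectors = np.linalg.eigh(A)

# An arbitrary normalized state |psi>.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

# Probability amplitudes c_i = <a_i|psi>; probabilities are |c_i|^2.
c = eigenvectors.conj().T @ psi
probs = np.abs(c) ** 2
print(probs.sum())   # ~1.0: total probability over all outcomes is one
```

Because the eigenvectors of a Hermitian observable form an orthonormal basis, the normalization of the state guarantees the probabilities sum to one.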
As examples, individual counts in a double-slit experiment with electrons appear at random locations on the detector; after many counts are summed, the distribution shows a wave interference pattern. In a Stern–Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but the accumulated totals show equal numbers of events in each area.
This statistical aspect of quantum measurements differs fundamentally from classical mechanics. In quantum mechanics the only information we have about a system is its wave function and measurements of its wave function can only give statistical information.
Terminology
The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description. Reduction of the state vector replaces the full state vector with a single eigenstate of the observable.
The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates also called the "position representation". When the wave function representation is used, the "reduction" is called "wave function collapse".
The measurement problem
The Schrödinger equation describes quantum systems but does not describe their measurement. Solutions to the equation include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes. Despite the widespread quantitative success of these postulates, scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics.
Physical approaches to collapse
Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself".
Various interpretations of quantum mechanics attempt to provide a physical model for collapse. Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories like de Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results from tests of Bell's theorem show that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent histories models. The third group postulates an additional, but as yet undetected, physical basis for the randomness; this group includes for example the objective-collapse interpretations. While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule.
The significance ascribed to the wave function varies from interpretation to interpretation and even within an interpretation (such as the Copenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.
Quantum decoherence
Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.
The form of decoherence known as environment-induced superselection proposes that when a quantum system interacts with its environment, the superpositions apparently reduce to mixtures of classical alternatives. The combined wave function of the system and environment continues to obey the Schrödinger equation throughout this apparent collapse. More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce the state to a single eigenstate.
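A toy dephasing model makes this point concrete (a sketch only; the exponential damping factor is an arbitrary stand-in for the system–environment interaction): the off-diagonal coherences of the density matrix are suppressed, but both classical alternatives remain on the diagonal, so no single outcome is selected.

```python
import numpy as np

# Equal superposition (|0> + |1>)/sqrt(2): a pure state.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
# rho = [[0.5, 0.5],
#        [0.5, 0.5]]  -- the off-diagonals are the quantum coherences

def dephase(rho, gamma):
    """Toy environment interaction: damp the off-diagonal terms by
    exp(-gamma) while leaving the populations (diagonal) untouched."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma)
    out[1, 0] *= np.exp(-gamma)
    return out

rho_mixed = dephase(rho, gamma=50.0)
# Coherences are now ~0, but BOTH classical alternatives survive on the
# diagonal with probability 1/2 each -- no single outcome was selected.
print(np.round(rho_mixed.real, 3))
```

After dephasing, the density matrix is that of a classical coin flip between the two alternatives, which is precisely why decoherence alone cannot account for the selection of a single measured outcome.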
History
The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process. Niels Bohr never mentions wave function collapse in his published work, but he repeatedly cautioned that we must give up a "pictorial representation". Despite the differences between Bohr and Heisenberg, their views are often grouped together as the "Copenhagen interpretation", of which wave function collapse is regarded as a key feature.
John von Neumann's influential 1932 work Mathematical Foundations of Quantum Mechanics took a more formal approach, developing an "ideal" measurement scheme that postulated that there were two processes of wave function change:
The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement (state reduction or collapse).
The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation.
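The contrast between the two processes can be sketched numerically (a toy two-level system; the Hamiltonian and evolution time are arbitrary choices for illustration): unitary evolution preserves the norm of the state vector, while a projective measurement does not.

```python
import numpy as np

# Process 2: deterministic unitary evolution under H = sigma_x.
# For a two-level system, exp(-i*sigma_x*t) = cos(t)*I - i*sin(t)*sigma_x.
t = 0.7
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * sigma_x

psi = np.array([1.0, 0.0])       # start in |0>
psi_t = U @ psi
# Unitary evolution preserves the norm (1.0 up to rounding) and is
# reversible: applying U's conjugate transpose recovers the initial state.
print(np.linalg.norm(psi_t))

# Process 1: projective measurement in the {|0>, |1>} basis.
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # projector onto |0>
projected = P0 @ psi_t
# The projection is non-unitary: it shrinks the norm to the amplitude of
# the selected branch, which must then be renormalized by hand.
print(np.linalg.norm(projected))         # |cos(t)| < 1
collapsed = projected / np.linalg.norm(projected)
```

The need for this explicit renormalization, and the discarding of the other branch, is exactly what distinguishes von Neumann's first process from the second.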
In 1957 Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe. While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical, and why measurements resolve with the observed probabilities of the Born rule.
Beginning in 1970, H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 led eventually to a large number of papers on many aspects of the concept. Decoherence assumes that every quantum system interacts quantum mechanically with its environment and that such interaction is not separable from the system, a concept called an "open system". Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics.
By explicitly dealing with the interaction of object and measuring instrument, von Neumann described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Von Neumann's projection postulate was conceived based on experimental evidence available during the 1930s, in particular Compton scattering. Later work refined the notion of measurements into the more easily discussed first kind, that will give the same value when immediately repeated, and the second kind that give different values when repeated.
Orange (colour)
Orange is the colour between yellow and red on the spectrum of visible light. The human eye perceives orange when observing light with a dominant wavelength between roughly 585 and 620 nanometres. In traditional colour theory, it is a secondary colour of pigments, produced by mixing yellow and red. In the RGB colour model, it is a tertiary colour. It is named after the fruit of the same name.
The orange colour of many fruits and vegetables, such as carrots, pumpkins, sweet potatoes, and oranges, comes from carotenes, a type of photosynthetic pigment. These pigments convert the light energy that the plants absorb from the Sun into chemical energy for the plants' growth. Similarly, the hues of autumn leaves are from the same pigment after chlorophyll is removed.
In Europe and the United States, surveys show that orange is the colour most associated with amusement, the unconventional, extroversion, warmth, fire, energy, activity, danger, taste and aroma, the autumn and Allhallowtide seasons, as well as having long been the national colour of the Netherlands and the House of Orange. It also serves as the political colour of the Christian democracy political ideology and most Christian democratic political parties. In Asia, it is an important symbolic colour in Buddhism and Hinduism.
In nature and culture
Etymology
In English, the colour orange is named after the appearance of the ripe orange fruit. The word comes from the , from the old term for the fruit, . The French word, in turn, comes from the Italian , based on Arabic (), borrowed from Persian (), derived from Sanskrit (), which in turn derives from a Dravidian root word (compare / / which refers to bitter orange in Tamil and Malayalam). The earliest known recorded use of orange as a colour name in English was in 1502, in a description of clothing purchased for Margaret Tudor. Another early recorded use was in 1512, in a will now filed with the Public Record Office. By the 17th century, the fruit and its colour were familiar enough that 'orange-coloured' shifted in use to 'orange' as an adjective. The place name "Orange" has a separate etymology and is not related to that of the colour.
Before this word was introduced to the English-speaking world, saffron already existed in the English language. Crog also referred to the saffron colour, so that orange was also referred to as (yellow-red) for reddish orange, or (yellow-saffron) for yellowish orange. Alternatively, orange things were sometimes described as red (which then had a broader meaning) such as red deer, red hair, the Red Planet and robin redbreast. When orange was infrequently used in heraldry, it was referred to as tawny or brusk.
History and art
In ancient Egypt, and ancient India, artists used an orange colour on some of their items. In Egypt, a mineral pigment called realgar was used for tomb paintings, as well as for other purposes. Orange carnelians were significantly used during the Indus Valley Civilisation which was, in turn, obtained by the people of Kutch, Gujarat, India. The colour was also used later by medieval artists for the colouring of manuscripts. Pigments were also made in ancient times from a mineral known as orpiment. Orpiment was an important item of trade in the Roman Empire and was used as a medicine in ancient China although it contains arsenic and is highly toxic. It was also used as a fly poison and to poison arrows. Because of its yellow-orange colour, it was also a favourite with alchemists who were searching for a way to make gold, both in China and in the West.
Before the late 15th century, the colour orange existed in Europe, but without the name; it was simply called yellow-red. Portuguese merchants brought the first orange trees to Europe from Asia in the late 15th and early 16th century, along with the Sanskrit word , which gradually became part of several European languages: in Spanish, in Portuguese, and orange in English & French. In mid-16th century England, the colour referred to as 'orange' was a reddish-brown, matching the deteriorated appearance of the fruit after a long journey from where it was grown in Portugal or Spain. Improvements in transportation and the introduction of an orange grove in Surrey allowed the fresh fruit to become more familiar in England, and the colour referred to as orange shifted in the 17th century toward its modern understanding.
House of Orange
The House of Orange-Nassau was one of the most influential royal houses in Europe in the 16th and 17th centuries. It originated in 1163 in the tiny Principality of Orange, a feudal state north of Avignon in southern France. The Principality of Orange took its name not from the fruit, but from a Roman-Celtic settlement on the site which was founded in 36 or 35 BC and was named after the Celtic water god Arausio; however, the name may have been slightly altered, and the town associated with the colour, because it was on the route by which quantities of oranges were brought from southern ports such as Marseille to northern France.
The family of the Prince of Orange eventually adopted the name and the colour orange in the 1570s. The colour came to be associated with Protestantism, due to participation by the House of Orange on the Protestant side in the French Wars of Religion. One member of the house, William I of Orange, led the resistance against Spain in the Eighty Years' War, which lasted until the Netherlands won its independence. The House's arguably most prominent member, William III of Orange, became King of England in 1689, after the downfall of the Catholic James II in the Glorious Revolution.
Due to William III, orange became an important political colour in Britain and Europe. William was a Protestant, and as such, he defended the Protestant minority of Ireland against the majority Roman Catholic population. As a result, the Protestants of Ireland were known as Orangemen. Orange eventually became one of the colours of the Irish flag, symbolising the Protestant heritage. His orange-white-and-blue rebel flag became the forerunner of The Netherlands' modern flag.
When the Dutch settlers living in the Cape Colony (now part of South Africa) migrated into the Southern African heartlands in the 19th century, they founded what they called the Orange Free State. In the United States, the flag of New York City has an orange stripe, to remember the Dutch colonists who founded the city. William of Orange is also remembered as the founder of the College of William & Mary, and Nassau County, New York is named after the House of Orange-Nassau.
18th and 19th century
In the 18th century, orange was sometimes used to depict the robes of Pomona, the goddess of fruitful abundance; her name came from the , the Latin word for fruit. Oranges themselves became more common in northern Europe, thanks to the 17th-century invention of the heated greenhouse, a building type which became known as an orangerie. The French artist Jean-Honoré Fragonard depicted an allegorical figure of inspiration dressed in orange.
In 1797 a French scientist Louis Vauquelin discovered the mineral crocoite, or lead chromate, which led in 1809 to the invention of the synthetic pigment chrome orange. Other synthetic pigments, cobalt red, cobalt yellow, and cobalt orange, the last made from cadmium sulfide plus cadmium selenide, soon followed. These new pigments, plus the invention of the metal paint tube in 1841, made it possible for artists to paint outdoors and to capture the colours of natural light.
In Britain, orange became highly popular with the Pre-Raphaelites and with history painters. The flowing red-orange hair of Elizabeth Siddal, a prolific model and the wife of painter Dante Gabriel Rossetti, became a symbol of the Pre-Raphaelite movement. Lord Leighton, the president of the Royal Academy, produced Flaming June, a painting of a sleeping young woman in a bright orange dress, which won wide acclaim. Albert Joseph Moore painted festive scenes of Romans wearing orange cloaks brighter than any of the Romans ever likely wore. In the United States, Winslow Homer brightened his palette with vivid oranges.
In France, painters took orange in an entirely different direction. In 1872 Claude Monet painted Impression, Sunrise, a tiny orange sun and some orange light reflected on the clouds and water in the centre of a hazy blue landscape. This painting gave its name to the Impressionist movement.
Orange became an important colour for all the Impressionist painters. They had all studied the recent books on colour theory, and they knew that orange placed next to azure blue made both colours much brighter. Auguste Renoir painted boats with stripes of chrome orange paint straight from the tube. Paul Cézanne did not use orange pigment, but produced his own oranges with touches of yellow, red and ochre against a blue background. Toulouse-Lautrec often used oranges in the skirts of dancers and gowns of Parisiennes in the cafes and clubs he portrayed. For him, it was the colour of festivity and amusement.
The Post-Impressionists went even further with orange. Paul Gauguin used oranges as backgrounds, for clothing and skin colour, to fill his pictures with light and exoticism. But no other painter used orange so often and dramatically as Vincent van Gogh, who had shared a house with Gauguin in Arles for a time. For Van Gogh, orange and yellow were the pure sunlight of Provence. He produced his own oranges with mixtures of yellow, ochre and red, and placed them next to slashes of sienna red and bottle green, and below a sky of turbulent blue and violet. He put an orange moon and stars in a cobalt blue sky. He wrote to his brother Theo of searching for oppositions of blue with orange, of red with green, of yellow with violet, searching for broken colours and neutral colours to harmonize the brutality of extremes, trying to make the colours intense, and not a harmony of greys.
20th and 21st centuries
In the 20th and 21st centuries, the colour orange had highly varied associations, both positive and negative.
The high visibility of orange made it a popular colour for certain kinds of clothing and equipment. During World War II, US Navy pilots in the Pacific began to wear orange inflatable life jackets, which could be spotted by search and rescue planes. After the war, these jackets became common on both civilian and naval vessels of all sizes, and on aircraft flown over water. Orange is also widely worn (to avoid being hit) by workers on highways and by cyclists.
A herbicide called Agent Orange was widely sprayed from aircraft by the Royal Air Force during the Malayan Emergency and the US Air Force during the Vietnam War to remove the forest and jungle cover beneath which enemy combatants were believed to be hiding, and to expose their supply routes. The chemical was not actually orange, but took its name from the colour of the steel drums in which it was stored. Agent Orange was toxic, and was later linked to birth defects and other health problems.
Orange also had and continues to have a political dimension. Orange serves as the colour of Christian democratic political ideology, which is based on Catholic social teaching and Neo-Calvinist theology; Christian democratic political parties came to prominence in Europe and the Americas after World War II.
In Ukraine in November–December 2004, it became the colour of the Orange Revolution, a popular movement which carried activist and reformer Viktor Yushchenko into the presidency. In parts of the world, especially Northern Ireland, the colour is associated with the Orange Order, a Protestant fraternal organisation and relatedly, Orangemen, marches and other social and political activities, with the colour orange being associated with Protestantism similar to the Netherlands.
Science
Optics
In optics, orange is the colour seen by the eye when looking at light with a wavelength between approximately 585 and 620 nm. It has a hue of 30° in HSV colour space. Isaac Newton's Opticks distinguished between pure orange light and mixtures of red and yellow light by noting that mixtures could be separated using a prism.
In the traditional colour wheel used by painters, orange is the range of colours between red and yellow, and painters can obtain orange simply by mixing red and yellow in various proportions; however these colours are never as vivid as a pure orange pigment. In the RGB colour model (the system used to display colours on a television or computer screen), orange is generated by combining high intensity red light with a lower intensity green light, with the blue light turned off entirely. Orange is a tertiary colour which is numerically halfway between gamma-compressed red and yellow, as can be seen in the RGB colour wheel.
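The HSV and RGB descriptions above can be checked with Python's standard colorsys module: converting a hue of 30° at full saturation and value back to RGB yields full-intensity red, half-intensity green, and no blue. This is only an illustrative sketch of the colour-space relationship described in the text.

```python
import colorsys

# Orange sits at a hue of 30 degrees in HSV colour space.
# colorsys expects hue in the range [0, 1], so 30 degrees -> 30/360.
r, g, b = colorsys.hsv_to_rgb(30 / 360, 1.0, 1.0)

# Full-intensity red, about half-intensity green, no blue,
# matching the RGB description of orange in the text.
print(round(r, 3), round(g, 3), round(b, 3))  # 1.0 0.5 0.0
```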
Regarding painting, blue is the complementary colour to orange. As many painters of the 19th century discovered, blue and orange reinforce each other. The painter Vincent van Gogh wrote to his brother Theo that in his paintings, he was trying to reveal "the oppositions of blue with orange, of red with green, of yellow with violet... trying to make the colours intense and not a harmony of grey". In another letter he wrote simply, "There is no orange without blue." Van Gogh, Pierre-Auguste Renoir and many other Impressionist and Post-Impressionist painters frequently placed orange against azure or cobalt blue, to make both colours appear brighter.
The actual complement of orange is azure – a colour that is one quarter of the way between blue and green on the colour spectrum. The actual complementary colour of true blue is yellow. Orange pigments are largely in the ochre or cadmium families, and absorb mostly greenish-blue light.
Pigments and dyes
Other orange pigments include:
Minium and massicot are bright yellow and orange pigments made since ancient times by heating lead oxide and its variants. Minium was used in the Byzantine Empire for making the red-orange colour on illuminated manuscripts, while massicot was used by ancient Egyptian scribes and in the Middle Ages. Both substances are toxic, and were replaced at the beginning of the 20th century by chrome orange and cadmium orange.
Cadmium orange is a synthetic pigment made from cadmium sulphide. It is a by-product of mining for zinc, but also occurs rarely in nature in the mineral greenockite. It is usually made by replacing some of the sulphur with selenium, which results in an expensive but deep and lasting colour. Selenium was discovered in 1817, but the pigment was not made commercially until 1910.
Quinacridone orange is a synthetic organic pigment first identified in 1896 and manufactured in 1935. It makes a vivid and solid orange.
Diketopyrrolopyrrole orange or DPP orange is a synthetic organic pigment first commercialised in 1986. It is sold under various commercial names, such as translucent orange. It makes an extremely bright and lasting orange, and is widely used to colour plastics and fibres, as well as in paints.
Orange natural objects
The orange colour of carrots, pumpkins, sweet potatoes, oranges, and many other fruits and vegetables comes from carotenes, a type of photosynthetic pigment. These pigments convert the light energy that the plants absorb from the sun into chemical energy for the plants' growth. The carotenes themselves take their name from the carrot. Autumn leaves also get their orange colour from carotenes. When the weather turns cold and production of green chlorophyll stops, the orange colour remains.
Before the 18th century, carrots from Asia were usually purple, while those in Europe were either white or red. Dutch farmers bred a variety that was orange; according to some sources, as a tribute to the stadtholder of Holland and Zeeland, William of Orange. The long orange Dutch carrot, first described in 1721, is the ancestor of the orange horn carrot, one of the most common types found in supermarkets today. It takes its name from the town of Hoorn, in the Netherlands.
Flowers
Orange is traditionally associated with the autumn season, with the harvest and autumn leaves. The flowers, like orange fruits and vegetables and autumn leaves, get their colour from the photosynthetic pigments called carotenes.
Animals
Foods
Orange is a very common colour of fruits, vegetables, spices, and other foods in many different cultures. As a result, orange is the colour most often associated in western culture with taste and aroma. Orange foods include peaches, apricots, mangoes, carrots, shrimp, salmon roe, and many other foods. Orange colour is provided by spices such as paprika, saffron and curry powder. In North America, with Halloween on 31 October and Thanksgiving in October (Canada) and November (US), orange is associated with the harvest, and is also the colour of the carved pumpkins, or jack-o-lanterns, used to celebrate those holidays.
Food colourings
People associate certain colours with certain flavours, and the colour of food can influence the perceived flavour in anything from candy to wine. Since orange is popularly associated with good flavour, many companies add orange food colouring to improve the appearance of their packaged foods. Orange pigments and dyes, synthetic or natural, are added to many orange sodas and juices, cheeses (particularly cheddar cheese, Gloucester cheese, and American cheese); snack foods, butter and margarine; breakfast cereals, ice cream, yoghurt, jam and candy. It is also often added to children's medicine, and to chicken feed to make the egg yolks more orange.
The United States Government and the European Union certify a small number of synthetic chemical colourings to be used in food. These are usually aromatic hydrocarbons, or azo dyes, made from petroleum. The most common ones are:
Allura red AC, also known as Red 40 and E129.
Sunset Yellow FCF, also known as Yellow 6 and E110.
Tartrazine, also known as Yellow 5 and E102. A dye used in soft drinks such as Mountain Dew, Kool-Aid, chewing gum, popcorn, breakfast cereals, cosmetics, shampoos, eyeshadow, blush, and lipstick.
Orange B is approved by the US Food and Drug Administration, but only for hot dog and sausage casings.
Citrus Red 2 is certified only to colour orange peels.
Because many consumers are worried about possible health consequences of synthetic dyes, some companies are beginning to use natural food colours. Since these food colours are natural, they do not require any certification from the Food and Drug Administration. The most popular natural food colours are:
Annatto, made from the seeds of the achiote tree. Annatto contains carotenoids, the same ingredient that gives carrots and other vegetables their orange colour. Annatto has been used to dye certain cheeses in Britain, particularly Gloucester cheese, since the 16th century. It is now commonly used to colour American cheese, snack foods, breakfast cereal, butter, and margarine. It is used as a body paint by native populations in Central and South America. In India, women often put it, under the name sindūra, on their hairline to indicate that they are married.
Turmeric is a common spice in the Indian subcontinent, Persia and the Mideast. It contains the pigments called curcuminoids, widely used as a dye for the robes of Buddhist monks. It is also often used in curry powders and to give flavour to mustard. It is now being used more frequently in Europe and the US to give an orange colour to canned beverages, ice cream, yogurt, popcorn and breakfast cereal. The food colour is usually listed as E100.
Paprika oleoresin contains natural carotenoids, and is made from chili peppers. It is used to colour cheese, orange juice, spice mixtures and packaged sauces. It is also fed to chickens to make their egg yolks more orange.
Culture, associations and symbolism
Confucianism
In Confucianism, the religion and philosophy of ancient China, orange was the colour of transformation. In China and India, the colour took its name not from the orange fruit, but from saffron, the finest and most expensive dye in Asia. According to Confucianism, existence was governed by the interaction of the male active principle, the yang, and the female passive principle, the yin. Yellow was the colour of perfection and nobility; red was the colour of happiness and power. Yellow and red were compared to light and fire, spirituality and sensuality, seemingly opposite but really complementary. Out of the interaction between the two came orange, the colour of transformation.
Hinduism and Buddhism
A wide variety of colours, ranging from a slightly orange yellow to a deep orange red, all simply called saffron, are closely associated with Hinduism and Buddhism, and are commonly worn by monks and holy men across Asia.
In Hinduism, the divinity Krishna is commonly portrayed dressed in yellow or yellow orange. Yellow and saffron are also the colours worn by sadhu, or wandering pious men in India.
In Buddhism, orange (or more precisely saffron) was the colour of illumination, the highest state of perfection. The saffron colours of robes to be worn by monks were defined by the Buddhist texts. The robe and its colour is a sign of renunciation of the outside world and commitment to the order. The candidate monk, with his master, first appears before the monks of the monastery in his own clothes, with his new robe under his arm and asks to enter the order. He then takes his vows, puts on the robes, and with his begging bowl, goes out to the world. Thereafter, he spends his mornings begging and his afternoons in contemplation and study, either in a forest, garden, or in the monastery.
According to Buddhist scriptures and commentaries, the robe dye is allowed to be obtained from six kinds of substances: roots and tubers, plants, bark, leaves, flowers and fruits. The robes should also be boiled in water for a long time to get the correctly sober colour. Saffron and ochre, usually made with dye from the curcuma longa plant or the heartwood of the jackfruit tree, are the most common colours. The so-called forest monks usually wear ochre robes and city monks saffron, though this is not an official rule.
The colour of robes also varies somewhat among the different vehicles (schools) of Buddhism, and by country, depending on their doctrines and the dyes available. The monks of the strict Vajrayana, or Tantric Buddhism, practised in Tibet, wear the most colourful robes of saffron and red. The monks of Mahayana Buddhism, practised mainly in Japan, China and Korea, wear lighter yellow or saffron, often with white or black. Monks of Theravada Buddhism, practised in Southeast Asia, usually wear ochre or saffron colour. Monks of the forest tradition in Thailand and other parts of Southeast Asia wear robes of a brownish ochre, dyed from the wood of the jackfruit tree.
Colour of amusement
In Europe and America orange and yellow are the colours most associated with amusement, frivolity and entertainment. In this regard, orange is the exact opposite of its complementary colour, blue, the colour of calm and reflection. Mythological paintings traditionally showed Bacchus (known in Greek mythology as Dionysus), the god of wine, ritual madness and ecstasy, dressed in orange. Clowns have long worn orange wigs. Toulouse-Lautrec used a palette of yellow, black and orange in his posters of Paris cafes and theatres, and Henri Matisse used an orange, yellow and red palette in his painting, the Joy of Living.
Colour of visibility and warning
Orange is the colour most easily seen in dim light or against the water, making it, particularly the shade known as safety orange, the colour of choice for life rafts, life jackets or buoys. Highway temporary signs about construction or detours in the United States are orange, because of its visibility and its association with danger.
It is worn by people wanting to be seen, including highway workers and lifeguards. Prisoners are also sometimes dressed in orange clothing to make them easier to see during an escape. Lifeguards on the beaches of Los Angeles County, both real and in television series, wear orange swimsuits to make them stand out. Orange astronaut suits have the highest visibility in space, or against blue sea. An aircraft's two types of "black box", or flight data recorder and cockpit voice recorder, are actually bright orange, so they can be found more easily. In some cars, connectors related to safety systems, such as the airbag, may be coloured orange.
The Golden Gate Bridge at the entrance of San Francisco Bay is painted international orange to make it more visible in the fog. Next to red, it is the colour most popular for extroverts, and as a symbol of activity.
Orange is sometimes used, like red and yellow, as a colour warning of possible danger or calling for caution. A skull against an orange background means a toxic substance or poison.
In the colour system devised by the US Department of Homeland Security to measure the threat of terrorist attack, an orange level is second only to a red level. The US Manual on Uniform Traffic Control Devices specifies orange for use in temporary and construction signage.
Academia
In the United States and Canada, orange regalia is associated with the field of engineering.
Syracuse University, Princeton University, Occidental College, St. John's College, and the University of Tennessee also use orange as a main colour.
Selected flags
Geography
Orange is the national colour of the Netherlands. The royal family, the House of Orange-Nassau, derives its name in part from its former holding, the principality of Orange. (The title Prince of Orange is still used for the Dutch heir apparent.)
The Republic of the Orange Free State was an independent Boer republic in southern Africa during the second half of the 19th century, and later a British colony and a province of the Union of South Africa. It is the historical precursor to the present-day Free State province. Extending between the Orange and Vaal rivers, its borders were determined by the United Kingdom in 1848 when the region was proclaimed as the Orange River Sovereignty, with a seat of a British Resident in Bloemfontein.
Oranjemund (German for 'Mouth of Oranje') is a town situated in the extreme southwest of Namibia, on the northern bank of the Orange River mouth.
Contemporary political and social movements
Because of its symbolic association with activity, orange is often used as the colour of political and social movements.
Christian democratic political ideology and political parties, which are based on Catholic social teaching and Neo-Calvinist theology.
The Orange Institution is a pro-British Protestant association based in Northern Ireland.
Orange was the rallying colour of the 2004–2005 Orange Revolution in Ukraine.
Orange was the colour used by the historical Liberal Party of the United Kingdom
On 14 September 2017 North America's United Synagogue of Conservative Judaism began to use orange as part of a rebranding effort.
Orange was used as a rallying colour by Israelis (such as Jewish settlers) who opposed Israel's unilateral disengagement plan in the Gaza Strip and the West Bank in 2005.
Orange ribbons are used to promote awareness and prevention of self-injury.
Orange is also used in the ribbon of the Order of St. George in Russia.
Orange is the party colour of several Christian democratic political parties, as well as others:
Alliance for the Future of Austria (BZÖ)
American Solidarity Party (ASP), United States
Bharatiya Janata Party (BJP), India
Christian Democrats, Denmark
Christian Democratic and Flemish (CD&V), Belgium
Humanist Democratic Centre (CDH), Belgium
Christian Social Party (CSP), Belgium
Christian Democratic People's Party, Switzerland
Christian Democratic Union, Germany
Christian Social People's Party, Luxembourg
Citizens-Party of the Citizenry, Spain
Czech Social Democratic Party
Democratic Liberal Party, Romania
Democratic Movement, France
Free Patriotic Movement, Lebanon
Workers' Liberation Front 30th of May (Frente Obrero), Curaçao
Fidesz – Hungarian Civic Union
Independence Party of Minnesota
Justice and Truth Alliance, Romania
Move Forward Party, Thailand
Nacionalista Party, Philippines
National Union, Israel
New Democratic Party, Canada
The NDP's unexpected sweep of seats in Quebec and its consequent rise to official opposition in the 2011 federal election became known as the "Orange Wave" or "Orange Crush".
Orange Democratic Movement, Kenya
Orange Movement, Italy
Our Ukraine–People's Self-Defense Bloc
Palikot's Movement, Poland
People's National Party, Jamaica
People First Party, Republic of China (Taiwan)
PORA, Ukraine
Pwersa ng Masang Pilipino, Philippines
Québec solidaire, Canada
Reformed Political Party, Netherlands
Republican Left of Catalonia, Spain
Shiv Sena, India
Social Democratic Party, Portugal
United Nationalist Alliance, Philippines
Valencian Nationalist Bloc-Coalició Compromís, Spain
Zares, Slovenia
Religion
Orange, or more specifically deep saffron, is the most sacred colour of Hinduism.
Hindu and Sikh flags atop mandirs and gurdwaras, respectively, are typically a saffron-coloured pennant.
Saffron robes are often worn by Hindu swamis and also by Buddhist monks in the Theravada tradition.
In Paganism, orange represents energy, attraction, vitality, and stimulation. It can help with adapting, encouragement, and power.
Metaphysics and occultism
In the system called the Seven Rays devised by the "New Age Prophetess" Alice Bailey, which classifies humans into seven different metaphysical psychological types, the "fifth ray" of "Concrete Science" is represented by the colour orange. People who have this metaphysical psychological type are said to be "on the Orange Ray".
Orange is used to symbolically represent the second (Swadhisthana) chakra.
In alchemy, orpiment – a contraction of the Latin words for gold (aurum) and pigment (pigmentum) – was believed to be a key ingredient in the creation of the Philosopher's Stone.
Military
In the United States Army, orange has traditionally been associated with the dragoons, the mounted infantry units which eventually became the US Cavalry. The 1st Cavalry Regiment was founded in 1833 as the United States Dragoons. The modern coat of arms of the 1st Cavalry features the colour orange and orange-yellow shade called dragoon yellow, the colours of the early US dragoon regiments.
The US Signal Corps, founded at the beginning of the American Civil War, adopted orange and white as its official colours in 1872. Orange was adopted because it was the colour of a signal fire, historically used at night while smoke was used during the day, to communicate with distant army units.
Prior to and during the Napoleonic Wars a pale shade of orange known as aurore ("dawn") was adopted as the facing colour of several cavalry regiments in the French army. The colour resembled that of the early rising sun.
In the Royal Netherlands Air Force, aircraft may have a roundel with an orange dot in the middle, surrounded by three circular sectors in red, white, and blue.
In the Indonesian Air Force, the air force infantry and special forces corps known as Paskhas uses orange as its beret colour.
Sports
Astrochemistry

Astrochemistry is the study of the abundance and reactions of molecules in the universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form.
History
As an offshoot of the disciplines of astronomy and chemistry, the history of astrochemistry is founded upon the shared history of the two fields. The development of advanced observational and experimental spectroscopy has allowed for the detection of an ever-increasing array of molecules within solar systems and the surrounding interstellar medium. In turn, the increasing number of chemicals discovered by advancements in spectroscopy and other technologies have increased the size and scale of the chemical space available for astrochemical study.
History of spectroscopy
Observations of solar spectra as performed by Athanasius Kircher (1646), Jan Marek Marci (1648), Robert Boyle (1664), and Francesco Maria Grimaldi (1665) all predated Newton's 1666 work which established the spectral nature of light and resulted in the first spectroscope. Spectroscopy was first used as an astronomical technique in 1802 with the experiments of William Hyde Wollaston, who built a spectrometer to observe the spectral lines present within solar radiation. These spectral lines were later quantified through the work of Joseph von Fraunhofer.
Spectroscopy was first used to distinguish between different materials after the release of Charles Wheatstone's 1835 report that the sparks given off by different metals have distinct emission spectra. This observation was later built upon by Léon Foucault, who demonstrated in 1849 that identical absorption and emission lines result from the same material at different temperatures. An equivalent statement was independently postulated by Anders Jonas Ångström in his 1853 work Optiska Undersökningar, where it was theorized that luminous gases emit rays of light at the same frequencies as light which they may absorb.
This spectroscopic data began to take on theoretical importance with Johann Balmer's observation that the spectral lines exhibited by samples of hydrogen followed a simple empirical relationship which came to be known as the Balmer series. This series, a special case of the more general Rydberg formula developed by Johannes Rydberg in 1888, was created to describe the spectral lines observed for hydrogen. Rydberg's work expanded upon this formula by allowing for the calculation of spectral lines for multiple different chemical elements. The theoretical importance granted to these spectroscopic results grew greatly with the development of quantum mechanics, as the theory allowed these results to be compared to atomic and molecular emission spectra which had been calculated a priori.
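As a worked illustration (not part of the original text), the Balmer series follows from the Rydberg formula 1/λ = R_H(1/2² − 1/n²), with n > 2; a short Python sketch using the standard hydrogen Rydberg constant reproduces the familiar visible hydrogen lines:

```python
# Rydberg formula for hydrogen: 1/wavelength = R_H * (1/n1^2 - 1/n2^2).
# The Balmer series is the special case n1 = 2.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m


def balmer_wavelength_nm(n: int) -> float:
    """Vacuum wavelength (nm) of the Balmer line for the transition n -> 2."""
    inverse_wavelength = R_H * (1 / 2**2 - 1 / n**2)
    return 1e9 / inverse_wavelength


# n = 3 gives the red H-alpha line near 656 nm;
# n = 4 gives the blue-green H-beta line near 486 nm.
print(round(balmer_wavelength_nm(3), 1))  # 656.5
print(round(balmer_wavelength_nm(4), 1))  # 486.3
```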
History of astrochemistry
While radio astronomy was developed in the 1930s, it was not until 1937 that any substantial evidence arose for the conclusive identification of an interstellar molecule – up until this point, the only chemical species known to exist in interstellar space were atomic. These findings were confirmed in 1940, when McKellar et al. identified and attributed spectroscopic lines in an as-of-then unidentified radio observation to CH and CN molecules in interstellar space. In the thirty years afterwards, a small selection of other molecules were discovered in interstellar space: the most important being OH, discovered in 1963 and significant as a source of interstellar oxygen, and H2CO (formaldehyde), discovered in 1969 and significant for being the first observed organic, polyatomic molecule in interstellar space.
The discovery of interstellar formaldehyde – and later, other molecules with potential biological significance, such as water or carbon monoxide – is seen by some as strong supporting evidence for abiogenetic theories of life: specifically, theories which hold that the basic molecular components of life came from extraterrestrial sources. This has prompted a still ongoing search for interstellar molecules which are either of direct biological importance – such as interstellar glycine, discovered in a comet within our solar system in 2009 – or which exhibit biologically relevant properties like chirality – an example of which (propylene oxide) was discovered in 2016 – alongside more basic astrochemical research.
Spectroscopy
One particularly important experimental tool in astrochemistry is spectroscopy through the use of telescopes to measure the absorption and emission of light from molecules and atoms in various environments. By comparing astronomical observations with laboratory measurements, astrochemists can infer the elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. This is possible because ions, atoms, and molecules have characteristic spectra: that is, the absorption and emission of certain wavelengths (colors) of light, often not visible to the human eye. However, these measurements have limitations, with various types of radiation (radio, infrared, visible, ultraviolet etc.) able to detect only certain types of species, depending on the chemical properties of the molecules. Interstellar formaldehyde was the first organic molecule detected in the interstellar medium.
Perhaps the most powerful technique for detection of individual chemical species is radio astronomy, which has resulted in the detection of over a hundred interstellar species, including radicals and ions, and organic (i.e. carbon-based) compounds, such as alcohols, acids, aldehydes, and ketones. One of the most abundant interstellar molecules, and among the easiest to detect with radio waves (due to its strong electric dipole moment), is CO (carbon monoxide). In fact, CO is such a common interstellar molecule that it is used to map out molecular regions. The radio observation of perhaps greatest human interest is the claim of interstellar glycine, the simplest amino acid, but with considerable accompanying controversy. One of the reasons why this detection was controversial is that although radio (and some other methods like rotational spectroscopy) are good for the identification of simple species with large dipole moments, they are less sensitive to more complex molecules, even something relatively small like amino acids.
Moreover, such methods are completely blind to molecules that have no dipole. For example, by far the most common molecule in the universe is H2 (hydrogen gas, or more precisely dihydrogen), but it does not have a dipole moment, so it is invisible to radio telescopes. Moreover, such methods cannot detect species that are not in the gas phase. Since dense molecular clouds are very cold, most molecules in them (other than dihydrogen) are frozen, i.e. solid. Instead, dihydrogen and these other molecules are detected using other wavelengths of light. Dihydrogen is easily detected in the ultraviolet (UV) and visible ranges from its absorption and emission of light (the hydrogen line). Moreover, most organic compounds absorb and emit light in the infrared (IR), so, for example, the detection of methane in the atmosphere of Mars was achieved using an IR ground-based telescope, NASA's 3-meter Infrared Telescope Facility atop Mauna Kea, Hawaii. NASA researchers use the airborne IR telescope SOFIA and the Spitzer space telescope for their observations, research, and scientific operations. In work somewhat related to the recent detection of methane in the atmosphere of Mars, Christopher Oze, of the University of Canterbury in New Zealand, and his colleagues reported in June 2012 that measuring the ratio of dihydrogen and methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active." Other scientists have recently reported methods of detecting dihydrogen and methane in extraterrestrial atmospheres.
Infrared astronomy has also revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polyaromatic hydrocarbons, often abbreviated PAHs or PACs. These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the Galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium and isotopes of carbon, nitrogen, and oxygen that are very rare on Earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying, carbon-rich red giant stars).
Infrared astronomy has also been used to assess the composition of solid materials in the interstellar medium, including silicates, kerogen-like carbon-rich solids, and ices. This is because unlike visible light, which is scattered or absorbed by solid particles, the IR radiation can pass through the microscopic interstellar particles, but in the process there are absorptions at certain wavelengths that are characteristic of the composition of the grains. As above with radio astronomy, there are certain limitations, e.g. N2 is difficult to detect by either IR or radio astronomy.
Such IR observations have determined that in dense clouds (where there are enough particles to attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-temperature chemistry to occur. Since dihydrogen is by far the most abundant molecule in the universe, the initial chemistry of these ices is determined by the chemistry of the hydrogen. If the hydrogen is atomic, then the H atoms react with available O, C and N atoms, producing "reduced" species like H2O, CH4, and NH3. However, if the hydrogen is molecular and thus not reactive, this permits the heavier atoms to react or remain bonded together, producing CO, CO2, CN, etc. These mixed-molecular ices are exposed to ultraviolet radiation and cosmic rays, which results in complex radiation-driven chemistry. Lab experiments on the photochemistry of simple interstellar ices have produced amino acids. The similarity between interstellar and cometary ices (as well as comparisons of gas phase compounds) have been invoked as indicators of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula.
Research
Research is progressing on the way in which interstellar and circumstellar molecules form and interact, e.g. by including non-trivial quantum mechanical phenomena for synthesis pathways on interstellar particles. This research could have a profound impact on our understanding of the suite of molecules that were present in the molecular cloud when our solar system formed, which contributed to the rich carbon chemistry of comets and asteroids and hence the meteorites and interstellar dust particles which fall to the Earth by the ton every day.
The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on Earth can be highly abundant in space, for example the H3+ ion.
Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, as well as the structure of stellar interiors. If a star develops a largely convective envelope, dredge-up events can occur, bringing the products of nuclear burning to the surface. If the star is experiencing significant mass loss, the expelled material may contain molecules whose rotational and vibrational spectral transitions can be observed with radio and infrared telescopes. An interesting example of this is the set of carbon stars with silicate and water-ice outer envelopes. Molecular spectroscopy allows us to see these stars transitioning from an original composition in which oxygen was more abundant than carbon, to a carbon star phase where the carbon produced by helium burning is brought to the surface by deep convection, and dramatically changes the molecular content of the stellar wind.
In October 2011, scientists reported that cosmic dust contains organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars.
On August 29, 2012, astronomers at Copenhagen University reported the first detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422. Glycolaldehyde is needed to form ribonucleic acid (RNA), which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.
In September, 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics – "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."
In February 2014, NASA announced the creation of an improved spectral database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
On August 11, 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).
To study the resources of chemical elements and molecules in the universe, Professor M. Yu. Dolomatov developed a mathematical model of the distribution of molecular composition in the interstellar medium based on thermodynamic potentials, using methods of probability theory, mathematical and physical statistics, and equilibrium thermodynamics. Based on this model, the resources of life-related molecules, amino acids, and nitrogenous bases in the interstellar medium have been estimated, and the possibility of forming petroleum-hydrocarbon molecules is shown. These calculations support the hypotheses of Sokolov and Hoyle that petroleum hydrocarbons could form in space, and the results are reported to be consistent with data from astrophysical observations and space research.
In July 2015, scientists reported that upon the first touchdown of the Philae lander on the surface of comet 67P, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds, four of which were seen for the first time on a comet, including acetamide, acetone, methyl isocyanate and propionaldehyde.
In December 2023, astronomers reported the first detection, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be fully identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."
Lymphoma

Lymphoma is a group of blood and lymph tumors that develop from lymphocytes (a type of white blood cell). The name typically refers to just the cancerous versions rather than all such tumours. Signs and symptoms may include enlarged lymph nodes, fever, drenching sweats, unintended weight loss, itching, and constantly feeling tired. The enlarged lymph nodes are usually painless. The sweats are most common at night.
Many subtypes of lymphomas are known. The two main categories of lymphomas are the non-Hodgkin lymphoma (NHL) (90% of cases) and Hodgkin lymphoma (HL) (10%). Lymphomas, leukemias and myelomas are a part of the broader group of tumors of the hematopoietic and lymphoid tissues.
Risk factors for Hodgkin lymphoma include infection with Epstein–Barr virus and a history of the disease in the family. Risk factors for common types of non-Hodgkin lymphomas include autoimmune diseases, HIV/AIDS, infection with human T-lymphotropic virus, immunosuppressant medications, and some pesticides. Eating large amounts of red meat and tobacco smoking may also increase the risk. Diagnosis, if enlarged lymph nodes are present, is usually by lymph node biopsy. Blood, urine, and bone marrow testing may also be useful in the diagnosis. Medical imaging may then be done to determine if and where the cancer has spread. Lymphoma most often spreads to the lungs, liver, and brain.
Treatment may involve one or more of the following: chemotherapy, radiation therapy, proton therapy, targeted therapy, and surgery. In some non-Hodgkin lymphomas, an increased amount of protein produced by the lymphoma cells causes the blood to become so thick that plasmapheresis is performed to remove the protein. Watchful waiting may be appropriate for certain types. The outcome depends on the subtype, with some being curable and treatment prolonging survival in most. The five-year survival rate in the United States for all Hodgkin lymphoma subtypes is 85%, while that for non-Hodgkin lymphomas is 69%. Worldwide, lymphomas developed in 566,000 people in 2012 and caused 305,000 deaths. They make up 3–4% of all cancers, making them as a group the seventh-most-common form. In children, they are the third-most-common cancer. They occur more often in the developed world than in the developing world.
Signs and symptoms
Lymphoma may present with certain nonspecific symptoms; if the symptoms are persistent, an evaluation to determine their cause, including possible lymphoma, should be undertaken.
Lymphadenopathy, or swelling of lymph nodes, is the primary presentation in lymphoma. It is generally painless.
B symptoms (systemic symptoms) – can be associated with both Hodgkin lymphoma and non-Hodgkin lymphoma. They consist of:
Fever
Night sweats
Weight loss
Other symptoms:
Anemia, bleeding, increased susceptibility to infections
Loss of appetite or anorexia
Fatigue
Respiratory distress or dyspnea
Itching
Diagnosis
Lymphoma is definitively diagnosed by a lymph-node biopsy, meaning a partial or total excision of a lymph node examined under the microscope. This examination reveals histopathological features that may indicate lymphoma. After lymphoma is diagnosed, a variety of tests may be carried out to look for specific features characteristic of different types of lymphoma. These include:
Immunophenotyping
Flow cytometry
Fluorescence in situ hybridization testing
Classification
According to the World Health Organization (WHO), lymphoma classification should reflect in which lymphocyte population the neoplasm arises. Thus, neoplasms that arise from precursor lymphoid cells are distinguished from those that arise from mature lymphoid cells. Most mature lymphoid neoplasms comprise the non-Hodgkin lymphomas. Historically, mature histiocytic and dendritic cell (HDC) neoplasms have been considered mature lymphoid neoplasms, since these often involve lymphoid tissue.
Lymphoma can also spread to the central nervous system, often around the brain in the meninges, known as lymphomatous meningitis (LM).
Hodgkin lymphoma
Hodgkin lymphoma accounts for about 15% of lymphomas. It differs from other forms of lymphomas in its prognosis and several pathological characteristics. A division into Hodgkin and non-Hodgkin lymphomas is used in several of the older classification systems. A Hodgkin lymphoma is marked by the presence of a type of cell called the Reed–Sternberg cell.
Non-Hodgkin lymphomas
Non-Hodgkin lymphomas, which are defined as being all lymphomas except Hodgkin lymphoma, are more common than Hodgkin lymphoma. A wide variety of lymphomas are in this class, and the causes, the types of cells involved, and the prognoses vary by type. The number of cases per year of non-Hodgkin lymphoma increases with age. It is further divided into several subtypes.
Epstein–Barr virus-associated lymphoproliferative diseases
Epstein–Barr virus-associated lymphoproliferative diseases are a group of benign, premalignant, and malignant diseases of lymphoid cells (i.e., B cells, T cells, NK cells, and histiocytic-dendritic cells) in which one or more of these cell types is infected with the Epstein–Barr virus (EBV). The virus may be responsible for the development and/or progression of these diseases. In addition to EBV-positive Hodgkin lymphomas, the World Health Organization (2016) includes the following lymphomas, when associated with EBV infection, in this group of diseases: Burkitt lymphoma; large B cell lymphoma, not otherwise specified; diffuse large B cell lymphoma associated with chronic inflammation; fibrin-associated diffuse large B cell lymphoma; primary effusion lymphoma; plasmablastic lymphoma; extranodal NK/T cell lymphoma, nasal type; peripheral T cell lymphoma, not otherwise specified; angioimmunoblastic T-cell lymphoma; follicular T cell lymphoma; and systemic T cell lymphoma of childhood.
WHO classification
The WHO classification, published in 2001 and updated in 2008, 2017, and 2022, is based upon the foundations laid within the "revised European–American lymphoma classification" (REAL). This system groups lymphomas by cell type (i.e., the normal cell type that most resembles the tumor) and defining phenotypic, molecular, or cytogenetic characteristics. The five groups are shown in the table. Hodgkin lymphoma is considered separately within the WHO and preceding classifications, although it is recognized as being a tumor, albeit markedly abnormal, of lymphocytes of mature B cell lineage.
Of the many forms of lymphoma, some are categorized as indolent (e.g. small lymphocytic lymphoma), compatible with a long life even without treatment, whereas other forms are aggressive (e.g. Burkitt's lymphoma), causing rapid deterioration and death. However, most of the aggressive lymphomas respond well to treatment and are curable. The prognosis, therefore, depends on the correct diagnosis and classification of the disease, which is established after examination of a biopsy by a pathologist (usually a hematopathologist).
Lymphoma subtypes (WHO 2008)
B-cell chronic lymphocytic leukemia/small lymphocytic lymphoma
3–4% of lymphomas in adults
Small resting lymphocytes mixed with variable numbers of large activated cells, lymph nodes diffusely effaced
CD5, surface immunoglobulin
5-year survival rate 50%.
Occurs in older adults, usually involves lymph nodes, bone marrow and spleen, most patients have peripheral blood involvement, indolent
B-cell prolymphocytic leukemia
Lymphoplasmacytic lymphoma (such as Waldenström macroglobulinemia)
Splenic marginal zone lymphoma
Hairy cell leukemia
Plasma cell neoplasms:
Plasma cell myeloma (also known as multiple myeloma)
Plasmacytoma
Monoclonal immunoglobulin deposition diseases
Heavy chain diseases
Extranodal marginal zone B cell lymphoma, also called MALT lymphoma
About 5% of lymphomas in adults
Variable cell size and differentiation, 40% show plasma cell differentiation, homing of B cells to epithelium creates lymphoepithelial lesions.
CD5, CD10, surface Ig
Frequently occurs outside lymph nodes, very indolent, may be cured by local excision
Nodal marginal zone B cell lymphoma
Follicular lymphoma
About 40% of lymphomas in adults
Small "cleaved" [cleft] cells (centrocytes) mixed with large activated cells (centroblasts), usually nodular ("follicular") growth pattern
CD10, surface Ig
About 72–77%
Occurs in older adults, usually involves lymph nodes, bone marrow and spleen, associated with t(14;18) translocation overexpressing Bcl-2, indolent
Primary cutaneous follicle center lymphoma
Mantle cell lymphoma
About 3–4% of lymphomas in adults
Lymphocytes of small to intermediate size growing in diffuse pattern
CD5
About 50 to 70%
Occurs mainly in adult males, usually involves lymph nodes, bone marrow, spleen and GI tract, associated with t(11;14) translocation overexpressing cyclin D1, moderately aggressive
Diffuse large B-cell lymphoma, not otherwise specified
About 40–50% of lymphomas in adults
Variable, most resemble B cells of large germinal centers, diffuse growth pattern
Variable expression of CD10 and surface Ig
Five-year survival rate 60%
Occurs in all ages, but most commonly in older adults, may occur outside lymph nodes, aggressive
Diffuse large B-cell lymphoma associated with chronic inflammation
Epstein–Barr virus positive diffuse large B-cell lymphoma, not otherwise specified
Lymphomatoid granulomatosis
Primary mediastinal (thymic) large B-cell lymphoma
Intravascular large B-cell lymphoma
ALK+ large B-cell lymphoma
Plasmablastic lymphoma
Primary effusion lymphoma
Large B-cell lymphoma arising in HHV8-associated multicentric Castleman's disease
Burkitt lymphoma/leukemia
< 1% of lymphomas in the United States
Round lymphoid cells of intermediate size with several nucleoli, starry-sky appearance by diffuse spread with interspersed apoptosis
CD10, surface Ig
Five-year survival rate 50%
Endemic in Africa, sporadic elsewhere, more common in immunocompromised and children, often visceral involvement, highly aggressive
T-cell prolymphocytic leukemia
T-cell large granular lymphocyte leukemia
Aggressive NK cell leukemia
Adult T-cell leukemia/lymphoma
Extranodal NK/T-cell lymphoma, nasal type
Enteropathy-associated T-cell lymphoma
Hepatosplenic T-cell lymphoma
Blastic NK cell lymphoma
Mycosis fungoides/Sézary syndrome
Most common cutaneous lymphoid malignancy
Usually small lymphoid cells with convoluted nuclei that often infiltrate the epidermis, creating Pautrier microabscesses
CD4
5-year survival 75%
Localized or more generalized skin symptoms, generally indolent, in a more aggressive variant, Sézary's disease, skin erythema and peripheral blood involvement
Primary cutaneous CD30-positive T-cell lymphoproliferative disorders
Primary cutaneous anaplastic large cell lymphoma
Lymphomatoid papulosis
Peripheral T-cell lymphoma not otherwise specified
Most common T cell lymphoma
Variable, usually a mix of small to large lymphoid cells with irregular nuclear contours
CD3
Probably consists of several rare tumor types, often disseminated and generally aggressive
Angioimmunoblastic T-cell lymphoma
Anaplastic large cell lymphoma: ALK-positive and ALK-negative types
Breast implant-associated anaplastic large cell lymphoma
B-lymphoblastic leukemia/lymphoma not otherwise specified
B-lymphoblastic leukemia/lymphoma with recurrent genetic abnormalities
T-lymphoblastic leukemia/lymphoma
15% of childhood acute lymphoblastic leukemia and 90% of lymphoblastic lymphoma.
Lymphoblasts with irregular nuclear contours, condensed chromatin, small nucleoli and scant cytoplasm without granules
TdT, CD2, CD7
It often presents as a mediastinal mass because of involvement of the thymus. It is highly associated with NOTCH1 mutations, and is most common in adolescent males.
Classical Hodgkin lymphomas:
Nodular sclerosis form of Hodgkin lymphoma
Most common type of Hodgkin lymphoma
Reed–Sternberg cell variants and inflammation, usually broad sclerotic bands that consist of collagen
CD15, CD30
Most common in young adults, often arises in the mediastinum or cervical lymph nodes
Mixed cellularity Hodgkin lymphoma
Second-most common form of Hodgkin lymphoma
Many classic Reed–Sternberg cells and inflammation
CD15, CD30
Most common in men, more likely to be diagnosed at advanced stages than the nodular sclerosis form; Epstein–Barr virus is involved in 70% of cases
Lymphocyte-rich
Lymphocyte depleted or not depleted
Nodular lymphocyte-predominant Hodgkin lymphoma
Associated with a primary immune disorder
Associated with the human immunodeficiency virus (HIV)
Post-transplant
Associated with methotrexate therapy
Primary central nervous system lymphoma occurs most often in immunocompromised patients, in particular those with AIDS, but it can occur in the immunocompetent, as well. It has a poor prognosis, particularly in those with AIDS. Treatment can consist of corticosteroids, radiotherapy, and chemotherapy, often with methotrexate.
Previous classifications
Several previous classifications have been used, including Rappaport 1956, Lennert/Kiel 1974, BNLI, Working formulation (1982), and REAL (1994).
The Working Formulation of 1982 was a classification of non-Hodgkin lymphoma. It excluded the Hodgkin lymphomas and divided the remaining lymphomas into four grades (low, intermediate, high, and miscellaneous) related to prognosis, with some further subdivisions based on the size and shape of affected cells. This purely histological classification included no information about cell surface markers or genetics and made no distinction between T-cell lymphomas and B-cell lymphomas. It was widely accepted at the time of its publication but by 2004 was obsolete.
In 1994, the Revised European-American Lymphoma (REAL) classification applied immunophenotypic and genetic features in identifying distinct clinicopathologic entities among all the lymphomas except Hodgkin lymphoma. For coding purposes, the ICD-O (codes 9590–9999) and ICD-10 (codes C81-C96) are available.
Staging
After a diagnosis and before treatment, cancer is staged. This refers to determining if the cancer has spread, and if so, whether locally or to distant sites. Staging is reported as a stage from I (confined) to IV (widespread). The stage of a lymphoma helps predict a patient's prognosis and is used to help select the appropriate therapy.
The Ann Arbor staging system is routinely used for staging of both HL and NHL. In this staging system, stage I represents localized disease contained within a lymph node group, II represents the presence of lymphoma in two or more lymph node groups, III represents spread of the lymphoma to lymph node groups on both sides of the diaphragm, and IV indicates spread to tissue outside the lymphatic system. Different suffixes imply the involvement of different organs, for example, S for the spleen and H for the liver. Extra-lymphatic involvement is expressed with the letter E. In addition, the presence of B symptoms (one or more of the following: unintentional loss of 10% of body weight in the last six months, night sweats, or persistent fever of 38 °C or more) or their absence is expressed with B or A, respectively.
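The staging rules above can be sketched as a small decision procedure. This is a simplified, hypothetical illustration of the Ann Arbor logic, not a clinical tool: the function name and inputs are invented for this example, and real staging is a clinical judgment made from imaging and biopsy findings.

```python
def ann_arbor_stage(node_regions, sides_of_diaphragm, extralymphatic, b_symptoms):
    """Return a rough Ann Arbor stage string from simplified inputs.

    node_regions: number of involved lymph-node regions
    sides_of_diaphragm: 1 or 2 (sides of the diaphragm with nodal involvement)
    extralymphatic: True if disease has spread outside the lymphatic system
    b_symptoms: True if fever, night sweats, or >10% weight loss are present
    """
    if extralymphatic:
        stage = "IV"                 # spread to tissue outside the lymphatic system
    elif sides_of_diaphragm == 2:
        stage = "III"                # nodes on both sides of the diaphragm
    elif node_regions >= 2:
        stage = "II"                 # two or more lymph node groups, same side
    else:
        stage = "I"                  # single lymph node group
    return stage + ("B" if b_symptoms else "A")

print(ann_arbor_stage(1, 1, False, False))  # -> IA
print(ann_arbor_stage(3, 2, False, True))   # -> IIIB
print(ann_arbor_stage(2, 2, True, False))   # -> IVA
```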
CT scan or PET scan imaging modalities are used to stage cancer. PET scanning is advised for fluorodeoxyglucose-avid lymphomas, such as Hodgkin lymphoma, as a staging tool that can even replace bone marrow biopsy. For other lymphomas, CT scanning is recommended for staging.
Age and poor performance status are other established poor prognostic factors. This means that people who are elderly or too sick to take care of themselves are more likely to be killed by lymphoma than others.
Differential diagnosis
Certain lymphomas (extranodal NK/T-cell lymphoma, nasal type and type II enteropathy-associated T-cell lymphoma) can be mimicked by two benign diseases that involve the excessive proliferation of nonmalignant NK cells in the GI tract, natural killer cell enteropathy, a disease wherein NK cell infiltrative lesions occur in the intestine, colon, stomach, or esophagus, and lymphomatoid gastropathy, a disease wherein these cells' infiltrative lesions are limited to the stomach. These diseases do not progress to cancer, may regress spontaneously and do not respond to, and do not require, chemotherapy or other lymphoma treatments.
Treatment
Prognoses and treatments are different for HL and between all the different forms of NHL, and also depend on the grade of tumour, referring to how quickly a cancer replicates. Paradoxically, high-grade lymphomas are more readily treated and have better prognoses: Burkitt lymphoma, for example, is a high-grade tumour known to double within days, and is highly responsive to treatment.
Low-grade
Many low-grade lymphomas remain indolent (growing slowly or not at all) for many years – sometimes, for the rest of the person's life. With an indolent lymphoma, such as follicular lymphoma, watchful waiting is often the initial course of action, because monitoring is less risky and less harmful than early treatment.
If a low-grade lymphoma becomes symptomatic, radiotherapy or chemotherapy are the treatments of choice. Although these treatments do not permanently cure the lymphoma, they can alleviate the symptoms, particularly painful lymphadenopathy. People with these types of lymphoma can live near-normal lifespans, even though the disease is technically incurable.
Some centers advocate the use of single agent rituximab in the treatment of follicular lymphoma rather than the wait-and-watch approach. Watchful waiting is not a desirable strategy for everyone, as it leads to significant distress and anxiety in some people. It has been called "watch and worry".
High-grade
Treatment of some other, more aggressive, forms of lymphoma can result in a cure in the majority of cases, but the prognosis for people with a poor response to therapy is worse. Treatment for these types of lymphoma typically consists of aggressive chemotherapy, including the CHOP or R-CHOP regimen. A number of people are cured with first-line chemotherapy. Most relapses occur within the first two years, and the relapse risk drops significantly thereafter. For people who relapse, high-dose chemotherapy followed by autologous stem cell transplantation is a proven approach.
The treatment of side effects is also important, as they can occur due to the chemotherapy or the stem cell transplantation. Mesenchymal stromal cells have been evaluated for the treatment and prophylaxis of graft-versus-host disease. The evidence is very uncertain about the effect of mesenchymal stromal cells, when used to treat graft-versus-host disease, on all-cause mortality and on the complete resolution of acute and chronic graft-versus-host disease. When used prophylactically, mesenchymal stromal cells may result in little to no difference in all-cause mortality, relapse of malignant disease, or the incidence of acute and chronic graft-versus-host disease. Moreover, platelet transfusions given to people undergoing chemotherapy or a stem cell transplantation to prevent bleeding events had different effects on the number of participants with a bleeding event, the number of days on which bleeding occurred, mortality secondary to bleeding, and the number of platelet transfusions, depending on how they were used (therapeutically, based on a threshold, with different dose schedules, or prophylactically).
Four chimeric antigen receptor T cell therapies are FDA-approved for non-Hodgkin lymphoma, including lisocabtagene maraleucel (for relapsed or refractory large B-cell lymphoma with two failed systemic treatments), axicabtagene ciloleucel, tisagenlecleucel (for large B-cell lymphoma), and brexucabtagene autoleucel (for mantle cell lymphoma). These therapies come with certification and other restrictions.
Hodgkin lymphoma
Hodgkin lymphoma typically is treated with radiotherapy alone, as long as it is localized.
Advanced Hodgkin disease requires systemic chemotherapy, sometimes combined with radiotherapy. Chemotherapy used includes the ABVD regimen, which is commonly used in the United States. Other regimens used in the management of Hodgkin lymphoma include BEACOPP and Stanford V. Considerable controversy exists regarding the use of ABVD or BEACOPP. Briefly, both regimens are effective, but BEACOPP is associated with more toxicity. Encouragingly, a significant number of people who relapse after ABVD can still be salvaged by stem cell transplant.
Scientists have evaluated whether positron emission tomography (PET) scans between chemotherapy cycles can be used to predict survival. The evidence is very uncertain about the effect of negative (good prognosis) or positive (bad prognosis) interim PET scan results on progression-free survival. Negative interim PET scan results may be associated with longer progression-free survival when the adjusted result was measured, and probably with a large increase in overall survival, compared with positive interim PET scan results.
Current research has evaluated whether nivolumab can be used for the treatment of Hodgkin lymphoma. The evidence is very uncertain about the effect of nivolumab in patients with Hodgkin lymphoma on overall survival, quality of life, progression-free survival, response rate (complete response), and grade 3 or 4 serious adverse events.
Palliative care
Palliative care, a specialized medical care focused on the symptoms, pain, and stress of a serious illness, is recommended by multiple national cancer treatment guidelines as an accompaniment to curative treatments for people with lymphoma. It is used to address both the direct symptoms of lymphoma and many unwanted side effects that arise from treatments. Palliative care can be especially helpful for children who develop lymphoma, helping both children and their families deal with the physical and emotional symptoms of the disease. For these reasons, palliative care is especially important for people requiring bone marrow transplants.
Supportive treatment
Adding physical exercise to the standard treatment for adult patients with haematological malignancies such as lymphoma may result in little to no difference in mortality, quality of life, or physical functioning. These exercises may result in a slight reduction in depression, and aerobic exercise probably reduces fatigue. The evidence is very uncertain about the effect on anxiety and serious adverse events.
Prognosis
Epidemiology
Lymphoma is the most common form of hematological malignancy, or "blood cancer", in the developed world.
Taken together, lymphomas represent 5.3% of all cancers (excluding simple basal cell and squamous cell skin cancers) in the United States and 55.6% of all blood cancers.
According to the U.S. National Institutes of Health, lymphomas account for about 5%, and Hodgkin lymphoma in particular accounts for less than 1% of all cases of cancer in the United States.
Because the whole lymphatic system is part of the body's immune system, people with a weakened immune system, such as from HIV infection or from certain drugs or medication, also have a higher incidence of lymphoma.
History
Thomas Hodgkin published the first description of lymphoma in 1832, specifically of the form named after him. Since then, many other forms of lymphoma have been described.
The term "lymphoma" is from Latin lympha ("water") and Greek -oma ("morbid growth, tumor").
Research
The two types of lymphoma research are clinical or translational research and basic research. Clinical/translational research focuses on studying the disease in a defined and generally immediately applicable way, such as testing a new drug in people. Studies may focus on effective means of treatment, better ways of treating the disease, improving the quality of life for people, or appropriate care in remission or after cures. Hundreds of clinical trials are being planned or conducted at any given time.
Basic science research studies the disease process at a distance, such as seeing whether a suspected carcinogen can cause healthy cells to turn into lymphoma cells in the laboratory or how the DNA changes inside lymphoma cells as the disease progresses. The results from basic research studies are generally less immediately useful to people with the disease, but can improve scientists' understanding of lymphoma and form the foundation for future, more effective treatments.
Other animals
Lymphatic system

The lymphatic system, or lymphoid system, is an organ system in vertebrates that is part of the immune system and complementary to the circulatory system. It consists of a large network of lymphatic vessels, lymph nodes, lymphoid organs, lymphatic tissue and lymph. Lymph is a clear fluid carried by the lymphatic vessels back to the heart for re-circulation. The Latin word for lymph, lympha, refers to the deity of fresh water, "Lympha".
Unlike the circulatory system that is a closed system, the lymphatic system is open. The human circulatory system processes an average of 20 litres of blood per day through capillary filtration, which removes plasma from the blood. Roughly 17 litres of the filtered blood is reabsorbed directly into the blood vessels, while the remaining three litres are left in the interstitial fluid. One of the main functions of the lymphatic system is to provide an accessory return route to the blood for the surplus three litres.
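The fluid balance above is simple arithmetic, sketched here as a check using the approximate daily volumes given in the text:

```python
# Approximate daily fluid volumes from the text, in litres.
filtered_l = 20        # plasma removed from blood by capillary filtration per day
reabsorbed_l = 17      # reabsorbed directly into the blood vessels

# The surplus left in the interstitial fluid is what the lymphatic
# system must return to the blood.
lymph_return_l = filtered_l - reabsorbed_l
print(lymph_return_l)  # -> 3
```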
The other main function is that of immune defense. Lymph is very similar to blood plasma, in that it contains waste products and cellular debris, together with bacteria and proteins. The cells of the lymph are mostly lymphocytes. Associated lymphoid organs are composed of lymphoid tissue, and are the sites either of lymphocyte production or of lymphocyte activation. These include the lymph nodes (where the highest lymphocyte concentration is found), the spleen, the thymus, and the tonsils. Lymphocytes are initially generated in the bone marrow. The lymphoid organs also contain other types of cells such as stromal cells for support. Lymphoid tissue is also associated with mucosas such as mucosa-associated lymphoid tissue (MALT).
Fluid from circulating blood leaks into the tissues of the body by capillary action, carrying nutrients to the cells. The fluid bathes the tissues as interstitial fluid, collecting waste products, bacteria, and damaged cells, and then drains as lymph into the lymphatic capillaries and lymphatic vessels. These vessels carry the lymph throughout the body, passing through numerous lymph nodes which filter out unwanted materials such as bacteria and damaged cells. Lymph then passes into much larger lymph vessels known as lymph ducts. The right lymphatic duct drains the right side of the region and the much larger left lymphatic duct, known as the thoracic duct, drains the left side of the body. The ducts empty into the subclavian veins to return to the blood circulation. Lymph is moved through the system by muscle contractions. In some vertebrates, a lymph heart is present that pumps the lymph to the veins.
The lymphatic system was first described in the 17th century independently by Olaus Rudbeck and Thomas Bartholin.
Structure
The lymphatic system consists of a conducting network of lymphatic vessels, lymphoid organs, lymphoid tissues, and the circulating lymph.
Primary lymphoid organs
The primary (or central) lymphoid organs, including the thymus, bone marrow, fetal liver and yolk sac, are responsible for generating lymphocytes from immature progenitor cells in the absence of antigens. The thymus and the bone marrow constitute the primary lymphoid organs involved in the production and early clonal selection of lymphocyte tissues.
In avian species, the primary lymphoid organs include the bone marrow, thymus, bursa of Fabricius, and yolk sac.
Bone marrow
Bone marrow is responsible for both the creation of T cell precursors and the production and maturation of B cells, which are important cell types of the immune system. From the bone marrow, B cells immediately join the circulatory system and travel to secondary lymphoid organs in search of pathogens. T cells, on the other hand, travel from the bone marrow to the thymus, where they develop further and mature. Mature T cells then join B cells in search of pathogens. However, only about 5% of developing T cells survive selection in the thymus; the other 95% undergo apoptosis, a form of programmed cell death.
Thymus
The thymus increases in size from birth in response to postnatal antigen stimulation. It is most active during the neonatal and pre-adolescent periods. The thymus is located between the inferior neck and the superior thorax. At puberty, by the early teens, the thymus begins to atrophy and regress, with adipose tissue mostly replacing the thymic stroma. However, residual T cell lymphopoiesis continues throughout adult life, providing some immune response. The thymus is where the T lymphocytes mature and become immunocompetent. The loss or lack of the thymus results in severe immunodeficiency and subsequent high susceptibility to infection. In most species, the thymus consists of lobules divided by septa made up of epithelium; it is therefore often considered an epithelial organ. T cells mature from thymocytes, proliferate, and undergo a selection process in the thymic cortex before entering the medulla to interact with epithelial cells.
Research on bony fish such as salmon has shown a buildup of T cells in the lymphoid tissues of the thymus and spleen, with few T cells present in non-lymphoid tissues.
The thymus provides an inductive environment for the development of T cells from hematopoietic progenitor cells. In addition, thymic stromal cells allow for the selection of a functional and self-tolerant T cell repertoire. Therefore, one of the most important roles of the thymus is the induction of central tolerance. However, the thymus is not where the infection is fought, as the T cells have yet to become immunocompetent.
Secondary lymphoid organs
The secondary (or peripheral) lymphoid organs, which include lymph nodes and the spleen, maintain mature naive lymphocytes and initiate an adaptive immune response. The secondary lymphoid organs are the sites of lymphocyte activation by antigens. Activation leads to clonal expansion, and affinity maturation. Mature lymphocytes recirculate between the blood and the secondary lymphoid organs until they encounter their specific antigen.
Spleen
The main functions of the spleen are:
to produce immune cells to fight antigens
to remove particulate matter and aged blood cells, mainly red blood cells
to produce blood cells during fetal life.
The spleen synthesizes antibodies in its white pulp and removes antibody-coated bacteria and antibody-coated blood cells by way of blood and lymph node circulation. The white pulp provides immune function through the lymphocytes housed there, while the red pulp is responsible for removing aged red blood cells as well as pathogens, a task carried out by the macrophages it contains. A study published in 2009 using mice found that the red pulp of the spleen holds, in reserve, half of the body's monocytes. Upon moving to injured tissue (such as the heart), these monocytes turn into dendritic cells and macrophages and promote tissue healing. The spleen is a center of activity of the mononuclear phagocyte system and can be considered analogous to a large lymph node, as its absence causes a predisposition to certain infections. The spleen also recycles some components of old erythrocytes and discards others; for example, hemoglobin is broken down into amino acids that are reused.
Research on bony fish has shown that a high concentration of T cells is found in the white pulp of the spleen.
Like the thymus, the spleen has only efferent lymphatic vessels. Both the short gastric arteries and the splenic artery supply it with blood. The germinal centers are supplied by arterioles called penicilliary radicles.
In humans, until the fifth month of prenatal development the spleen creates red blood cells; after birth, the bone marrow is solely responsible for hematopoiesis. As a major lymphoid organ and a central player in the reticuloendothelial system, the spleen retains the ability to produce lymphocytes. The spleen stores red blood cells and lymphocytes, and can store enough blood cells to help in an emergency. Up to 25% of lymphocytes can be stored at any one time.
Lymph nodes
A lymph node is an organized collection of lymphoid tissue, through which the lymph passes on its way back to the blood. Lymph nodes are located at intervals along the lymphatic system. Several afferent lymph vessels bring in lymph, which percolates through the substance of the lymph node, and is then drained out by an efferent lymph vessel. Of the nearly 800 lymph nodes in the human body, about 300 are located in the head and neck. Many are grouped in clusters in different regions, as in the underarm and abdominal areas. Lymph node clusters are commonly found at the proximal ends of limbs (groin, armpits) and in the neck, where lymph is collected from regions of the body likely to sustain pathogen contamination from injuries. Lymph nodes are particularly numerous in the mediastinum in the chest, neck, pelvis, axilla, inguinal region, and in association with the blood vessels of the intestines.
The substance of a lymph node consists of lymphoid follicles in an outer portion called the cortex. The inner portion of the node is called the medulla, which is surrounded by the cortex on all sides except for a portion known as the hilum. The hilum presents as a depression on the surface of the lymph node, causing the otherwise spherical lymph node to be bean-shaped or ovoid. The efferent lymph vessel directly emerges from the lymph node at the hilum. The arteries and veins supplying the lymph node with blood enter and exit through the hilum. The region of the lymph node called the paracortex immediately surrounds the medulla. Unlike the cortex, whose follicles consist mostly of B cells, the paracortex is a T cell zone containing a mixture of immature and mature T cells. Lymphocytes enter the lymph nodes through specialised high endothelial venules found in the paracortex.
A lymph follicle is a dense collection of lymphocytes, the number, size, and configuration of which change in accordance with the functional state of the lymph node. For example, the follicles expand significantly when encountering a foreign antigen. The selection of B cells, or B lymphocytes, occurs in the germinal centre of the lymph nodes.
Secondary lymphoid tissue provides the environment for the foreign or altered native molecules (antigens) to interact with the lymphocytes. It is exemplified by the lymph nodes, and the lymphoid follicles in tonsils, Peyer's patches, spleen, adenoids, skin, etc. that are associated with the mucosa-associated lymphoid tissue (MALT).
In the gastrointestinal wall, the appendix has mucosa resembling that of the colon, but here it is heavily infiltrated with lymphocytes.
Tertiary lymphoid organs
Tertiary lymphoid organs (TLOs) are abnormal lymph node-like structures that form in peripheral tissues at sites of chronic inflammation, such as chronic infection, transplanted organs undergoing graft rejection, some cancers, and autoimmune and autoimmune-related diseases. TLOs are often characterized by a CD20+ B cell zone surrounded by a CD3+ T cell zone, similar to the lymphoid follicles of secondary lymphoid organs (SLOs). They are regulated differently from lymphoid tissues formed during normal ontogeny, being dependent on cytokines and hematopoietic cells, but they still drain interstitial fluid and transport lymphocytes in response to the same chemical messengers and gradients. Mature TLOs often have an active germinal center surrounded by a network of follicular dendritic cells (FDCs). Although the specific composition of TLOs may vary, the dominant subset within the T cell compartment is CD4+ T follicular helper (TFH) cells, but CD8+ cytotoxic T cells, CD4+ T helper 1 (TH1) cells, and regulatory T cells (Tregs) can also be found within the T cell zone. The B cell zone contains two main areas: the mantle, located at the periphery and composed of naive immunoglobulin D (IgD)+ B cells surrounding the germinal centre, and the germinal centre itself, defined by the presence of proliferating Ki67+CD23+ B cells and a CD21+ FDC network, as observed in SLOs. TLOs typically contain far fewer lymphocytes than SLOs, and assume an immune role only when challenged with antigens that result in inflammation; they achieve this by importing lymphocytes from blood and lymph.
According to the composition and activation status of the cells within these lymphoid structures, at least three organizational levels of TLOs have been described. TLO formation starts with the aggregation of lymphoid cells and occasional dendritic cells, with FDCs still lacking at this stage. The next stage is the immature TLO, also known as a primary follicle-like TLO, which has increased numbers of T cells and B cells in distinct T cell and B cell zones, together with a forming FDC network, but no germinal centres. Finally, fully mature (secondary follicle-like) TLOs often have active germinal centres and high endothelial venules (HEVs), demonstrating a functional capacity to promote T cell and B cell activation and thus the expansion of the TLO through cell proliferation and recruitment. During TLO formation, T cells and B cells are separated into two distinct but adjacent zones, with some cells able to migrate between them, a crucial step in the development of an effective and coordinated immune response.
TLOs are now recognized as having an important role in the immune response to cancer and as a prognostic marker for immunotherapy. They have been reported in different cancer types such as melanoma, non-small cell lung cancer and colorectal cancer, as well as glioma. TLOs have also been used as a read-out of treatment efficacy: for example, in patients with pancreatic ductal adenocarcinoma (PDAC), vaccination led to the formation of TLOs in responders. In these patients, lymphocytes in the TLOs displayed an activated phenotype, and in vitro experiments showed their capacity to perform effector functions. Patients in whom TLOs are present tend to have a better prognosis, although in certain cancer types the opposite effect has been observed. Moreover, TLOs with an active germinal center appear to confer a better prognosis than TLOs without one. The likely reason these patients live longer is that TLOs promote the immune response against the tumor. TLOs may also enhance the anti-tumor response when patients are treated with immunotherapy such as immune checkpoint blockade.
Other lymphoid tissue
Lymphoid tissue associated with the lymphatic system is concerned with immune functions in defending the body against infections and the spread of tumours. It consists of connective tissue formed of reticular fibers, with various types of leukocytes (white blood cells), mostly lymphocytes enmeshed in it, through which the lymph passes. Regions of the lymphoid tissue that are densely packed with lymphocytes are known as lymphoid follicles. Lymphoid tissue can either be structurally well organized as lymph nodes or may consist of loosely organized lymphoid follicles known as the mucosa-associated lymphoid tissue (MALT).
The central nervous system also has lymphatic vessels. The search for T cell gateways into and out of the meninges uncovered functional meningeal lymphatic vessels lining the dural sinuses, anatomically integrated into the membrane surrounding the brain.
Lymphatic vessels
The lymphatic vessels, also called lymph vessels, are thin-walled vessels that conduct lymph between different parts of the body. They include the tubular vessels of the lymph capillaries, and the larger collecting vessels – the right lymphatic duct and the thoracic duct (the left lymphatic duct). The lymph capillaries are mainly responsible for the absorption of interstitial fluid from the tissues, while lymph vessels propel the absorbed fluid forward into the larger collecting ducts, where it ultimately returns to the bloodstream via one of the subclavian veins.
The tissues of the lymphatic system are responsible for maintaining the balance of the body fluids. Its network of capillaries and collecting lymphatic vessels work to efficiently drain and transport extravasated fluid, along with proteins and antigens, back to the circulatory system. Numerous intraluminal valves in the vessels ensure a unidirectional flow of lymph without reflux. Two valve systems, a primary and a secondary valve system, are used to achieve this unidirectional flow. The capillaries are blind-ended, and the valves at the ends of capillaries use specialised junctions together with anchoring filaments to allow a unidirectional flow to the primary vessels. When interstitial fluid increases, it causes swelling that stretches collagen fibers anchored to adjacent connective tissue, in turn opening the unidirectional valves at the ends of these capillaries, facilitating the entry and subsequent drainage of excess lymph fluid. The collecting lymphatics, however, act to propel the lymph by the combined actions of the intraluminal valves and lymphatic muscle cells.
Development
Lymphatic tissues begin to develop by the end of the fifth week of embryonic development.
Lymphatic vessels develop from lymph sacs that arise from developing veins, which are derived from mesoderm.
The first lymph sacs to appear are the paired jugular lymph sacs at the junction of the internal jugular and subclavian veins.
From the jugular lymph sacs, lymphatic capillary plexuses spread to the thorax, upper limbs, neck, and head.
Some of the plexuses enlarge and form lymphatic vessels in their respective regions. Each jugular lymph sac retains at least one connection with its jugular vein, the left one developing into the superior portion of the thoracic duct.
The spleen develops from mesenchymal cells between layers of the dorsal mesentery of the stomach.
The thymus arises as an outgrowth of the third pharyngeal pouch.
Function
The lymphatic system has multiple interrelated functions:
It is responsible for the removal of interstitial fluid from tissues
It absorbs and transports fatty acids and fats as chyle from the digestive system
It transports white blood cells to and from the lymph nodes into the bones
The lymph transports antigen-presenting cells, such as dendritic cells, to the lymph nodes where an immune response is stimulated.
Fat absorption
Lymph vessels called lacteals are at the beginning of the gastrointestinal tract, predominantly in the small intestine. While most other nutrients absorbed by the small intestine are passed on to the portal venous system to drain via the portal vein into the liver for processing, fats (lipids) are passed on to the lymphatic system to be transported to the blood circulation via the thoracic duct. (There are exceptions, for example medium-chain triglycerides are fatty acid esters of glycerol that passively diffuse from the GI tract to the portal system.) The enriched lymph originating in the lymphatics of the small intestine is called chyle. The nutrients that are released into the circulatory system are processed by the liver, having passed through the systemic circulation.
Immune function
The lymphatic system plays a major role in the body's immune system, as the primary site for cells of the adaptive immune system, including T cells and B cells.
Cells in the lymphatic system react to antigens that they encounter directly or that are presented to them by dendritic cells.
When an antigen is recognized, an immunological cascade begins involving the activation and recruitment of more and more cells, the production of antibodies and cytokines and the recruitment of other immunological cells such as macrophages.
Clinical significance
The study of lymphatic drainage of various organs is important in the diagnosis, prognosis, and treatment of cancer. The lymphatic system, because of its closeness to many tissues of the body, is responsible for carrying cancerous cells between the various parts of the body in a process called metastasis. The intervening lymph nodes can trap the cancer cells. If they are not successful in destroying the cancer cells, the nodes may become sites of secondary tumours.
The lymphatic system (LS) comprises lymphoid organs and a network of vessels responsible for transporting interstitial fluid, antigens, lipids, cholesterol, immune cells, and other materials throughout the body. Dysfunction or abnormal development of the LS has been linked to numerous diseases, making it critical for fluid balance, immune cell trafficking, and inflammation control. Recent advancements, including single-cell technologies, clinical imaging, and biomarker discovery, have improved the ability to study and understand the LS, providing potential pathways for disease prevention and treatment. Studies have shown that the lymphatic system also plays a role in modulating immune responses, with dysfunction linked to chronic inflammatory and autoimmune conditions, as well as cancer progression.
Enlarged lymph nodes
Lymphadenopathy refers to one or more enlarged lymph nodes. Small groups or individually enlarged lymph nodes are generally reactive in response to infection or inflammation. This is called local lymphadenopathy. When many lymph nodes in different areas of the body are involved, this is called generalised lymphadenopathy. Generalised lymphadenopathy may be caused by infections such as infectious mononucleosis, tuberculosis and HIV, connective tissue diseases such as SLE and rheumatoid arthritis, and cancers, including both cancers of tissue within lymph nodes, discussed below, and metastasis of cancerous cells from other parts of the body, that have arrived via the lymphatic system.
Lymphedema
Lymphedema is the swelling caused by the accumulation of lymph, which may occur if the lymphatic system is damaged or has malformations. It usually affects limbs, though the face, neck and abdomen may also be affected. In an extreme state, called elephantiasis, the edema progresses to the extent that the skin becomes thick with an appearance similar to the skin on elephant limbs.
Causes are unknown in most cases, but sometimes there is a previous history of severe infection, usually caused by a parasitic disease, such as lymphatic filariasis.
Lymphangiomatosis is a disease involving multiple cysts or lesions formed from lymphatic vessels.
Lymphedema can also occur after surgical removal of lymph nodes in the armpit (causing the arm to swell due to poor lymphatic drainage) or groin (causing swelling of the leg). Conventional treatment is by manual lymphatic drainage and compression garments. Two drugs for the treatment of lymphedema are in clinical trials: Lymfactin and Ubenimex/Bestatin. There is no evidence to suggest that the effects of manual lymphatic drainage are permanent.
Cancer
Cancer of the lymphatic system can be primary or secondary. Lymphoma refers to cancer that arises from lymphatic tissue. Lymphoid leukaemias and lymphomas are now considered to be tumours of the same type of cell lineage. They are called "leukaemia" when in the blood or marrow and "lymphoma" when in lymphatic tissue. They are grouped together under the name "lymphoid malignancy".
Lymphoma is generally considered as either Hodgkin lymphoma or non-Hodgkin lymphoma. Hodgkin lymphoma is characterised by a particular type of cell, called a Reed–Sternberg cell, visible under microscope. It is associated with past infection with the Epstein–Barr virus, and generally causes a painless "rubbery" lymphadenopathy. It is staged using Ann Arbor staging. Chemotherapy generally involves the ABVD regimen and may also involve radiotherapy. Non-Hodgkin lymphoma, a cancer characterised by increased proliferation of B cells or T cells, generally occurs in an older age group than Hodgkin lymphoma. It is treated according to whether it is high-grade or low-grade, and carries a poorer prognosis than Hodgkin lymphoma.
Lymphangiosarcoma is a malignant soft tissue tumour, whereas lymphangioma is a benign tumour occurring frequently in association with Turner syndrome. Lymphangioleiomyomatosis is a benign tumour of the smooth muscles of the lymphatics that occurs in the lungs.
Lymphoid leukaemia is another form of cancer, in which malignant lymphoid cells proliferate in the blood and bone marrow.
Other
Castleman's disease
Chylothorax
Kawasaki disease
Kikuchi disease
Lipedema
Lymphangitis
Lymphatic filariasis
Lymphocytic choriomeningitis
Solitary lymphatic nodule
History
Hippocrates, in the 5th century BC, was one of the first people to mention the lymphatic system; in his work On Joints, he briefly mentioned the lymph nodes in one sentence. Rufus of Ephesus, a Roman physician, identified the axillary, inguinal and mesenteric lymph nodes as well as the thymus during the 1st to 2nd century AD. The first mention of lymphatic vessels was in the 3rd century BC by Herophilos, a Greek anatomist living in Alexandria, who incorrectly concluded that the "absorptive veins of the lymphatics," by which he meant the lacteals (lymph vessels of the intestines), drained into the hepatic portal veins, and thus into the liver. The findings of Rufus and Herophilos were further propagated by the Greek physician Galen, who described the lacteals and mesenteric lymph nodes which he observed in his dissections of apes and pigs in the 2nd century AD.
In the mid-16th century, Gabriele Falloppio (discoverer of the fallopian tubes) described what are now known as the lacteals as "coursing over the intestines full of yellow matter." In about 1563 Bartolomeo Eustachi, a professor of anatomy, described the thoracic duct in horses as vena alba thoracis. The next breakthrough came in 1622, when the physician Gaspare Aselli identified lymphatic vessels of the intestines in dogs and termed them venae albae et lacteae, now known simply as the lacteals. The lacteals were termed the fourth kind of vessels (the other three being the artery, vein and nerve, which was then believed to be a type of vessel), and this disproved Galen's assertion that chyle was carried by the veins. However, Aselli still believed that the lacteals carried the chyle to the liver (as taught by Galen). He also identified the thoracic duct but failed to notice its connection with the lacteals. This connection was established by Jean Pecquet in 1651, who found a white fluid mixing with blood in a dog's heart. He suspected that fluid to be chyle, as its flow increased when abdominal pressure was applied. He traced this fluid to the thoracic duct, which he then followed to a chyle-filled sac he called the chyli receptaculum, now known as the cisterna chyli; further investigations led him to find that the lacteals' contents enter the venous system via the thoracic duct. It was thus proven convincingly that the lacteals did not terminate in the liver, disproving Galen's second idea: that the chyle flowed to the liver. Johann Veslingius drew the earliest sketches of the lacteals in humans in 1641.
The idea that blood recirculates through the body, rather than being produced anew by the liver and the heart, was first accepted as a result of the work of William Harvey, published in 1628. In 1652, Olaus Rudbeck (1630–1702) discovered certain transparent vessels in the liver that contained clear fluid (and not white), and thus named them hepatico-aqueous vessels. He also learned that they emptied into the thoracic duct and that they had valves. He announced his findings in the court of Queen Christina of Sweden, but did not publish them for a year, and in the interim similar findings were published by Thomas Bartholin, who additionally published that such vessels are present everywhere in the body, not just in the liver. Bartholin is also the one to have named them "lymphatic vessels." This resulted in a bitter dispute between Rudbeck and one of Bartholin's pupils, Martin Bogdan, who accused Rudbeck of plagiarism.
Galen's ideas prevailed in medicine until the 17th century. It was thought that blood was produced by the liver from chyle contaminated with ailments by the intestine and stomach, to which various spirits were added by other organs, and that this blood was consumed by all the organs of the body. This theory required that the blood be consumed and produced many times over. Even in the 17th century, his ideas were defended by some physicians.
Alexander Monro, of the University of Edinburgh Medical School, was the first to describe the function of the lymphatic system in detail.
In 2015, University of Virginia (UVA) School of Medicine researchers Jonathan Kipnis and Antoine Louveau discovered previously unknown vessels connecting the brain directly to the lymphatic system. The discovery "redrew the map" of the lymphatic system, rewrote medical textbooks, and struck down long-held beliefs about how the immune system functions in the brain. The discovery may help greatly in combating neurological diseases from multiple sclerosis to Alzheimer's disease.
Etymology
Lymph originates in the Classical Latin word lympha, "water", which is also the source of the English word limpid. The spelling with y and ph was influenced by folk etymology associating it with Greek nýmphē, "nymph".
The adjective used for the lymph-transporting system is lymphatic. The adjective used for the tissues where lymphocytes are formed is lymphoid. Lymphatic comes from the Latin word lymphaticus, meaning "connected to water."
Universal Turing machine
In computer science, a universal Turing machine (UTM) is a Turing machine capable of computing any computable sequence, as described by Alan Turing in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Common sense might say that a universal machine is impossible, but Turing proves that it is possible. He suggested that we may compare a human in the process of computing a real number to a machine which is only capable of a finite number of conditions, which will be called "m-configurations". He then described the operation of such a machine, as described below, and argued:
Turing introduced the idea of such a machine in 1936–1937.
Introduction
Martin Davis makes a persuasive argument that Turing's conception of what is now known as "the stored-program computer", of placing the "action table"—the instructions for the machine—in the same "memory" as the input data, strongly influenced John von Neumann's conception of the first American discrete-symbol (as opposed to analog) computer—the EDVAC. Davis quotes Time magazine to this effect, that "everyone who taps at a keyboard ... is working on an incarnation of a Turing machine", and that "John von Neumann [built] on the work of Alan Turing".
Davis makes a case that Turing's Automatic Computing Engine (ACE) computer "anticipated" the notions of microprogramming (microcode) and RISC processors. Donald Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage"; Davis also references this work as Turing's use of a hardware "stack".
Just as the Turing machine encouraged the construction of computers, the UTM encouraged the development of the fledgling computer sciences. An early, if not the first, assembler was proposed "by a young hot-shot programmer" for the EDVAC. Von Neumann's "first serious program ... [was] to simply sort data efficiently". Knuth observes that the subroutine return embedded in the program itself, rather than in special registers, is attributable to von Neumann and Goldstine. Knuth furthermore states that
Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data.
Mathematical theory
With this encoding of action tables as strings, it becomes possible, in principle, for Turing machines to answer questions about the behaviour of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the Halting problem, was shown to be, in general, undecidable in Turing's original paper. Rice's theorem shows that any non-trivial question about the output of a Turing machine is undecidable.
A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church–Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an algorithm or an effective method of computation, for any reasonable definition of those terms. For these reasons, a universal Turing machine serves as a standard against which to compare computational systems, and a system that can simulate a universal Turing machine is called Turing complete.
An abstract version of the universal Turing machine is the universal function, a computable function which can be used to calculate any other computable function. The UTM theorem proves the existence of such a function.
Efficiency
Without loss of generality, the input of a Turing machine can be assumed to be in the alphabet {0, 1}; any other finite alphabet can be encoded over {0, 1}. The behavior of a Turing machine M is determined by its transition function. This function can be easily encoded as a string over the alphabet {0, 1} as well. The size of the alphabet of M, the number of tapes it has, and the size of the state space can be deduced from the transition function's table. The distinguished states and symbols can be identified by their position, e.g. the first two states can by convention be the start and stop states. Consequently, every Turing machine can be encoded as a string over the alphabet {0, 1}. Additionally, we stipulate that every invalid encoding maps to a trivial Turing machine that immediately halts, and that every Turing machine can have an infinite number of encodings obtained by padding the encoding with an arbitrary number of (say) 1's at the end, much as comments work in a programming language. It should be no surprise that we can achieve this encoding given the existence of Gödel numbering and the computational equivalence between Turing machines and μ-recursive functions. Similarly, this construction associates to every binary string α a Turing machine Mα.
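The key point of the encoding is that a Turing machine's transition function is just data, so one generic program can simulate any machine handed to it as input. The following sketch (an illustration, not from the article; the bit-flipping example machine and all names are assumptions) shows a transition table driving a single-tape simulator:

```python
# A machine's behavior is fully determined by its transition function,
# so a generic simulator can take the table as an ordinary value --
# just as a universal machine takes an encoded machine as input.

def run_tm(delta, tape, state="start", blank="_", max_steps=1000):
    """Simulate a single-tape TM. `delta` maps (state, symbol) to
    (new_state, written_symbol, head_move)."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = delta[(state, cells.get(head, blank))]
        head += move
    return state, "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm(flip, "1011"))  # ('halt', '0100_')
```

Serializing the `delta` dictionary over {0, 1} (and padding with trailing 1's, as above) would turn this into the string encoding the paragraph describes.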
Starting from the above encoding, in 1966 F. C. Hennie and R. E. Stearns showed that given a Turing machine Mα that halts on input x within N steps, there exists a multi-tape universal Turing machine that halts on inputs α, x (given on different tapes) within CN log N steps, where C is a machine-specific constant that does not depend on the length of the input x, but does depend on M's alphabet size, number of tapes, and number of states. Effectively this is an O(N log N) simulation, using Donald Knuth's Big O notation. The corresponding result for space complexity rather than time complexity is that we can simulate in a way that uses at most CN cells at any stage of the computation, an O(N) simulation.
Smallest machines
When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols. He also showed that no universal Turing machine of one state could exist.
Marvin Minsky discovered a 7-state 4-symbol universal Turing machine in 1962 using 2-tag systems. Other small universal Turing machines have since been found by Yurii Rogozhin and others by extending this approach of tag system simulation. If we denote by (m, n) the class of UTMs with m states and n symbols, the following tuples have been found: (15, 2), (9, 3), (6, 4), (5, 5), (4, 6), (3, 9), and (2, 18). Rogozhin's (4, 6) machine uses only 22 instructions, and no standard UTM of lesser descriptional complexity is known.
However, generalizing the standard Turing machine model admits even smaller UTMs. One such generalization is to allow an infinitely repeated word on one or both sides of the Turing machine input, thus extending the definition of universality; such machines are known as "semi-weakly" or "weakly" universal, respectively. Small weakly universal Turing machines that simulate the Rule 110 cellular automaton have been given for the (6, 2), (3, 3), and (2, 4) state-symbol pairs. The proof of universality for Wolfram's 2-state 3-symbol Turing machine further extends the notion of weak universality by allowing certain non-periodic initial configurations. Other variants on the standard Turing machine model that yield small UTMs include machines with multiple tapes or multidimensional tapes, and machines coupled with a finite automaton.
Machines with no internal states
If multiple heads are allowed on a Turing machine, then no internal states are required, since "states" can be encoded on the tape. For example, consider a tape with 6 colours: 0, 1, 2, 0A, 1A, 2A. Consider a tape such as 0,0,1,2,2A,0,2,1 where a 3-headed Turing machine is situated over the triple (2,2A,0). The rules then convert any triple to another triple and move the 3 heads left or right. For example, the rules might convert (2,2A,0) to (2,1,0) and move the heads left. Thus in this example, the machine acts like a 3-colour Turing machine with two internal states, A (marked by the letter) and B (marked by its absence). The case for a 2-headed Turing machine is very similar. Thus a 2-headed Turing machine can be universal with 6 colours. It is not known what the smallest number of colours needed for a multi-headed Turing machine is, or whether a 2-colour universal Turing machine is possible with multiple heads. It also means that rewrite rules are Turing complete, since the triple rules are equivalent to rewrite rules. Extending the tape to two dimensions with a head sampling a letter and its 8 neighbours, only 2 colours are needed, as for example, a colour can be encoded in a vertical triple pattern such as 110.
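As an illustration of how a rule table over triples can carry the "state" in the tape itself, here is a minimal sketch; the single rule shown is the hypothetical example from the text, not a universal rule set:

```python
# Illustrative sketch: a "stateless" machine whose control state lives in
# the tape letters. A rule maps the triple under the three heads to a new
# triple plus a head shift. The rule table here is the made-up example
# from the text, not a known universal machine.

rules = {
    ("2", "2A", "0"): (("2", "1", "0"), -1),   # rewrite and move heads left
}

def step(tape, pos):
    """tape: list of colours; pos: index of the leftmost of the 3 heads."""
    triple = tuple(tape[pos:pos + 3])
    if triple not in rules:
        return tape, pos, False                 # halt: no rule applies
    new_triple, move = rules[triple]
    tape[pos:pos + 3] = list(new_triple)        # rewrite the triple in place
    return tape, pos + move, True

tape = ["0", "0", "1", "2", "2A", "0", "2", "1"]
tape, pos, running = step(tape, 3)              # heads over ("2", "2A", "0")
# The "A" marker has been rewritten, i.e. the implicit state changed,
# and the heads moved one cell left.
```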
Also, if the distance between the two heads is variable (the tape has "slack" between the heads), then it can simulate any Post tag system, some of which are universal.
Example of coding
For those who would undertake the challenge of designing a UTM exactly as Turing specified, see the article by Davies, who corrects the errors in the original and shows what a sample run would look like. He successfully ran a (somewhat simplified) simulation.
The following example is taken from Turing's paper. For more about this example, see Turing machine examples.
Turing used seven symbols { A, C, D, R, L, N, ; } to encode each 5-tuple; as described in the article Turing machine, his 5-tuples are only of types N1, N2, and N3. The number of each "configuration" (instruction, state) is represented by "D" followed by a unary string of A's, e.g. "q3" = DAAA. In a similar manner, he encodes the blank symbol as "D", the symbol "0" as "DC", the symbol "1" as "DCC", etc. The symbols "R", "L", and "N" remain as is.
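Turing's scheme for states and symbols can be sketched directly; the example machine at the end is hypothetical, chosen only to show the shape of the output:

```python
# Sketch of Turing's "standard description": state q_n becomes "D" followed
# by n A's, symbol S_n becomes "D" followed by n C's (blank = "D"), and the
# move letters R/L/N are kept as-is.

def enc_state(n):
    return "D" + "A" * n            # q3 -> "DAAA"

def enc_symbol(n):
    return "D" + "C" * n            # blank -> "D", "0" -> "DC", "1" -> "DCC"

def enc_tuple(state, scanned, printed, move, next_state):
    # 5-tuple order: m-configuration, scanned symbol, printed symbol,
    # move, final m-configuration.
    return (enc_state(state) + enc_symbol(scanned) +
            enc_symbol(printed) + move + enc_state(next_state))

# A hypothetical 2-tuple machine; each encoded tuple is introduced by ";".
tuples = [(1, 0, 1, "R", 2), (2, 0, 0, "R", 3)]
code = "".join(";" + enc_tuple(*t) for t in tuples)
```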
After encoding, each 5-tuple is "assembled" into a string in the order shown in the following table:
Finally, the codes for all four 5-tuples are strung together into a code started by ";" and separated by ";" i.e.:
This code he placed on alternate squares—the "F-squares" – leaving the "E-squares" (those liable to erasure) empty. The final assembly of the code on the tape for the U-machine consists of placing two special symbols ("e") one after the other, then the code separated out on alternate squares, and lastly the double-colon symbol "::" (blanks shown here with "." for clarity):
The U-machine's action table (state-transition table) is responsible for decoding the symbols. Turing's action table keeps track of its place with markers "u", "v", "x", "y", "z", placing them in "E-squares" to the right of "the marked symbol". For example, to mark the current instruction, z is placed to the right of ";", and x keeps the place with respect to the current "m-configuration" DAA. The U-machine's action table will shuttle these symbols around (erasing them and placing them in different locations) as the computation progresses:
Turing's action-table for his U-machine is very involved.
Roger Penrose provides examples of ways to encode instructions for the Universal machine using only binary symbols { 0, 1 }, or { blank, mark | }. Penrose goes further and writes out his entire U-machine code. He asserts that it truly is a U-machine code, an enormous number that spans almost 2 full pages of 1's and 0's.
Asperti and Ricciotti described a multi-tape UTM defined by composing elementary machines with very simple semantics, rather than explicitly giving its full action table. This approach was sufficiently modular to allow them to formally prove the correctness of the machine in the Matita proof assistant.
Barn (unit)

A barn (symbol: b) is a metric unit of area equal to 10−28 m2 (100 fm2). This is equivalent to the area of a square with sides of 10−14 m (10 fm), or a circle of diameter approximately 1.128 × 10−14 m (11.28 fm).
Originally used in nuclear physics for expressing the cross-sectional area of nuclei and nuclear reactions, today it is also used in all fields of high-energy physics to express the cross sections of any scattering process, and is best understood as a measure of the probability of interaction between small particles. A barn is approximately the cross-sectional area of a uranium nucleus. The barn is also the unit of area used in nuclear quadrupole resonance and nuclear magnetic resonance to quantify the interaction of a nucleus with an electric field gradient. While the barn was never an SI unit, the SI standards body acknowledged it in the 8th SI Brochure (superseded in 2019) due to its use in particle physics.
Etymology
During Manhattan Project research on the atomic bomb during World War II, American physicists Marshall Holloway and Charles P. Baker were working at Purdue University on a project using a particle accelerator to measure the cross sections of certain nuclear reactions. According to an account of theirs from a couple of years later, they were dining in a cafeteria in December 1942 and discussing their work. They "lamented" that there was no name for the unit of cross section and challenged themselves to develop one. They initially tried to find the name of "some great man closely associated with the field" that they could name the unit after, but struggled to find one that was appropriate. They considered "Oppenheimer" too long (in retrospect, they considered an "Oppy" to perhaps have been allowable), and considered "Bethe" to be too easily confused with the commonly-used Greek letter beta. They then considered naming it after John Manley, another scientist associated with their work, but considered "Manley" too long and "John" too closely associated with toilets. But this latter association, combined with the "rural background" of one of the scientists, suggested to them the term "barn", which also worked because the unit was "really as big as a barn." According to the authors, the first published use of the term was in a (secret) Los Alamos report from late June 1943, on which the two originators were co-authors.
Commonly used prefixed versions
The unit symbol for the barn (b) is also the IEEE standard symbol for bit. In other words, 1 Mb can mean one megabarn or one megabit.
Conversions
Calculated cross sections are often given in terms of inverse squared gigaelectronvolts (GeV−2), via the conversion ħ2c2/GeV2 = 0.3894 mb = 3.894 × 10−32 m2.
In natural units (where ħ = c = 1), this simplifies to GeV−2 = 0.3894 mb = 389.4 μb.
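A small sketch of the unit conversion, using the rounded value ħc ≈ 0.19733 GeV·fm:

```python
# Convert a cross section quoted in natural units (GeV^-2) to barns,
# using (hbar*c) ≈ 0.19733 GeV·fm and 1 b = 100 fm^2.

HBARC_GEV_FM = 0.19733            # hbar*c in GeV·fm (rounded)
FM2_PER_BARN = 100.0              # 1 barn = 100 fm^2

def gev2_inv_to_barn(sigma_gev2):
    """Cross section in GeV^-2 -> barns."""
    fm2 = sigma_gev2 * HBARC_GEV_FM ** 2   # GeV^-2 -> fm^2
    return fm2 / FM2_PER_BARN

mb = gev2_inv_to_barn(1.0) * 1e3          # 1 GeV^-2 in millibarns
# mb ≈ 0.3894, matching the conversion factor quoted above.
```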
SI units with prefix
In SI, one can use units such as square femtometers (fm2). The most common SI prefixed unit for the barn is the femtobarn, which is equal to a tenth of a square zeptometer. Many papers in high-energy physics quote cross sections at the level of fractions of a femtobarn.
Inverse femtobarn
The inverse femtobarn (fb−1) is the unit typically used to measure the number of particle collision events per femtobarn of target cross-section, and is the conventional unit for time-integrated luminosity. Thus if a detector has accumulated 100 fb−1 of integrated luminosity, one expects to find 100 events per femtobarn of cross-section within these data.
Consider a particle accelerator where two streams of particles, with cross-sectional areas measured in femtobarns, are directed to collide over a period of time. The total number of collisions will be directly proportional to the luminosity of the collisions measured over this time. Therefore, the collision count can be calculated by multiplying the integrated luminosity by the sum of the cross-sections of those collision processes. The integrated luminosity over a period is then quoted in inverse femtobarns (e.g., 100 fb−1 in nine months). Inverse femtobarns are often quoted as an indication of particle collider productivity.
Fermilab produced 10 fb−1 in the first decade of the 21st century. Fermilab's Tevatron took about 4 years to reach 1 fb−1 in 2005, while two of CERN's LHC experiments, ATLAS and CMS, each reached over 5 fb−1 of proton–proton data in 2011 alone. In April 2012 the LHC achieved a collision energy of 8 TeV with a luminosity peak of 6760 inverse microbarns per second; by May 2012 the LHC delivered 1 inverse femtobarn of data per week to each detector collaboration. A record of over 23 fb−1 was achieved during 2012. As of November 2016, the LHC had achieved over 40 fb−1 that year, significantly exceeding the stated goal of 25 fb−1. In total, the second run of the LHC delivered around 150 fb−1 to both ATLAS and CMS in 2015–2018.
Usage example
As a simplified example, if a beamline runs for 8 hours (28,800 seconds) at a given instantaneous luminosity, the integrated luminosity gathered over the run is that instantaneous luminosity multiplied by 28,800 seconds. Multiplying the integrated luminosity by the cross-section then yields a dimensionless number equal to the number of expected scattering events.
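The arithmetic can be sketched as follows; the instantaneous luminosity and cross-section values are made up for illustration:

```python
# Expected event count = time-integrated luminosity × cross-section.
# The numeric inputs below are illustrative, not from the text.

inst_lumi = 2.0e-3        # instantaneous luminosity, fb^-1 per second (made up)
seconds = 8 * 3600        # an 8-hour run = 28,800 seconds
sigma_fb = 50.0           # process cross-section in femtobarns (made up)

integrated = inst_lumi * seconds          # integrated luminosity, fb^-1
expected_events = integrated * sigma_fb   # dimensionless event count
# integrated ≈ 57.6 fb^-1, expected_events ≈ 2880
```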
Exotic matter

There are several proposed types of exotic matter:
Hypothetical particles and states of matter that have not yet been encountered, but whose properties would be within the realm of mainstream physics if found to exist.
Several particles whose existence has been experimentally confirmed that are conjectured to be exotic hadrons and within the Standard Model.
States of matter that are not commonly encountered, such as Bose–Einstein condensates, fermionic condensates, nuclear matter, quantum spin liquid, string-net liquid, supercritical fluid, color-glass condensate, quark–gluon plasma, Rydberg matter, Rydberg polaron, photonic matter, Wigner crystal, superfluid, and time crystal, but whose properties are entirely within the realm of mainstream physics.
Forms of matter that are poorly understood, such as dark matter and mirror matter.
Ordinary matter that when placed under high pressure, may result in dramatic changes in its physical or chemical properties.
Degenerate matter
Exotic atoms
Negative mass
Negative mass would possess some strange properties, such as accelerating in the direction opposite to an applied force. Despite being inconsistent with the expected behavior of "normal" matter, negative mass is mathematically consistent and introduces no violation of conservation of momentum or energy. It is used in certain speculative theories, such as the construction of artificial wormholes and the Alcubierre drive. The closest known real representative of such exotic matter is the region of pseudo-negative-pressure density produced by the Casimir effect.
Complex mass
A hypothetical particle with complex rest mass would always travel faster than the speed of light. Such particles are called tachyons. There is no confirmed existence of tachyons.
If the rest mass is complex, then in the relativistic energy relation E = mc^2/√(1 − v^2/c^2) the denominator must be complex as well, because the total energy is observable and thus must be real. Therefore, the quantity under the square root must be negative, which can only happen if v is greater than c. As noted by Gregory Benford et al., special relativity implies that tachyons, if they existed, could be used to communicate backwards in time (see tachyonic antitelephone). Because time travel is considered to be non-physical, tachyons are believed by physicists either not to exist, or else to be incapable of interacting with normal matter.
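The argument can be written as a short derivation in standard special relativity, taking the rest mass purely imaginary:

```latex
% Relativistic total energy of a free particle of rest mass m and speed v:
E = \frac{m c^2}{\sqrt{1 - v^2/c^2}}
% Suppose m = i\mu with \mu real (a purely imaginary rest mass).
% For E to be real, the denominator must also be imaginary, i.e.
\sqrt{1 - v^2/c^2} = i\sqrt{v^2/c^2 - 1}
\quad\text{requires}\quad 1 - \frac{v^2}{c^2} < 0 \iff v > c,
% in which case the imaginary factors cancel and the energy is real:
E = \frac{i\mu c^2}{i\sqrt{v^2/c^2 - 1}} = \frac{\mu c^2}{\sqrt{v^2/c^2 - 1}}.
```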
In quantum field theory, complex mass would induce tachyon condensation.
Materials at high pressure
At high pressure, materials such as sodium chloride (NaCl) in the presence of an excess of either chlorine or sodium were transformed into compounds "forbidden" by classical chemistry, such as Na3Cl and NaCl3. Quantum mechanical calculations predict the possibility of further compounds of this kind. The materials are thermodynamically stable at high pressures. Such compounds may exist in natural environments that exist at high pressure, such as the deep ocean or inside planetary cores. The materials have potentially useful properties. For instance, Na3Cl is a two-dimensional metal, made of layers of pure sodium and salt that can conduct electricity. The salt layers act as insulators while the sodium layers act as conductors.
Hepatitis C

Hepatitis C is an infectious disease caused by the hepatitis C virus (HCV) that primarily affects the liver; it is a type of viral hepatitis. During the initial infection period, people often have mild or no symptoms. Early symptoms can include fever, dark urine, abdominal pain, and yellow-tinged skin. The virus persists in the liver, becoming chronic, in about 70% of those initially infected. Early on, chronic infection typically has no symptoms. Over many years, however, it often leads to liver disease and occasionally cirrhosis. In some cases, those with cirrhosis will develop serious complications such as liver failure, liver cancer, or dilated blood vessels in the esophagus and stomach.
HCV is spread primarily by blood-to-blood contact associated with injection drug use, poorly sterilized medical equipment, needlestick injuries in healthcare, and transfusions. In regions where blood screening has been implemented, the risk of contracting HCV from a transfusion has dropped substantially to less than one per two million. HCV may also be spread from an infected mother to her baby during birth. It is not spread through breast milk, food, water, or casual contact such as hugging, kissing, and sharing food or drinks with an infected person. It is one of five known hepatitis viruses: A, B, C, D, and E.
Diagnosis is by blood testing to look for either antibodies to the virus or viral RNA. In the United States, screening for HCV infection is recommended in all adults age 18 to 79 years old.
There is no vaccine against hepatitis C. Prevention includes harm reduction efforts among people who inject drugs, testing donated blood, and treatment of people with chronic infection. Chronic infection can be cured more than 95% of the time with antiviral medications such as sofosbuvir or simeprevir. Peginterferon and ribavirin were earlier generation treatments that proved successful in <50% of cases and caused greater side effects. While access to the newer treatments was expensive, by 2022 prices had dropped dramatically in many countries (primarily low-income and lower-middle-income countries) due to the introduction of generic versions of medicines. Those who develop cirrhosis or liver cancer may require a liver transplant. Hepatitis C is one of the leading reasons for liver transplantation. However, the virus usually recurs after transplantation.
An estimated 58 million people worldwide were infected with hepatitis C in 2019. Approximately 290,000 deaths from the virus, mainly from liver cancer and cirrhosis attributed to hepatitis C, also occurred in 2019. The existence of hepatitis C – originally identifiable only as a type of non-A non-B hepatitis – was suggested in the 1970s and proven in 1989. Hepatitis C infects only humans and chimpanzees.
Signs and symptoms
Acute infection
Acute symptoms develop in some 20% of those infected. When this occurs, it is generally 4–12 weeks following infection (but it may take from 2 weeks to 6 months for acute symptoms to appear).
Symptoms are generally mild and vague, and may include fatigue, nausea and vomiting, fever, muscle or joint pains, abdominal pain, decreased appetite and weight loss, jaundice (occurs in ~25% of those infected), dark urine, and clay-coloured stools. Acute liver failure due to acute hepatitis C is exceedingly rare. Symptoms and laboratory findings suggestive of liver disease should prompt further tests and can thus help establish a diagnosis of hepatitis C infection early on.
Following the acute phase, the infection may resolve spontaneously in 10–50% of affected people; this occurs more frequently in young people and females.
Chronic infection
About 70% of those exposed to the virus develop a chronic infection. This is defined as the presence of detectable viral replication for at least six months. Though most experience minimal or no symptoms during the initial few decades of a chronic infection, chronic hepatitis C can be associated with fatigue and mild cognitive problems. After several years, chronic infection may cause cirrhosis or liver cancer. The liver enzymes measured from blood samples are normal in 7–53% of cases. (Elevated levels indicate the virus or other disease is damaging liver cells.) Late relapses after apparent cure have been reported, but these can be difficult to distinguish from reinfection.
Fatty changes to the liver occur in about half of those infected and are usually present before cirrhosis develops. Usually (80% of the time) this change affects less than a third of the liver. Worldwide hepatitis C is the cause of 27% of cirrhosis cases and 25% of hepatocellular carcinoma. About 10–30% of those infected develop cirrhosis over 30 years. Cirrhosis is more common in those also infected with hepatitis B, schistosoma, or HIV, in alcoholics, and in those of male sex. In those with hepatitis C, excess alcohol increases the risk of developing cirrhosis 5-fold. Those who develop cirrhosis have a 20-fold greater risk of hepatocellular carcinoma. This transformation occurs at a rate of 1–3% per year. Being infected with hepatitis B in addition to hepatitis C increases this risk further.
Liver cirrhosis may lead to portal hypertension, ascites (accumulation of fluid in the abdomen), easy bruising or bleeding, varices (enlarged veins, especially in the stomach and esophagus), jaundice, and a syndrome of cognitive impairment known as hepatic encephalopathy. Ascites occurs at some stage in more than half of those who have a chronic infection.
Extrahepatic complications
The most common problem due to hepatitis C but not involving the liver is mixed cryoglobulinemia (usually the type II form) – an inflammation of small and medium-sized blood vessels. Hepatitis C is also associated with autoimmune disorders such as Sjögren's syndrome, lichen planus, a low platelet count, porphyria cutanea tarda, necrolytic acral erythema, insulin resistance, diabetes mellitus, diabetic nephropathy, autoimmune thyroiditis, and B-cell lymphoproliferative disorders. 20–30% of people infected have rheumatoid factor – a type of antibody. Possible associations include Hyde's prurigo nodularis and membranoproliferative glomerulonephritis. Cardiomyopathy with associated abnormal heart rhythms has also been reported. A variety of central and peripheral nervous system disorders has been reported. Chronic infection seems to be associated with an increased risk of pancreatic cancer. People may experience other issues in the mouth such as dryness, salivary duct stones, and crusted lesions around the mouth.
Occult infection
Persons who have been infected with hepatitis C may appear to clear the virus but remain infected. The virus is not detectable with conventional testing but can be found with ultra-sensitive tests. The original method of detection was by demonstrating the viral genome within liver biopsies, but newer methods include an antibody test for the virus' core protein and the detection of the viral genome after first concentrating the viral particles by ultracentrifugation. A form of infection with persistently moderately elevated serum liver enzymes but without antibodies to hepatitis C has also been reported. This form is known as cryptogenic occult infection.
Several clinical pictures have been associated with this type of infection. It may be found in people with anti-hepatitis-C antibodies but with normal serum levels of liver enzymes; in antibody-negative people with ongoing elevated liver enzymes of unknown cause; in healthy populations without evidence of liver disease; and in groups at risk for HCV infection including those on hemodialysis or family members of people with occult HCV. The clinical relevance of this form of infection is under investigation. The consequences of occult infection appear to be less severe than with chronic infection but can vary from minimal to hepatocellular carcinoma.
The rate of occult infection in those apparently cured is controversial but appears to be low. 40% of those with hepatitis but with both negative hepatitis C serology and the absence of detectable viral genome in the serum have hepatitis C virus in the liver on biopsy. How commonly this occurs in children is unknown.
Virology
The hepatitis C virus (HCV) is a small, enveloped, single-stranded, positive-sense RNA virus. It is a member of the genus Hepacivirus in the family Flaviviridae. There are seven major genotypes of HCV, which are known as genotypes one to seven. The genotypes are divided into several subtypes with the number of subtypes depending on the genotype. In the United States, about 70% of cases are caused by genotype 1, 20% by genotype 2, and about 1% by each of the other genotypes. Genotype 1 is also the most common in South America and Europe.
The half-life of the virus particles in the serum is around 3 hours and may be as short as 45 minutes. In an infected person, about one trillion (10^12) virus particles are produced each day. In addition to replicating in the liver, the virus can multiply in lymphocytes.
Transmission
Percutaneous contact with contaminated blood is responsible for most infections; however, the method of transmission is strongly dependent on both geographic region and economic status. The primary route of transmission in the developed world is injection drug use, while in the developing world the main methods are blood transfusions and unsafe medical procedures. The route of transmission remains unknown in 20% of cases; however, many of these are believed to be accounted for by injection drug use.
Body modification
Tattooing is associated with a two- to threefold increased risk of hepatitis C. This could be due to improperly sterilized equipment or contamination of the dyes used. Tattoos or piercings performed either before the mid-1980s, "underground", or nonprofessionally are of particular concern, since sterile techniques in such settings may be lacking. The risk also appears to be greater for larger tattoos. It is estimated that nearly half of prison inmates share unsterilized tattooing equipment. It is rare for tattoos in a licensed facility to be directly associated with HCV infection.
Drug use
Injection drug use (IDU) is a major risk factor for hepatitis C in many parts of the world. Of 77 countries reviewed, 25 (including the United States) were found to have an HCV prevalence of 60–80% among people who use injection drugs. Twelve countries had rates greater than 80%. It is believed that ten million people who use intravenous drugs are infected with HCV; China (1.6 million), the United States (1.5 million), and Russia (1.3 million) have the highest absolute totals. The occurrence of hepatitis C among prison inmates in the United States is 10 to 20 times that observed in the general population; this has been attributed to high-risk behavior in prisons such as IDU and tattooing with non-sterile equipment. Shared intranasal drug use may also be a risk factor.
Fomites
A fomite or fomes is any inanimate object that, when contaminated with or exposed to infectious agents (such as pathogenic bacteria, viruses or fungi), can transfer disease to a new host.
Personal-care items such as razors, toothbrushes, and manicuring or pedicuring equipment can be contaminated with blood. Sharing such items can potentially lead to exposure to HCV. Appropriate caution should be taken regarding any medical condition that results in bleeding, such as cuts and sores. HCV is not spread through casual contact, such as hugging, kissing, or sharing eating or cooking utensils, nor is it transmitted through food or water.
Healthcare exposure
Blood transfusion, transfusion of blood products, or organ transplants without HCV screening carry significant risks of infection. The United States instituted universal screening in 1992, and Canada instituted universal screening in 1990. This decreased the risk from one in 200 units to between one in 10,000 and one in 10,000,000 per unit of blood. This low risk remains because there is a period of about 11–70 days between the potential blood donor's acquiring HCV and the blood's testing positive, depending on the method. Some countries do not screen for HCV due to the cost.
Those who have experienced a needle stick injury from someone who was HCV positive have about a 1.8% chance of subsequently contracting the disease themselves. The risk is greater if the needle is hollow and the puncture wound is deep. There is a risk from mucosal exposure to blood, but this risk is low, and there is no risk if blood exposure occurs on intact skin.
Hospital equipment has also been documented as a method of transmission of HCV, including the reuse of needles and syringes, multiple-use medication vials, infusion bags, and improperly sterilized surgical equipment, among others. Limitations in the implementation and enforcement of stringent standard precautions in public and private medical and dental facilities are known to have been the primary cause of the spread of HCV in Egypt, the country that had the highest rate of infection in the world in 2012. Egypt has since become the first country to achieve WHO validation on the path to elimination of hepatitis C.
For more, see HONOReform (Hepatitis Outbreaks National Organization for Reform).
Mother-to-child transmission
Mother-to-child transmission of hepatitis C occurs in fewer than 10% of pregnancies. There are no measures that alter this risk. It is not clear when transmission occurs during pregnancy, but it may occur both during gestation and at delivery. A long labor is associated with a greater risk of transmission. There is no evidence that breastfeeding spreads HCV; however, to be cautious, an infected mother is advised to avoid breastfeeding if her nipples are cracked and bleeding, or if her viral loads are high.
Sexual intercourse
Sexual transmission of hepatitis C is uncommon. Studies examining the risk of HCV transmission between heterosexual partners, when one is infected and the other is not, have found very low risks. Sexual practices that involve higher levels of trauma to the anogenital mucosa, such as anal penetrative sex, or that occur when there is a concurrent sexually transmitted infection, including HIV or genital ulceration, present greater risks. The United States Department of Veterans Affairs recommends condom use to prevent transmission in those with multiple partners, but not those in relationships that involve only a single partner.
Diagnosis
There are several diagnostic tests for hepatitis C, including the HCV antibody enzyme immunoassay (ELISA), recombinant immunoblot assay, and quantitative HCV RNA polymerase chain reaction (PCR). HCV RNA can be detected by PCR typically one to two weeks after infection. In contrast, antibodies can take substantially longer to form and thus be detected.
Diagnosis is generally a challenge, as patients with acute illness often present with mild, non-specific flu-like symptoms, while the transition from acute to chronic infection is sub-clinical. Chronic hepatitis C is defined as infection with the virus persisting for more than six months based on the presence of its RNA. Chronic infections are typically asymptomatic during the first few decades, and thus are most commonly discovered following the investigation of elevated liver enzyme levels or during routine screening of high-risk individuals. Testing is not able to distinguish between acute and chronic infections. Diagnosis in infants is difficult as maternal antibodies may persist for up to 18 months.
Serology
Hepatitis C testing typically begins with blood testing to detect the presence of antibodies to HCV, using an enzyme immunoassay. If this test is positive, a confirmatory test is then performed to verify the immunoassay and to determine the viral load. A recombinant immunoblot assay is used to verify the immunoassay, and the viral load is determined by an HCV RNA polymerase chain reaction. If there is no RNA and the immunoblot is positive, it means that the person tested had a previous infection but cleared it either with treatment or spontaneously; if the immunoblot is negative, it means that the immunoassay was wrong. It takes about 6–8 weeks following infection before the immunoassay will test positive. Several tests are available as point-of-care testing (POCT), which can provide results within 30 minutes.
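The two-step workflow can be sketched as a simple decision procedure; this is an illustrative simplification of the testing logic described above, not clinical guidance:

```python
# Simplified sketch of interpreting the two-step hepatitis C workup:
# an antibody immunoassay followed by confirmatory RNA PCR. Real
# interpretation involves more nuance (e.g. immunoblot verification);
# this is illustration only, not clinical guidance.

def interpret(antibody_positive, rna_detected):
    """Combine immunoassay (antibody) and confirmatory RNA PCR results."""
    if rna_detected:
        # RNA appears within 1-2 weeks, possibly before antibodies form.
        return "current HCV infection; quantify viral load"
    if antibody_positive:
        # Antibodies without RNA: a past infection that was cleared.
        return "past infection, cleared spontaneously or with treatment"
    # Neither marker: no evidence of infection (antibodies can take
    # about 6-8 weeks to become detectable).
    return "no evidence of HCV infection"

result = interpret(antibody_positive=True, rna_detected=False)
# result describes a resolved past infection
```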
Liver enzymes are variable during the initial part of the infection and on average begin to rise seven weeks after infection. The elevation of liver enzymes does not closely follow disease severity.
Biopsy
Liver biopsies are used to determine the degree of liver damage present; however, there are risks from the procedure. The typical changes seen are lymphocytes within the parenchyma, lymphoid follicles in portal triad, and changes to the bile ducts. There are many blood tests available that try to determine the degree of hepatic fibrosis and alleviate the need for biopsy.
Screening
It is believed that only 5–50% of those infected in the United States and Canada are aware of their status. Routine screening for those between the ages of 18 and 79 was recommended by the United States Preventive Services Task Force in 2020. Previously, testing was recommended for those at high risk, including injection drug users, those who received blood transfusions before 1992, those who have been incarcerated, those on long-term hemodialysis, and those with tattoos. Screening is also recommended for those with elevated liver enzymes, as this is frequently the only sign of chronic hepatitis. The U.S. Centers for Disease Control and Prevention (CDC) recommends a single screening test for those born between 1945 and 1965. In Canada, one-time screening is recommended for those born between 1945 and 1975.
Prevention
As of 2022, no approved vaccine protects against contracting hepatitis C. A combination of harm reduction strategies, such as the provision of new needles and syringes and treatment of substance use, decreases the risk of hepatitis C in people who inject drugs by about 75%. The screening of blood donors is important at a national level, as is adhering to universal precautions within healthcare facilities. In countries with an insufficient supply of sterile syringes, medications should be given orally rather than by injection when possible. Recent research also suggests that treating people with active infection, thereby reducing the potential for transmission, may be an effective preventive measure.
Hepatitis C vaccine phase 1 clinical trials are set to begin in the summer of 2023.
Treatment
Those with chronic hepatitis C are advised to avoid alcohol and medications that are toxic to the liver. They should also be vaccinated against hepatitis A and hepatitis B because of the increased risk if co-infected. Use of acetaminophen is generally considered safe at reduced doses. Nonsteroidal anti-inflammatory drugs (NSAIDs) are not recommended in those with advanced liver disease due to an increased risk of bleeding. Ultrasound surveillance for hepatocellular carcinoma is recommended in those with accompanying cirrhosis. Coffee consumption has been associated with a slower rate of liver scarring in those infected with HCV.
Medications
More than 95% of chronic hepatitis C cases are resolved with treatment. Treatment with antiviral medication is recommended for all people with proven chronic hepatitis C who are not at high risk of death from other causes. People with the highest complication risk, based on the degree of liver scarring, should be treated first. The initial recommended treatment depends on the type of hepatitis C virus, whether the person has received previous hepatitis C treatment, and whether the person has cirrhosis. Direct-acting antivirals are the preferred treatment; their effectiveness is verified by testing for the virus in patients' blood.
No prior treatment
HCV genotype 1a (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or ledipasvir/sofosbuvir (the latter for people who do not have HIV/AIDS, are not African-American, and have less than 6 million HCV viral copies per milliliter of blood) or 12 weeks of elbasvir/grazoprevir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. Sofosbuvir with either daclatasvir or simeprevir may also be used.
HCV genotype 1a (with compensated cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of elbasvir/grazoprevir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. An alternative regimen of elbasvir/grazoprevir with weight-based ribavirin for 16 weeks can be used if the HCV is found to have resistance mutations against NS5A inhibitors.
HCV genotype 1b (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or ledipasvir/sofosbuvir (with the aforementioned limitations for the latter) or 12 weeks of elbasvir/grazoprevir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. Alternative regimens include 12 weeks of ombitasvir/paritaprevir/ritonavir with dasabuvir or 12 weeks of sofosbuvir with either daclatasvir or simeprevir.
HCV genotype 1b (with compensated cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of elbasvir/grazoprevir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. A 12-week course of paritaprevir/ritonavir/ombitasvir with dasabuvir may also be used.
HCV genotype 2 (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir. Alternatively, 12 weeks of sofosbuvir/daclatasvir can be used.
HCV genotype 2 (with compensated cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir. An alternative regimen of sofosbuvir/daclatasvir can be used for 16–24 weeks.
HCV genotype 3 (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir or sofosbuvir and daclatasvir.
HCV genotype 3 (with compensated cirrhosis): 8 weeks of glecaprevir/pibrentasvir, 12 weeks of sofosbuvir/velpatasvir, 12 weeks of sofosbuvir/velpatasvir/voxilaprevir (when certain antiviral resistance mutations are present), or 24 weeks of sofosbuvir and daclatasvir.
HCV genotype 4 (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir, elbasvir/grazoprevir, or ledipasvir/sofosbuvir. A 12-week ombitasvir/paritaprevir/ritonavir regimen is also acceptable in combination with weight-based ribavirin.
HCV genotype 4 (with compensated cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir, elbasvir/grazoprevir, or ledipasvir/sofosbuvir is recommended. A 12-week course of ombitasvir/paritaprevir/ritonavir with weight-based ribavirin is an acceptable alternative.
HCV genotype 5 or 6 (with or without compensated cirrhosis): 8 weeks of glecaprevir/pibrentasvir is recommended. If cirrhosis is present, then a 12-week course of sofosbuvir/velpatasvir, or ledipasvir/sofosbuvir is an alternative option.
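The genotype- and cirrhosis-dependent choices above form, in effect, a lookup table. The sketch below condenses a few of the entries into a Python dictionary purely as an illustration of the decision structure; it is not clinical guidance, it shows only a subset of the genotype/cirrhosis combinations listed above, and the function name is hypothetical:

```python
# Illustrative sketch only -- NOT clinical guidance.
# Condensed from the genotype list above, for treatment-naive patients.
# Key: (genotype, has_compensated_cirrhosis) -> recommended options.
REGIMENS = {
    ("1a", False): ["glecaprevir/pibrentasvir x 8 wk",
                    "elbasvir/grazoprevir x 12 wk",
                    "ledipasvir/sofosbuvir x 12 wk",
                    "sofosbuvir/velpatasvir x 12 wk"],
    ("1a", True):  ["glecaprevir/pibrentasvir x 8 wk",
                    "elbasvir/grazoprevir x 12 wk",
                    "ledipasvir/sofosbuvir x 12 wk",
                    "sofosbuvir/velpatasvir x 12 wk"],
    ("2", False):  ["glecaprevir/pibrentasvir x 8 wk",
                    "sofosbuvir/velpatasvir x 12 wk"],
    ("3", False):  ["glecaprevir/pibrentasvir x 8 wk",
                    "sofosbuvir/velpatasvir x 12 wk",
                    "sofosbuvir + daclatasvir x 12 wk"],
    # ...remaining genotype/cirrhosis combinations omitted for brevity
}

def recommended(genotype: str, cirrhosis: bool) -> list:
    """Return the recommended regimen options, shortest duration first."""
    return REGIMENS[(genotype, cirrhosis)]

print(recommended("3", False)[0])  # glecaprevir/pibrentasvir x 8 wk
```

Note that 8 weeks of glecaprevir/pibrentasvir appears as the first option for every combination in the source list, which is why it heads every entry here.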
More than 95% of people with chronic infection can be cured when treated with medications; this could be expensive, but by 2022 prices had dropped dramatically. The combination of sofosbuvir, velpatasvir, and voxilaprevir may be used in those who have previously been treated with sofosbuvir or other drugs that inhibit NS5A and were not cured.
Before 2011, treatments consisted of a combination of pegylated interferon alpha and ribavirin for a period of 24 or 48 weeks, depending on HCV genotype. This regimen produced cure rates of 70–80% for genotypes 2 and 3 and 45–70% for genotypes 1 and 4. Adverse effects were common: 50–60% of those treated experienced flu-like symptoms, and nearly a third experienced depression or other emotional issues. Treatment during the first six months of infection (the acute stage) is more effective than treatment once hepatitis C has entered the chronic stage. In those with chronic hepatitis B, treatment for hepatitis C results in reactivation of hepatitis B about 25% of the time.
Surgery
Cirrhosis due to hepatitis C is a common reason for liver transplantation, though the virus usually (in 80–90% of cases) recurs afterwards. Infection of the graft leads to 10–30% of people developing cirrhosis within five years. Treatment with pegylated interferon and ribavirin post-transplant decreases the risk of recurrence to 70%. A 2013 review found no clear evidence as to whether antiviral medication is useful if the graft becomes reinfected.
Alternative medicine
Several alternative therapies are claimed by their proponents to be helpful for hepatitis C, including milk thistle, ginseng, and colloidal silver. However, no alternative therapy has been shown to improve outcomes, and no evidence exists that alternative therapies have any effect on the virus.
Prognosis
The response to treatment is measured by sustained viral response (SVR), defined as the absence of detectable RNA of the hepatitis C virus in blood serum for at least 24 weeks after discontinuing treatment, and rapid virological response (RVR), defined as undetectable levels achieved within four weeks of treatment. Successful treatment decreases the future risk of hepatocellular carcinoma by 75%.
Before 2012, sustained response occurred in about 40–50% of those with HCV genotype 1 who received 48 weeks of treatment. A sustained response was seen in 70–80% of people with HCV genotypes 2 and 3 following 24 weeks of treatment. A sustained response occurs for about 65% of those with genotype 4 after 48 weeks of treatment. For those with HCV genotype 6, a 48-week treatment protocol of pegylated interferon and ribavirin results in a higher rate of sustained responses than for genotype 1 (86% vs. 52%). Further studies are needed to determine results for shorter 24-week treatments and those given at lower dosages.
Spontaneous resolution
Around 30% (range 15–45%) of those with acute HCV infection will spontaneously clear the virus within six months; infections persisting beyond that point are considered chronic. Spontaneous resolution following acute infection appears more common in females and younger patients and may be influenced by certain genetic factors. Chronic HCV infection may also resolve spontaneously months or years after the acute phase has passed, though this is unusual.
Epidemiology
The World Health Organization estimated in a 2021 report that 58 million people globally were living with chronic hepatitis C as of 2019. About 1.5 million people are infected per year, and about 290,000 people die yearly from hepatitis C–related diseases, mainly from liver cancer and cirrhosis.
Hepatitis C infection rates increased substantially in the 20th century due to a combination of intravenous drug abuse and the reuse of poorly sterilized medical equipment. However, advancements in treatment have led to notable declines in chronic infections and deaths from the virus. As a result, the number of chronic patients receiving treatment worldwide has grown from about 950,000 in 2015 to 9.4 million in 2019. During the same period, hepatitis C deaths declined from about 400,000 to 290,000.
Previously, a 2013 study found high infection rates (>3.5% population infected) in Central and East Asia, North Africa and the Middle East, intermediate infection rates (1.5–3.5%) in South and Southeast Asia, sub-Saharan Africa, Andean, Central and Southern Latin America, Caribbean, Oceania, Australasia and Central, Eastern and Western Europe; and low infection rates (<1.5%) in Asia-Pacific, Tropical Latin America and North America.
Among those chronically infected, the risk of cirrhosis after 20 years varies between studies but has been estimated at ~10–15% for men and ~1–5% for women. The reason for this difference is not known. Once cirrhosis is established, the rate of developing hepatocellular carcinoma is ~1–4% per year. Rates of new infections have decreased in the Western world since the 1990s due to improved screening of blood before transfusion.
In Egypt, under the country's 2030 Vision, hepatitis C infection rates were brought down from 22% in 2011 to just 2% in 2021. The high prevalence in Egypt was believed to be linked to a discontinued mass-treatment campaign for schistosomiasis that used improperly sterilized glass syringes.
In the United States, about 2% of people have chronic hepatitis C. In 2014, an estimated 30,500 new acute hepatitis C cases occurred (0.7 per 100,000 population), an increase over 2010–2012 levels. The number of deaths from hepatitis C rose to 15,800 in 2008, having overtaken HIV/AIDS as a cause of death in the US in 2007. In 2014, it was the single greatest cause of infectious death in the United States. This mortality rate is expected to increase as those infected by transfusion before HCV screening of the blood supply become apparent. In Europe, the percentage of people with chronic infections has been estimated to be between 0.13% and 3.26%.
In the United Kingdom, about 118,000 people were chronically infected in 2019. About half of the people using a needle exchange in London in 2017–18 tested positive for hepatitis C, of whom half were unaware that they had it. As part of a bid to eradicate hepatitis C by 2025, NHS England conducted a large procurement exercise in 2019. Merck Sharp & Dohme, Gilead Sciences, and AbbVie were awarded contracts, together worth up to £1 billion over five years.
The total number of people with this infection is higher in some countries in Africa and Asia. Countries with particularly high rates of infection include Pakistan (4.8%) and China (3.2%).
Since 2014, extremely effective treatments have been available that can eradicate the disease within 8–12 weeks in most people. In 2015, about 950,000 people were treated while 1.7 million new infections occurred, meaning that overall the number of people with HCV increased. These numbers differ by country and improved in 2016, with some countries (mostly high-income countries) achieving higher cure rates than new infection rates. By 2018, twelve countries were on track to achieve HCV elimination. While antiviral agents will curb new infections, it is less clear whether they reduce overall deaths and morbidity. Furthermore, for them to be effective, people need to be aware of their infection: it is estimated that worldwide only 20% of infected people are aware of their infection (in the US, fewer than half).
History
In the mid-1970s, Harvey J. Alter, Chief of the Infectious Disease Section in the Department of Transfusion Medicine at the National Institutes of Health, and his research team demonstrated how most post-transfusion hepatitis cases were not due to hepatitis A or B viruses. Despite this discovery, international research efforts to identify the virus, initially called non-A, non-B hepatitis (NANBH), failed for the next decade. In 1987, Michael Houghton, Qui-Lim Choo, and George Kuo at Chiron Corporation, collaborating with Daniel W. Bradley at the Centers for Disease Control and Prevention, used a novel molecular cloning approach to identify the unknown organism and develop a diagnostic test. In 1988, Alter confirmed the virus by verifying its presence in a panel of NANBH specimens, and Chiron announced its discovery at a Washington, D.C. press conference in May 1988.
At the time, Chiron was in talks with the Japanese health ministry to sell a biotech version of the hepatitis B vaccine. Simultaneously, Emperor Hirohito had developed cancer and required numerous blood transfusions. The Japanese health ministry placed a screening order for Chiron's experimental NANBH test. Chiron's Japanese marketing subsidiary, Diagnostic Systems KK, introduced the term "Hepatitis C" in November 1988 in Tokyo news reports publicizing the testing of the emperor's blood. Chiron sold a screening order to the Japanese health ministry in November 1988, earning the company US$60 million a year. However, because Chiron had not published any of its research and did not make a culture model available to other researchers to verify Chiron's discovery, hepatitis C earned the nickname "The Emperor's New Virus."
In April 1989, the "discovery" of HCV was published in two articles in the journal Science. Chiron filed for several patents on the virus and its diagnosis. A competing patent application by the CDC was dropped in 1990 after Chiron paid $1.9 million to the CDC and $337,500 to Bradley. In 1994, Bradley sued Chiron, seeking to invalidate the patent, have himself included as a co-inventor, and receive damages and royalty income. The court ruled against him, which was sustained on appeal in 1998.
Because of the unique molecular method by which the hepatitis C virus was "isolated", its existence was essentially inferred: Houghton and Kuo's team at Chiron had discovered strong biochemical markers for the virus, and the test proved effective at reducing cases of post-transfusion hepatitis, but in 1992 the San Francisco Chronicle reported that the virus had never been observed under an electron microscope. In 1997, the American FDA approved the first hepatitis C drug on the basis of a surrogate marker called "sustained virological response". In response, the pharmaceutical industry established a nationwide network of "astroturf" patient advocacy groups to raise awareness (and fear) of the disease.
Hepatitis C was finally "discovered" in 2005, when a Japanese team was able to propagate a molecular clone in a cell culture called Huh7. This discovery enabled proper characterization of the viral particle and rapid research into the development of direct-acting antivirals, which replaced early interferon treatments. The first of these, sofosbuvir, was approved on December 6, 2013. These drugs are marketed as "cures"; however, because they were approved on the basis of surrogate markers rather than clinical endpoints such as prolonging life or improving liver health, many experts question their value.
After blood screening began, a notably high hepatitis C prevalence was discovered in Egypt, where an estimated six million individuals had been infected by unsterile needles during a late-1970s mass chemotherapy campaign to eliminate schistosomiasis (snail fever).
On October 5, 2020, Houghton and Alter, together with Charles M. Rice, were awarded the Nobel Prize in Physiology or Medicine for their work.
Society and culture
World Hepatitis Day, held on July 28, is coordinated by the World Hepatitis Alliance. The economic costs of hepatitis C are significant both to the individual and to society. In the United States, the average lifetime cost of the disease was estimated at US$33,407 in 2003, with a liver transplant costing approximately US$200,000. In Canada, the cost of a course of antiviral treatment was as high as CAD 30,000 in 2003, while in the United States costs were between US$9,200 and US$17,600 in 1998. In many areas of the world, people cannot afford treatment with antivirals, as they either lack insurance coverage or their insurance will not pay for antivirals. In the English National Health Service, treatment rates for hepatitis C were higher among less deprived groups in 2010–2012.
Hepatitis C–infected Spanish anaesthetist Juan Maeso was jailed for the maximum possible period of 20 years for infecting 275 patients between 1988 and 1997, because he used the same needles to give opioids to both himself and his patients.
Special populations
Children and pregnancy
Compared with adults, infection in children is much less understood. Worldwide the prevalence of virus infection in pregnant women and children has been estimated to be 1–8% and 0.05–5% respectively. The vertical transmission rate has been estimated to be 3–5% and there is a high rate of spontaneous clearance (25–50%) in the children. Higher rates have been reported for both vertical transmission (18%, 6–36%, and 41%) and prevalence in children (15%).
In developed countries, transmission around the time of birth is now the leading cause of HCV infection. In the absence of the Hepatitis C virus in the mother's blood, transmission is rare. Factors associated with an increased rate of infection include membrane rupture of longer than 6 hours before delivery and procedures exposing the infant to maternal blood. Cesarean sections are not recommended. Breastfeeding is considered safe if the nipples are not damaged. Infection around the time of birth in one child does not increase the risk in a subsequent pregnancy. All genotypes appear to have the same risk of transmission.
HCV infection is frequently found in children who have previously been presumed to have non-A, non-B hepatitis, and cryptogenic liver disease. The presentation in childhood may be asymptomatic or with elevated liver function tests. While the infection is commonly asymptomatic, both cirrhosis with liver failure and hepatocellular carcinoma may occur in childhood.
Immunosuppressed
The rate of hepatitis C in immunosuppressed people is higher. This is particularly true in those with human immunodeficiency virus infection, recipients of organ transplants, and those with hypogammaglobulinemia. Infection in these people is associated with an unusually rapid progression to cirrhosis. People with stable HIV who have never received medication for HCV may be treated with a combination of peginterferon plus ribavirin, with caution regarding possible side effects.
Research
About one hundred medications are in development for hepatitis C. These include vaccines to treat hepatitis, immunomodulators, and cyclophilin inhibitors, among others. These potential new treatments have come about due to a better understanding of the virus. Several vaccines are under development, and some have shown encouraging results.
The combination of sofosbuvir and velpatasvir in one trial (reported in 2015) resulted in cure rates of 99%. More studies are needed to investigate the role of the preventive antiviral medication against HCV recurrence after transplantation.
Animal models
One barrier to finding treatments for hepatitis C is the lack of a suitable animal model. Despite moderate success, research highlights the need for pre-clinical testing in mammalian systems such as the mouse, particularly for developing vaccines for poorer communities. Chimpanzees remain the only animals in which HCV infection can be studied, but their use raises ethical concerns and faces regulatory restrictions. While scientists have used human cell-culture systems such as hepatocytes, questions have been raised about their accuracy in reflecting the body's response to infection.
One aspect of hepatitis C research is reproducing infections in mammalian models. One strategy is to introduce human liver tissue into mice, a technique known as xenotransplantation: chimeric mice are generated and then exposed to HCV infection. These "humanized" mice provide opportunities to study hepatitis C within the three-dimensional architecture of the liver and to evaluate antiviral compounds. Alternatively, generating inbred mice with susceptibility to HCV would simplify the study of mouse models.
Polygraph
A polygraph, often incorrectly referred to as a lie detector test, is a pseudoscientific device or procedure that measures and records several physiological indicators such as blood pressure, pulse, respiration, and skin conductivity while a person is asked and answers a series of questions. The belief underpinning the use of the polygraph is that deceptive answers will produce physiological responses that can be differentiated from those associated with non-deceptive answers; however, there are no specific physiological reactions associated with lying, making it difficult to identify factors that separate those who are lying from those who are telling the truth.
In some countries, polygraphs are used as an interrogation tool with criminal suspects or candidates for sensitive public or private sector employment. Some United States law enforcement and federal government agencies, as well as many police departments, use polygraph examinations to interrogate suspects and screen new employees. Within the US federal government, a polygraph examination is also referred to as a psychophysiological detection of deception examination.
Assessments of polygraphy by scientific and government bodies generally suggest that polygraphs are highly inaccurate, may easily be defeated by countermeasures, and are an imperfect or invalid means of assessing truthfulness. A comprehensive 2003 review by the National Academy of Sciences of existing research concluded that there was "little basis for the expectation that a polygraph test could have extremely high accuracy." The American Psychological Association states that "most psychologists agree that there is little evidence that polygraph tests can accurately detect lies."
Testing procedure
The examiner typically begins polygraph test sessions with a pre-test interview to gain some preliminary information which will later be used to develop diagnostic questions. Then the tester will explain how the polygraph is supposed to work, emphasizing that it can detect lies and that it is important to answer truthfully. Then a "stim test" is often conducted: the subject is asked to deliberately lie and then the tester reports that he was able to detect this lie. Guilty subjects are likely to become more anxious when they are reminded of the test's validity. However, there are risks of innocent subjects being equally or more anxious than the guilty. Then the actual test starts. Some of the questions asked are "irrelevant" ("Is your name Fred?"), others are "diagnostic" questions, and the remainder are the "relevant questions" that the tester is really interested in. The different types of questions alternate. The test is passed if the physiological responses to the diagnostic questions are larger than those during the relevant questions.
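The pass criterion described above, larger physiological responses to the diagnostic (control) questions than to the relevant ones, can be sketched as a simple comparison. The function below is a hypothetical simplification for illustration only; real chart scoring is far more involved, and the response values and scoring rule here are invented:

```python
# Hypothetical simplification of Control Question Technique scoring:
# the subject "passes" if physiological reactions to the diagnostic
# (control) questions exceed those to the relevant questions.

def cqt_outcome(diagnostic_responses, relevant_responses):
    """Compare mean response amplitudes (arbitrary units)."""
    mean = lambda xs: sum(xs) / len(xs)
    d, r = mean(diagnostic_responses), mean(relevant_responses)
    if d > r:
        return "pass"          # stronger reaction to control questions
    if r > d:
        return "fail"          # stronger reaction to relevant questions
    return "inconclusive"

print(cqt_outcome([5.0, 6.2, 5.8], [3.1, 2.9, 3.5]))  # pass
```

This also makes the criticisms below concrete: any source of arousal (anxiety, fear, confusion) that raises the relevant-question responses of an innocent subject pushes the comparison toward "fail".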
Criticisms have been given regarding the validity of the administration of the Control Question Technique. The CQT may be vulnerable to being conducted in an interrogation-like fashion. This kind of interrogation style would elicit a nervous response from innocent and guilty suspects alike. There are several other ways of administering the questions.
An alternative is the Guilty Knowledge Test (GKT), or the Concealed Information Test, which is used in Japan. The administration of this test is given to prevent potential errors that may arise from the questioning style. The test is usually conducted by a tester with no knowledge of the crime or circumstances in question. The administrator tests the participant on their knowledge of the crime that would not be known to an innocent person. For example: "Was the crime committed with a .45 or a 9 mm?" The questions are in multiple choice and the participant is rated on how they react to the correct answer. If they react strongly to the guilty information, then proponents of the test believe that it is likely that they know facts relevant to the case. This administration is considered more valid by supporters of the test because it contains many safeguards to avoid the risk of the administrator influencing the results.
Effectiveness
Assessments of polygraphy by scientific and government bodies generally suggest that polygraphs are inaccurate, may be defeated by countermeasures, and are an imperfect or invalid means of assessing truthfulness. Despite advocates' claims that polygraph tests are between 80% and 90% accurate, the National Research Council has found no evidence of effectiveness. In particular, studies have indicated that the relevant–irrelevant questioning technique is not ideal, as many innocent subjects exhibit a heightened physiological reaction to the crime-relevant questions. The American Psychological Association states that "most psychologists agree that there is little evidence that polygraph tests can accurately detect lies".
In 2002, a review by the National Research Council found that, in populations "untrained in countermeasures, specific-incident polygraph tests can discriminate lying from truth telling at rates well above chance, though well below perfection". The review also warns against generalization from these findings to justify the use of polygraphs—"polygraph accuracy for screening purposes is almost certainly lower than what can be achieved by specific-incident polygraph tests in the field"—and notes some examinees may be able to take countermeasures to produce deceptive results.
In the 1998 US Supreme Court case United States v. Scheffer, the majority stated that "There is simply no consensus that polygraph evidence is reliable [...] Unlike other expert witnesses who testify about factual matters outside the jurors' knowledge, such as the analysis of fingerprints, ballistics, or DNA found at a crime scene, a polygraph expert can supply the jury only with another opinion." The Supreme Court summarized their findings by stating that the use of polygraph was "little better than could be obtained by the toss of a coin." In 2005, the 11th Circuit Court of Appeals stated that "polygraphy did not enjoy general acceptance from the scientific community". In 2001, William Iacono, Professor of Psychology and Neuroscience at the University of Minnesota, concluded:
Although the CQT [Control Question Test] may be useful as an investigative aid and tool to induce confessions, it does not pass muster as a scientifically credible test. CQT theory is based on naive, implausible assumptions indicating (a) that it is biased against innocent individuals and (b) that it can be beaten simply by artificially augmenting responses to control questions. Although it is not possible to adequately assess the error rate of the CQT, both of these conclusions are supported by published research findings in the best social science journals (Honts et al., 1994; Horvath, 1977; Kleinmuntz & Szucko, 1984; Patrick & Iacono, 1991). Although defense attorneys often attempt to have the results of friendly CQTs admitted as evidence in court, there is no evidence supporting their validity and ample reason to doubt it. Members of scientific organizations who have the requisite background to evaluate the CQT are overwhelmingly skeptical of the claims made by polygraph proponents.
Polygraphs measure arousal, which can be affected by anxiety, anxiety disorders such as posttraumatic stress disorder (PTSD), nervousness, fear, confusion, hypoglycemia, psychosis, depression, substance induced states (nicotine, stimulants), substance withdrawal state (alcohol withdrawal) or other emotions; polygraphs do not measure "lies". A polygraph cannot differentiate anxiety caused by dishonesty and anxiety caused by something else.
Since the polygraph does not measure lying, the inventors of the Silent Talker Lie Detector expected that adding a camera to film microexpressions would improve the accuracy of evaluators. According to an article in The Intercept, this did not happen in practice.
US Congress Office of Technology Assessment
In 1983, the US Congress Office of Technology Assessment published a review of the technology and found that
National Academy of Sciences
In 2003, the National Academy of Sciences (NAS) issued a report entitled "The Polygraph and Lie Detection". The NAS found that "overall, the evidence is scanty and scientifically weak", concluding that 57 of the approximately 80 research studies that the American Polygraph Association relied on to reach its conclusions were significantly flawed. These studies did show that specific-incident polygraph testing, in a person untrained in countermeasures, could discern the truth at "a level greater than chance, yet short of perfection". However, due to several flaws, the levels of accuracy shown in these studies "are almost certainly higher than actual polygraph accuracy of specific-incident testing in the field".
When polygraphs are used as a screening tool (in national security matters and for law enforcement agencies, for example), the level of accuracy drops to such a level that "Its accuracy in distinguishing actual or potential security violators from innocent test takers is insufficient to justify reliance on its use in employee security screening in federal agencies." The NAS concluded that the polygraph "may have some utility" but that there is "little basis for the expectation that a polygraph test could have extremely high accuracy".
The NAS conclusions paralleled those of the earlier United States Congress Office of Technology Assessment report "Scientific Validity of Polygraph Testing: A Research Review and Evaluation". Similarly, a report to Congress by the Moynihan Commission on Government Secrecy concluded that "The few Government-sponsored scientific research reports on polygraph validity (as opposed to its utility), especially those focusing on the screening of applicants for employment, indicate that the polygraph is neither scientifically valid nor especially effective beyond its ability to generate admissions".
Despite the NAS finding of a "high rate of false positives," failures to expose individuals such as Aldrich Ames and Larry Wu-Tai Chin, and other inabilities to show a scientific justification for the use of the polygraph, it continues to be employed.
Countermeasures
Several proposed countermeasures designed to pass polygraph tests have been described. There are two major types of countermeasures: "general state" (intending to alter the physiological or psychological state of the subject during the test), and "specific point" (intending to alter the physiological or psychological state of the subject at specific periods during the examination, either to increase or decrease responses during critical examination periods).
General state: asked how he passed the polygraph test, Central Intelligence Agency officer turned KGB mole Aldrich Ames explained that he sought advice from his Soviet handler and received the simple instruction to: "Get a good night's sleep, and rest, and go into the test rested and relaxed. Be nice to the polygraph examiner, develop a rapport, and be cooperative and try to maintain your calm". Additionally, Ames explained, "There's no special magic... Confidence is what does it. Confidence and a friendly relationship with the examiner... rapport, where you smile and you make him think that you like him".
Specific point: other suggested countermeasures include having the subject mentally record the control and relevant questions as the examiner reviews them before the interrogation begins. During the interrogation, the subject is supposed to carefully control their breathing while answering the relevant questions, and to try to artificially increase their heart rate during the control questions, for example by thinking of something scary or exciting, or by pricking themselves with a pointed object concealed somewhere on the body. In this way the results will not show a significant reaction to any of the relevant questions.
Use
Law enforcement agencies and intelligence agencies in the United States are by far the biggest users of polygraph technology. In the United States alone, most federal law enforcement agencies either employ their own polygraph examiners or use the services of examiners employed in other agencies. In 1978, Richard Helms, the eighth Director of Central Intelligence, stated:
Susan McCarthy of Salon said in 2000 that "The polygraph is an American phenomenon, with limited use in a few countries, such as Canada, Israel and Japan."
Armenia
In Armenia, government administered polygraphs are legal, at least for use in national security investigations. The National Security Service (NSS), Armenia's primary intelligence service, requires polygraph examinations of all new applicants.
Australia
Polygraph evidence is currently inadmissible in New South Wales courts under the Lie Detectors Act 1983. Under the same act, it is also illegal to use polygraphs for the purpose of granting employment, insurance, financial accommodation, and several other purposes for which polygraphs may be used in other jurisdictions.
Canada
In Canada, in the 1987 decision R v Béland, the Supreme Court of Canada rejected the use of polygraph results as evidence in court, finding them inadmissible. The polygraph is still used as a tool in the investigation of criminal acts and sometimes employed in the screening of employees for government organizations.
In the province of Ontario, the use of polygraphs by an employer is not permitted. A police force does have the authorization to use a polygraph in the course of the investigation of an offence.
Europe
In a majority of European jurisdictions, polygraphs are generally considered to be unreliable for gathering evidence, and are usually not used by local law enforcement agencies. Polygraph testing is widely seen in Europe to violate the right to remain silent.
In England and Wales a polygraph test can be taken, but the results cannot be used in a court of law to prove a case. However, the Offender Management Act 2007 put in place an option to use polygraph tests to monitor serious sex offenders on parole in England and Wales; these tests became compulsory in 2014 for high risk sexual offenders currently on parole in England and Wales.
The Supreme Court of Poland declared on January 29, 2015, that the use of the polygraph in interrogation of suspects is forbidden by the Polish Code of Criminal Procedure. Its use may be allowed, though, if the suspect has already been charged with a crime and consents to the use of a polygraph. Even then, polygraph results can never be used as a substitute for actual evidence.
As of 2017, the justice ministries and supreme courts of both the Netherlands and Germany had rejected the use of polygraphs.
According to the 2017 book Psychology and Law: Bridging the Gap by psychologists David Canter and Rita Zukauskiene, Belgium was the European country with the most prevalent use of polygraph testing by police, with about 300 polygraphs carried out each year in the course of police investigations. The results are not considered viable evidence in bench trials, but have been used in jury trials.
In Lithuania, "polygraphs have been in use since 1992", with law enforcement utilizing the Event Knowledge Test (a "modification" of the Concealed Information Test) in criminal investigations.
India
In 2008, an Indian court adopted the Brain Electrical Oscillation Signature Profiling test as evidence to convict a woman who was accused of murdering her fiancé. It was the first time that the result of such a test was used as evidence in court.
On May 5, 2010, the Supreme Court of India declared the use of narcoanalysis, brain mapping and polygraph tests on suspects to be illegal and unconstitutional if conducted without the suspect's consent. Article 20(3) of the Indian Constitution states: "No person accused of any offence shall be compelled to be a witness against himself." Polygraph tests are still legal if the defendant requests one.
Israel
The Supreme Court of Israel, in Civil Appeal 551/89 (Menora Insurance v. Jacob Sdovnik), ruled that the polygraph has not been recognized as a reliable device. In other decisions, polygraph results were ruled inadmissible in criminal trials. Polygraph results are only admissible in civil trials if the person being tested agrees to it in advance.
Philippines
The results of polygraph tests are inadmissible in court in the Philippines. The National Bureau of Investigation does, however, use polygraphs in aid of investigation.
United States
In 2018, Wired magazine reported that an estimated 2.5 million polygraph tests were given each year in the United States, with the majority administered to paramedics, police officers, firefighters, and state troopers. The average cost to administer the test is more than $700 and is part of a $2 billion industry.
Polygraph testimony has been admitted by stipulation in 19 states, and is subject to the discretion of the trial judge in federal court. The use of polygraph in court testimony remains controversial, although it is used extensively in post-conviction supervision, particularly of sex offenders. In Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993), the old Frye standard was lifted and all forensic evidence, including polygraph, had to meet the new Daubert standard, under which "underlying reasoning or methodology is scientifically valid and properly can be applied to the facts at issue." While polygraph tests are commonly used in police investigations in the US, no defendant or witness can be forced to undergo the test unless they are under the supervision of the courts. In United States v. Scheffer (1998), the US Supreme Court left it up to individual jurisdictions whether polygraph results could be admitted as evidence in court cases. Nevertheless, it is used extensively by prosecutors, defense attorneys, and law enforcement agencies. In the states of Rhode Island, Massachusetts, Maryland, New Jersey, Oregon, Delaware and Iowa, it is illegal for any employer to order a polygraph either as a condition of gaining employment, or if an employee has been suspected of wrongdoing. The Employee Polygraph Protection Act of 1988 (EPPA) generally prevents employers from using lie detector tests, either for pre-employment screening or during the course of employment, with certain exemptions. As of 2013, about 70,000 job applicants are polygraphed by the federal government on an annual basis. In the United States, the State of New Mexico admits polygraph testing in front of juries under certain circumstances.
In 2010 the NSA produced a video explaining its polygraph process. The video, ten minutes long, is titled "The Truth About the Polygraph" and was posted to the website of the Defense Security Service. Jeff Stein of The Washington Post said that the video portrays "various applicants, or actors playing them—it’s not clear—describing everything bad they had heard about the test, the implication being that none of it is true." AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video. George Maschke, the founder of the website, accused the NSA polygraph video of being "Orwellian".
The polygraph was invented in 1921 by John Augustus Larson, a medical student at the University of California, Berkeley and a police officer of the Berkeley Police Department in Berkeley, California. The polygraph was on the Encyclopædia Britannica 2003 list of greatest inventions, described as inventions that "have had profound effects on human life for better or worse." In 2013, the US federal government began indicting individuals who stated that they were teaching methods on how to defeat a polygraph test. During one of those investigations, upwards of 30 federal agencies were involved in investigations of almost 5,000 people who had various degrees of contact with those being prosecuted or who had purchased books or DVDs on the topic of beating polygraph tests.
Security clearances
In 1995, Harold James Nicholson, a Central Intelligence Agency (CIA) employee later convicted of spying for Russia, had undergone his periodic five-year reinvestigation, in which he showed a strong probability of deception on questions regarding relationships with a foreign intelligence unit. This polygraph test later led to an investigation which resulted in his eventual arrest and conviction. In most cases, however, polygraphs are more of a tool to "scare straight" those who would consider espionage. Jonathan Pollard was advised by his Israeli handlers that he was to resign his job from American intelligence if he was ever told he was subject to a polygraph test. Likewise, John Anthony Walker was advised by his handlers not to engage in espionage until he had been promoted to the highest position for which a polygraph test was not required, to refuse promotion to higher positions for which polygraph tests were required, and to retire when promotion was mandated.
In 1983, CIA employee Edward Lee Howard was dismissed when, during a polygraph screening, he truthfully answered a series of questions admitting to minor crimes such as petty theft and drug abuse. In retaliation for his perceived unjust punishment for minor offenses, he later sold his knowledge of CIA operations to the Soviet Union.
Polygraph tests may not deter espionage. From 1945 to the present, at least six Americans have committed espionage while successfully passing polygraph tests. Notable cases of two men who created a false negative result with the polygraphs were Larry Wu-Tai Chin, who spied for China, and Aldrich Ames, who was given two polygraph examinations while with the CIA, the first in 1986 and the second in 1991, while spying for the Soviet Union/Russia. The CIA reported that he passed both examinations after experiencing initial indications of deception. According to a Senate investigation, an FBI review of the first examination concluded that the indications of deception were never resolved.
Ana Belen Montes, a Cuban spy, passed a counterintelligence scope polygraph test administered by the US Defense Intelligence Agency (DIA) in 1994.
Despite these errors, in August 2008, the DIA announced that it would subject each of its 5,700 prospective and current employees to polygraph testing at least once annually. This expansion of polygraph screening at DIA occurred while DIA polygraph managers ignored documented technical problems discovered in the Lafayette computerized polygraph system. The DIA uses computerized Lafayette polygraph systems for routine counterintelligence testing. The impact of the technical flaws within the Lafayette system on the analysis of recorded physiology and on the final polygraph test evaluation is currently unknown.
In 2012, a McClatchy investigation found that the National Reconnaissance Office was possibly breaching ethical and legal boundaries by encouraging its polygraph examiners to extract personal and private information from US Department of Defense personnel during polygraph tests that purported to be limited in scope to counterintelligence matters. Allegations of abusive polygraph practices were brought forward by former NRO polygraph examiners.
Alternative tests
Most polygraph research has focused on the exam's predictive value regarding a subject's guilt; no empirical theory has been established to explain how a polygraph measures deception. A 2010 study indicated that functional magnetic resonance imaging (fMRI) may help explain the psychological correlates of polygraph exams, and which parts of the brain are active when subjects use artificial memories. Most brain activity occurs in both sides of the prefrontal cortex, which is linked to response inhibition, indicating that deception may involve inhibition of truthful responses. Some researchers believe that reaction time (RT) based tests may replace polygraphs in concealed information detection. RT-based tests differ from polygraphs in stimulus presentation duration, and can be conducted without physiological recording, as the subject's response time is measured via computer. However, researchers have found limitations to these tests: subjects can voluntarily control their reaction time, deception can still occur within the response deadline, and the test itself lacks physiological recording.
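The RT-based concealed information approach described above can be sketched in a few lines: recognition of a crime-relevant "probe" item typically slows a subject's response relative to irrelevant items of the same category. The function name, trial counts, and timing figures below are illustrative assumptions, not part of any validated protocol.

```python
# Sketch of scoring for a reaction-time-based concealed information
# test (illustrative only): a subject who recognizes the probe item
# tends to respond more slowly to it than to irrelevant items.

def mean(xs):
    return sum(xs) / len(xs)

def rt_ci_score(probe_rts_ms, irrelevant_rts_ms):
    """Return the probe-minus-irrelevant mean RT difference in ms.
    A large positive difference suggests recognition of the probe."""
    return mean(probe_rts_ms) - mean(irrelevant_rts_ms)

# Hypothetical trials (milliseconds); real tests use many more trials
# and enforce a response deadline to limit deliberate slowing.
probe = [612, 655, 640, 598]
irrelevant = [501, 520, 495, 510]

diff = rt_ci_score(probe, irrelevant)
print(f"probe - irrelevant mean RT: {diff:.1f} ms")
```

As the text notes, such a score is easy to game: a subject who deliberately slows all responses flattens the difference, which is one reason response deadlines are used.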
History
Earlier societies utilized elaborate methods of lie detection which mainly involved torture. For instance, in the Middle Ages, boiling water was used to detect liars, as it was believed honest men would withstand it better than liars. Early devices for lie detection include an 1895 invention of Cesare Lombroso used to measure changes in blood pressure for police cases, a 1904 device by Vittorio Benussi used to measure breathing, the Mackenzie-Lewis Polygraph first developed by James Mackenzie in 1906 and an abandoned project by American William Moulton Marston which used blood pressure to examine German prisoners of war (POWs). Marston said he found a strong positive correlation between systolic blood pressure and lying.
Marston wrote a second paper on the concept in 1915, when finishing his undergraduate studies. He entered Harvard Law School and graduated in 1918, re-publishing his earlier work in 1917. Marston's main inspiration for the device was his wife, Elizabeth Holloway Marston. According to Marston's son, it was his mother Elizabeth who suggested to him that "when she got mad or excited, her blood pressure seemed to climb" (Lamb, 2001). Although Elizabeth is not listed as Marston's collaborator in his early work, Lamb, Matte (1996), and others refer directly and indirectly to Elizabeth's work on her husband's deception research. She also appears in a picture taken in his polygraph laboratory in the 1920s (reproduced in Marston, 1938).
Despite his predecessors' contributions, Marston styled himself the "father of the polygraph". (Today he is often equally or more noted as the creator of the comic book character Wonder Woman and her Lasso of Truth, which can force people to tell the truth.) Marston remained the device's primary advocate, lobbying for its use in the courts. In 1938 he published a book, The Lie Detector Test, wherein he documented the theory and use of the device. In 1938 he appeared in advertising by the Gillette company claiming that the polygraph showed Gillette razors were better than the competition.
A device recording both blood pressure and breathing was invented in 1921 by John Augustus Larson of the University of California and first applied in law enforcement work by the Berkeley Police Department under its nationally renowned police chief August Vollmer. Further work on this device was done by Leonarde Keeler. As Larson's protege, Keeler updated the device by making it portable and added the galvanic skin response to it in 1939. His device was then purchased by the FBI, and served as the prototype of the modern polygraph.
Several devices similar to Keeler's polygraph version included the Berkeley Psychograph, a blood pressure-pulse-respiration recorder developed by C. D. Lee in 1936 and the Darrow Behavior Research Photopolygraph, which was developed and intended solely for behavior research experiments.
A device which recorded muscular activity accompanying changes in blood pressure was developed in 1945 by John E. Reid, who claimed that greater accuracy could be obtained by making these recordings simultaneously with standard blood pressure-pulse-respiration recordings.
Society and culture
Portrayals in media
Lie detection has a long history in mythology and fairy tales; the polygraph has allowed modern fiction to use a device more easily seen as scientific and plausible. Notable instances of polygraph usage include uses in crime and espionage themed television shows and some daytime television talk shows, cartoons and films. Numerous TV shows have been called Lie Detector or featured the device. The first Lie Detector TV show aired in the 1950s, created and hosted by Ralph Andrews. In the 1960s Andrews produced a series of specials hosted by Melvin Belli. In the 1970s the show was hosted by Jack Anderson. In early 1983 Columbia Pictures Television put on a syndicated series hosted by F. Lee Bailey. In 1998 TV producer Mark Phillips with his Mark Phillips Philms & Telephision put Lie Detector back on the air on the FOX Network—on that program Ed Gelb with host Marcia Clark questioned Mark Fuhrman about the allegation that he "planted the bloody glove". In 2005 Phillips produced Lie Detector as a series for PAX/ION; some of the guests included Paula Jones, Reverend Paul Crouch accuser Lonny Ford, Ben Rowling, Jeff Gannon and Swift Boat Vet, Steve Garner.
In the UK, shows such as The Jeremy Kyle Show used polygraph tests extensively. The show was ultimately canceled when a participant committed suicide shortly after being polygraphed. The guest was berated by Kyle on the show for failing the polygraph, but no other evidence came forward to prove any guilt. Producers later admitted in the inquiry that they were unsure of how accurate the tests were.
In the Fox game show The Moment of Truth, contestants are privately asked personal questions a few days before the show while hooked to a polygraph. On the show, they are asked the same questions in front of a studio audience and members of their family. To advance in the game, they must give a "truthful" answer as determined by the previous polygraph exam.
Daytime talk shows, such as Maury Povich and Steve Wilkos, have used polygraphs to supposedly detect deception in interview subjects on their programs that pertain to cheating, child abuse, and theft.
In episode 93 of the US science show MythBusters, the hosts attempted to fool the polygraph by using pain when answering truthfully, in order to test the notion that polygraphs interpret truthful and non-truthful answers as the same. They also attempted to fool the polygraph by thinking pleasant thoughts when lying and thinking stressful thoughts when telling the truth, to try to confuse the machine. Neither technique was successful: examiner Michael Martin correctly identified each guilty and innocent subject. Martin suggested that when conducted properly, polygraphs are correct 98% of the time, but no scientific evidence has been offered for this.
The history of the polygraph is the subject of the documentary film The Lie Detector, which first aired on American Experience on January 3, 2023.
Hand-held lie detector for US military
A hand-held lie detector has been deployed by the US Department of Defense, according to a 2008 report by investigative reporter Bill Dedman of NBC News. The Preliminary Credibility Assessment Screening System, or PCASS, captures less physiological information than a polygraph and uses an algorithm, rather than the judgment of a polygraph examiner, to decide whether it believes the person is being deceptive or not. The device was first used in Afghanistan by US Army troops. The Department of Defense ordered that its use be limited to non-US persons, in overseas locations only.
Notable cases
Polygraphy has been faulted for failing to trap known spies such as double-agent Aldrich Ames, who passed two polygraph tests while spying for the Soviet Union. Ames failed several tests while at the CIA that were never acted on. Other spies who passed the polygraph include Karl Koecher, Ana Montes, and Leandro Aragoncillo. CIA spy Harold James Nicholson failed his polygraph examinations, which aroused suspicions that led to his eventual arrest. Polygraph examination and background checks failed to detect Nada Nadim Prouty, who was not a spy but was convicted for improperly obtaining US citizenship and using it to obtain a restricted position at the FBI.
The polygraph also failed to catch Gary Ridgway, the "Green River Killer". Ridgway passed a polygraph in 1984, while another suspect allegedly failed one; Ridgway confessed almost 20 years later, when confronted with DNA evidence. Conversely, innocent people have been known to fail polygraph tests. In Wichita, Kansas, in 1986, Bill Wegerle was suspected of murdering his wife Vicki Wegerle because he failed two polygraph tests (one administered by the police, the other conducted by an expert whom Wegerle had hired), although he was neither arrested nor convicted of her death. In March 2004, evidence surfaced connecting her death to the serial killer known as BTK, and in 2005 DNA evidence from the Wegerle murder confirmed that BTK was Dennis Rader, exonerating Wegerle.
Prolonged polygraph examinations are sometimes used as a tool by which confessions are extracted from a defendant, as in the case of Richard Miller, who was persuaded to confess largely by polygraph results combined with appeals from a religious leader. In the Watts family murders, Christopher Watts failed one such polygraph test and subsequently confessed to murdering his wife. In the 2002 disappearance of seven-year-old Danielle van Dam of San Diego, police suspected neighbor David Westerfield; he became the prime suspect when he allegedly failed a polygraph test.
Hollow-point bullet
A hollow-point bullet is a type of expanding bullet which expands on impact with a soft target, transferring most or all of the projectile's energy into the target over a shorter distance.
Hollow-point bullets are used for controlled penetration, where overpenetration could cause collateral damage (such as aboard an aircraft). In target shooting, they are used for greater accuracy due to the larger meplat. They are more accurate and predictable compared to pointed bullets which, despite having a higher ballistic coefficient (BC), are more sensitive to bullet harmonic characteristics and wind deflection.
Plastic-tipped bullets are a type of (rifle) bullet meant to confer the aerodynamic advantage of the Spitzer bullet (for example, see very-low-drag bullet) and the stopping power of hollow-point bullets.
History
Solid lead bullets, when cast from a soft alloy, will often deform and provide some expansion if they hit the target at a high velocity. This, combined with the limited velocity and penetration attainable with muzzleloading firearms, meant there was little need for extra expansion.
The first hollow-point bullets were marketed in the late 19th century as express bullets and were hollowed out to reduce the bullet's mass and provide higher velocities. In addition to providing increased velocities, the hollow also turned out to provide significant expansion, especially when the bullets were cast in a soft lead alloy. Originally intended for rifles, the popular .32-20, .38-40, and .44-40 calibers could also be fired in revolvers.
With the advent of smokeless powder, velocities increased, and bullets got smaller, faster, and lighter. These new bullets (especially in rifles) needed to be jacketed to handle the conditions of firing. The new full metal jacket bullets tended to penetrate straight through a target, causing less internal damage than a bullet that expands and stops in its target. This led to the development of the soft-point bullet and later jacketed hollow-point bullets at the British arsenal in Dum Dum, near Calcutta, around 1890. Designs included the .303" Mk III, IV & V and the .455" Mk III "Manstopper" cartridges. Although such bullet designs were quickly outlawed for use in warfare (in 1898, the Germans complained they breached the Laws of War), they steadily gained ground among hunters due to the ability to control the expansion of the new high velocity cartridges. In modern ammunition, the use of hollow points is primarily limited to handgun ammunition, which tends to operate at much lower velocities than rifle ammunition (rifle bullets commonly exceed 2,000 feet per second). At rifle velocities, a hollow point is not needed for reliable expansion, and most rifle ammunition makes use of tapered jacket designs to achieve the mushrooming effect. At the lower handgun velocities, hollow point designs are generally the only design that will reliably expand.
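The velocity gap between handgun and rifle ammunition noted above translates into a large energy gap, since kinetic energy grows with the square of velocity (E = ½mv²). The bullet weights and velocities below are rough illustrative figures, not specifications of any particular load:

```python
# Muzzle kinetic energy E = 0.5 * m * v^2, illustrating why rifle
# bullets deform reliably at impact velocities where handgun bullets
# need a hollow point to expand. Figures are rough and illustrative.

GRAIN_TO_KG = 6.479891e-5   # 1 grain in kilograms
FPS_TO_MPS = 0.3048         # 1 foot per second in metres per second

def muzzle_energy_joules(weight_grains, velocity_fps):
    m = weight_grains * GRAIN_TO_KG
    v = velocity_fps * FPS_TO_MPS
    return 0.5 * m * v * v

handgun = muzzle_energy_joules(124, 1150)   # e.g. a typical 9 mm load
rifle = muzzle_energy_joules(150, 2800)     # e.g. a typical .308 load

print(f"handgun: {handgun:.0f} J, rifle: {rifle:.0f} J")
```

Doubling velocity quadruples energy, which is why the rifle load here carries roughly seven times the energy of the handgun load despite a similar bullet weight.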
Modern hollow-point bullet designs use many different methods to provide controlled expansion, including:
Jackets that are thinner near the front than the rear to allow easy expansion at the beginning, then a reduced expansion rate.
Partitions in the middle of the bullet core to stop expansion at a given point.
Bonding the lead core to the copper jacket to prevent separation and fragmentation.
Fluted or otherwise weakened jackets to encourage expansion or fragmentation.
Posts in the hollow cavity to cause hydraulic expansion of the bullet in tissue. While very effective against lightly clothed targets, these bullet types tend to plug up with heavy clothing materials, which prevents the bullet from expanding.
Solid copper hollow points, which are far stronger than jacketed lead, and provide controlled, uniform expansion even at high velocities.
Plastic inserts in the hollow, which provide the same profile as a full-metal-jacketed round (such as the Hornady V-Max bullet). The plastic insert initiates the expansion of the bullet by being forced into the hollow cavity upon impact.
Plastic inserts in the hollow to provide the same profile for feeding in semiautomatic and automatic weapons as a full-metal-jacketed round, but that separate on firing, while in flight or in the barrel (such as the German Geco "Action Safety" 9 mm round).
Mechanism
When a hollow-point hunting bullet strikes a soft target, the pressure created in the pit forces the material (usually lead) around the inside edge to expand outwards, increasing the axial diameter of the projectile as it passes through. This process is commonly referred to as mushrooming, because the resulting shape, a widened, rounded nose on top of a cylindrical base, typically resembles a mushroom.
The greater frontal surface area of the expanded bullet limits its depth of penetration into the target and causes more extensive tissue damage along the wound path. Many hollow-point bullets, especially those intended for use at high velocity in centerfire rifles, are jacketed, i.e., a portion of the lead-cored bullet is wrapped in a thin layer of harder metal, such as copper, brass, or mild steel. This jacket provides additional strength to the bullet, increases penetration, and can help prevent it from leaving deposits of lead inside the bore. In controlled expansion bullets, the jacket and other internal design characteristics help to prevent the bullet from breaking apart; a fragmented bullet will not penetrate as far.
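Since frontal area grows with the square of diameter, even modest mushrooming greatly increases the drag a bullet experiences in tissue, which is why expansion limits penetration depth. A minimal sketch, using an assumed 9 mm bullet expanding to roughly 1.5 times its caliber (both figures are illustrative, not measured data):

```python
# Frontal area of a (roughly circular) bullet face grows with the
# square of its diameter, so mushrooming trades penetration depth
# for a wider wound channel.

import math

def frontal_area_mm2(diameter_mm):
    return math.pi * (diameter_mm / 2.0) ** 2

# Illustrative: a 9 mm bullet expanding to ~1.5x caliber.
unexpanded = frontal_area_mm2(9.0)
expanded = frontal_area_mm2(13.5)

print(f"area ratio: {expanded / unexpanded:.2f}x")  # (13.5/9)^2 = 2.25
```

A 1.5x increase in diameter thus yields a 2.25x increase in frontal area, more than doubling the resisting force for a given tissue density.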
Accuracy
Despite their slightly higher drag, hollow-point match bullets can deliver excellent accuracy because of their manufacturing consistency. For bullets designed for target shooting, some, such as the Sierra "MatchKing", incorporate a cavity in the nose, called the meplat. This allows the manufacturer to maintain a greater consistency in tip shape, and thus aerodynamic properties, among bullets of the same design, at the expense of a slightly decreased ballistic coefficient and higher drag. The result is a slightly decreased overall accuracy between bullet trajectory and barrel direction, as well as an increased susceptibility to wind drift, but closer grouping of subsequent shots due to bullet consistency, often increasing the shooter's perceived accuracy.
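The wind-drift susceptibility mentioned above is often estimated with the classic "lag rule": a bullet drifts by roughly the crosswind speed multiplied by its lag, i.e. the difference between its actual time of flight and the time of flight it would have in a vacuum at muzzle velocity. The range, velocity, and time-of-flight figures below are assumed for illustration:

```python
# Lag-rule estimate of crosswind drift:
#   drift = crosswind * (actual_tof - range / muzzle_velocity)
# A lower-BC (higher-drag) bullet has a longer actual time of flight
# for the same range, hence more lag and more drift.

def wind_drift_m(crosswind_mps, range_m, muzzle_velocity_mps, time_of_flight_s):
    vacuum_tof = range_m / muzzle_velocity_mps
    return crosswind_mps * (time_of_flight_s - vacuum_tof)

# Illustrative: 300 m shot, 850 m/s muzzle velocity, 0.40 s measured
# time of flight, 5 m/s full-value crosswind.
drift = wind_drift_m(5.0, 300.0, 850.0, 0.40)
print(f"estimated drift: {drift * 100:.1f} cm")
```

Note the drift depends on the lag, not on the time of flight alone: a faster, slicker bullet with a shorter actual time of flight relative to its vacuum time drifts less in the same wind.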
The manufacturing process of hollow-point bullets also produces a flat, uniformly shaped base on the bullet which is thought to increase accuracy by providing a more consistent piston surface for the expanding gases of the cartridge.
Testing
Terminal ballistics testing of hollow point bullets is generally performed in ballistic gelatin, or some other medium intended to simulate tissue and cause a hollow point bullet to expand. Test results are generally given in terms of expanded diameter, penetration depth, and weight retention. Expanded diameter is an indication of the size of the wound cavity, penetration depth shows whether vital organs could be reached by the bullet, and weight retention indicates how much of the bullet mass fragmented and separated from the main body of the bullet. How these factors are interpreted depends on the intended use of the bullet, and there are no universally agreed-upon ideal metrics.
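The three metrics above are simple ratios of before/after measurements. A minimal sketch, using entirely hypothetical test figures (bullet weight, recovered weight, and expanded diameter are assumptions, not results from any published test):

```python
# Computing the metrics typically reported from gelatin tests.
# All figures are hypothetical.

def weight_retention_pct(initial_grains, recovered_grains):
    """Share of the bullet's original mass still in the main body."""
    return 100.0 * recovered_grains / initial_grains

def expansion_ratio(original_diameter_mm, expanded_diameter_mm):
    """How many calibers wide the recovered bullet measures."""
    return expanded_diameter_mm / original_diameter_mm

# Hypothetical: a 124 gr 9 mm bullet recovered at 118 gr,
# expanded to 14.2 mm, with 38 cm of penetration measured.
retention = weight_retention_pct(124, 118)
ratio = expansion_ratio(9.0, 14.2)
print(f"retention: {retention:.1f}%  expansion: {ratio:.2f}x  penetration: 38 cm")
```

As the text notes, how these numbers are weighed against each other (deep penetration versus wide expansion versus fragmentation) depends entirely on the bullet's intended use.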
Legislation
The Hague Convention of 1899, Declaration III, prohibited the use in international warfare of bullets that easily expand or flatten in the body. It is a common misapprehension that hollow-point ammunition is prohibited by the Geneva Conventions, as the prohibition significantly predates those conventions. The Saint Petersburg Declaration of 1868 banned exploding projectiles of less than 400 grams, along with weapons designed to aggravate the suffering of wounded soldiers or make their death inevitable.
Despite the widespread ban on military use, hollow-point bullets are one of the most common types of bullets used by civilians and police, which is due largely to the reduced risk of bystanders being hit by over-penetrating or ricocheted bullets, and the increased speed of incapacitation.
In many jurisdictions, even ones such as the United Kingdom, where expanding ammunition (like all other ammunition) is only available to a firearms certificate holder, it is illegal to hunt certain types of game with ammunition that does not expand.
United Kingdom
Most ammunition types, including hollow-point bullets, are only allowed to a section 1 firearms certificate (FAC) holder. The FAC holder must have the calibre in question as a valid allowance on their licence. A valid firearms certificate allows the holder to use ball, full metal jacket, hollow point and ballistic-tipped ammunition for range use and vermin control. A firearms certificate will only be issued to any individual who can provide good reason to the police for the possession of firearms and their ammunition. Until recently all expanding ammunition fell under section 5 of the Firearms Act 1968 and was only allowed when conditions were entered onto an FAC by the police. This condition would allow expanding ammunition to be used for:
The lawful shooting of deer
The shooting of vermin or, in the case of carrying on activities in connection with the management of any estate, other wildlife
The humane killing of animals
The shooting of animals for the protection of other animals or humans
Some ammunition types are still prohibited under section 5 of the Firearms Act 1968; examples include ammunition that explodes on impact and any ammunition intended for military use.
Popular calibres used in the UK for vermin, fox and deer control include the .223 Remington, .243 Winchester, .308 Winchester and .22-250, all using hollow-point bullets. Many rimfire calibres, such as .22 Long Rifle, .22 Winchester Magnum Rimfire and .17 Hornady Magnum Rimfire, also use expanding ammunition.
United States
The United States is one of the few major powers that did not agree to Declaration III of the Hague Convention of 1899, and thus is able to openly admit to the use of this kind of ammunition in warfare. However, the United States did ratify the second (1907) Hague Convention, whose Article 23 forbids combatants "to employ arms, projectiles, or material calculated to cause unnecessary suffering", a provision similar to Declaration III of the first Convention. For years the United States military respected this Convention and refrained from the use of expanding ammunition, and even made special FMJ .22 LR ammunition for use in the High Standard pistols issued to OSS agents and the Savage Model 24 .22/.410 combination guns issued in the E series of air crew survival kits. After announcing consideration of using hollow-point ammunition for sidearms, with a possible start date of 2018, the United States Army began production of M1153 special purpose ammunition for the 9×19mm Parabellum with a jacketed hollow-point bullet, for use in situations where limited over-penetration of targets is necessary to reduce collateral damage.
The state of New Jersey bans possession of hollow-point bullets by civilians, except for ammunition kept at one's own dwelling, premises, or other land owned or possessed, or ammunition possessed while hunting, or while traveling to and from hunting, with a hunting license, if otherwise legal for the particular game. The law also requires all hollow-point ammunition to be transported directly from the place of purchase to one's home, premises, or hunting area, or by members of a rifle or pistol club directly to a place of target practice, or directly to an authorized target range from the place of purchase or one's home or premises.
The United States military uses open-tip ammunition in some sniper rifles due to its exceptional accuracy. W. Hays Parks, Colonel, USMC, Chief of the JAG's International Law Branch, has argued that this ammunition is not prohibited by military convention in that the wounds that it produces are similar to full metal jacket ammunition in practice.
Winchester Black Talon scare
In early 1992, Winchester introduced the "Black Talon", a newly designed hollow-point handgun bullet which used a specially designed, reverse-tapered jacket. The jacket was cut at the hollow to intentionally weaken it, and these cuts allowed the jacket to open into six petals upon impact. The thick jacket material kept the tips of the jacket from bending as easily as those of a normal-thickness jacket. The slits that weakened the jacket left triangular shapes in the tip of the jacket, and these triangular sections would end up pointing outward after expansion, leading to the "Talon" name. The bullets were coated with a black, paint-like lubricant called "Lubalox" and loaded into nickel-plated brass cases, which made them visually stand out from other ammunition. While the performance of the Black Talon rounds was not significantly improved over other comparable high-performance hollow-point ammunition, the reverse-taper jacket did provide reliable expansion under a wide range of conditions, and many police departments adopted the round.
Winchester's "Black Talon" product name was eventually used against them. After the high-profile 1993 101 California Street shooting in San Francisco, media response against Winchester was swift. "This bullet kills you better", says one report; "its six razorlike claws unfold on impact, expanding to nearly three times the bullet's diameter". A concern was raised by the president of the American College of Emergency Physicians (ACEP) that the sharp edges of the jacket could cut medical personnel's skin and risk spread of disease. An ACEP spokesman later said he was not aware of any evidence to support this claim.
Winchester responded to the media criticism of the Black Talon line by removing it from the commercial market and only selling it to law enforcement distributors. Winchester has since discontinued the sale of the Black Talon entirely, although Winchester does manufacture nearly identical ammunition under new brand names, the Ranger T-Series and the Supreme Elite Bonded PDX1.
Image
An image is a visual representation. An image can be two-dimensional, such as a drawing, painting, or photograph, or three-dimensional, such as a carving or sculpture. Images may be displayed through other media, including a projection on a surface, activation of electronic signals, or digital displays; they can also be reproduced through mechanical means, such as photography, printmaking, or photocopying. Images can also be animated through digital or physical processes.
In the context of signal processing, an image is a distributed amplitude of color(s). In optics, the term image (or optical image) refers specifically to the reproduction of an object formed by light waves coming from the object.
A volatile image exists or is perceived only for a short period. This may be a reflection of an object by a mirror, a projection of a camera obscura, or a scene displayed on a cathode-ray tube. A fixed image, also called a hard copy, is one that has been recorded on a material object, such as paper or textile.
A mental image exists in an individual's mind as something one remembers or imagines. The subject of an image does not need to be real; it may be an abstract concept such as a graph or function or an imaginary entity. For a mental image to be understood outside of an individual's mind, however, there must be a way of conveying that mental image through the words or visual productions of the subject.
Characteristics
Two-dimensional images
The broader sense of the word 'image' also encompasses any two-dimensional figure, such as a map, graph, pie chart, painting, or banner. In this wider sense, images can also be rendered manually, such as by drawing, the art of painting, or the graphic arts (such as lithography or etching). Additionally, images can be rendered automatically through printing, computer graphics technology, or a combination of both methods.
A two-dimensional image does not need to use the entire visual system to be a visual representation. An example of this is a grayscale ("black and white") image, which uses the visual system's sensitivity to brightness across all wavelengths without taking into account different colors. A black-and-white visual representation of something is still an image, even though it does not fully use the visual system's capabilities.
On the other hand, some processes can be used to create visual representations of objects that are otherwise inaccessible to the human visual system. These include microscopy for the magnification of minute objects, telescopes that can observe objects at great distances, X-rays that can visually represent the interior structures of the human body (among other objects), magnetic resonance imaging (MRI), positron emission tomography (PET scans), and others. Such processes often rely on detecting electromagnetic radiation that occurs beyond the light spectrum visible to the human eye and converting such signals into recognizable images.
Three-dimensional images
Aside from sculpture and other physical activities that can create three-dimensional images from solid material, some modern techniques, such as holography, can create three-dimensional images that are reproducible but intangible to human touch. Some photographic processes can now render the illusion of depth in an otherwise "flat" image, but "3-D photography" (stereoscopy) or "3-D film" are optical illusions that require special devices such as eyeglasses to create the illusion of depth.
Moving images
"Moving" two-dimensional images are actually illusions of movement perceived when still images are displayed in sequence, each image lasting less, and sometimes much less, than a fraction of a second. The traditional standard for the display of individual frames by a motion picture projector has been 24 frames per second (FPS) since at least the commercial introduction of "talking pictures" in the late 1920s, which necessitated a standard for synchronizing images and sounds. Even in electronic formats such as television and digital image displays, the apparent "motion" is actually the result of many individual lines giving the impression of continuous movement.
This phenomenon has often been described as "persistence of vision": a physiological effect of light impressions remaining on the retina of the eye for very brief periods. Even though the term is still sometimes used in popular discussions of movies, it is not a scientifically valid explanation. Other terms emphasize the complex cognitive operations of the brain and the human visual system. "Flicker fusion", the "phi phenomenon", and "beta movement" are among the terms that have replaced "persistence of vision", though no one term seems adequate to describe the process.
Cultural and other uses
Image-making seems to have been common to virtually all human cultures since at least the Paleolithic era. Prehistoric examples of rock art—including cave paintings, petroglyphs, rock reliefs, and geoglyphs—have been found on every inhabited continent. Many of these images seem to have served various purposes: as a form of record-keeping; as an element of spiritual, religious, or magical practice; or even as a form of communication. Early writing systems, including hieroglyphics, ideographic writing, and even the Roman alphabet, owe their origins in some respects to pictorial representations.
Meaning and signification
Images of any type may convey different meanings and sensations for individual viewers, regardless of whether the image's creator intended them. An image may be taken simply as a more or less "accurate" copy of a person, place, thing, or event. It may represent an abstract concept, such as the political power of a ruler or ruling class, a practical or moral lesson, an object for spiritual or religious veneration, or an object—human or otherwise—to be desired. It may also be regarded for its purely aesthetic qualities, rarity, or monetary value. Such reactions can depend on the viewer's context. A religious image in a church may be regarded differently than the same image mounted in a museum. Some might view it simply as an object to be bought or sold. Viewers' reactions will also be guided or shaped by their education, class, race, and other contexts.
The study of emotional sensations and their relationship to any given image falls into the categories of aesthetics and the philosophy of art. While such studies inevitably deal with issues of meaning, another approach to signification was suggested by the American philosopher, logician, and semiotician Charles Sanders Peirce.
"Images" are one type of the broad category of "signs" proposed by Peirce. Although his ideas are complex and have changed over time, the three categories of signs that he distinguished stand out:
The "icon," which relates to an object by resemblance to some quality of the object. A painted or photographed portrait is an icon by virtue of its resemblance to the painting's or photograph's subject. A more abstract representation, such as a map or diagram, can also be an icon.
The "index," which relates to an object by some real connection. For example, smoke may be an index of fire, or the temperature recorded on a thermometer may be an index of a patient's illness or health.
The "symbol," which lacks direct resemblance or connection to an object but whose association is arbitrarily assigned by the creator or dictated by cultural and historical habit, convention, etc. The color red, for example, may connote rage, beauty, prosperity, political affiliation, or other meanings within a given culture or context; the Swedish film director Ingmar Bergman claimed that his use of the color in his 1972 film Cries and Whispers came from his personal visualization of the human soul.
A single image may exist in all three categories at the same time. The Statue of Liberty provides an example. While there have been countless two-dimensional and three-dimensional "reproductions" of the statue (i.e., "icons" themselves), the statue itself exists as
an "icon" by virtue of its resemblance to a human woman (or, more specifically, previous representations of the Roman goddess Libertas or the female model used by the artist Frederic-Auguste Bartholdi).
an "index" representing New York City or the United States of America in general due to its placement in New York Harbor, or with "immigration" from its proximity to the immigration center at Ellis Island.
a "symbol" as a visualization of the abstract concept of "liberty" or "freedom" or even "opportunity" or "diversity".
Critiques of imagery
The nature of images, whether three-dimensional or two-dimensional, created for a specific purpose or only for aesthetic pleasure, has continued to provoke questions and even condemnation at different times and places. In his dialogue, The Republic, the Greek philosopher Plato described our apparent reality as a copy of a higher order of universal forms. As copies of a higher reality, the things we perceive in the world, tangible or abstract, are inevitably imperfect. Book 7 of The Republic offers Plato's "Allegory of the Cave," where ordinary human life is compared to being a prisoner in a darkened cave who believes that shadows projected onto the cave's wall comprise actual reality. Since art is itself an imitation, it is a copy of that copy and all the more imperfect. Artistic images, then, not only misdirect human reason away from understanding the higher forms of true reality, but in imitating the bad behaviors of humans in depictions of the gods, they can corrupt individuals and society.
Echoes of such criticism have persisted across time, accelerating as image-making technologies have developed and expanded immensely since the invention of the daguerreotype and other photographic processes in the mid-19th century. By the late 20th century, works like John Berger's Ways of Seeing and Susan Sontag's On Photography questioned the hidden assumptions of power, race, sex, and class encoded in even realistic images, and how those assumptions and such images may implicate the viewer in the voyeuristic position of a (usually) male viewer. The documentary film scholar Bill Nichols has also studied how apparently "objective" photographs and films still encode assumptions about their subjects.
Images perpetuated in public education, media, and popular culture have a profound impact on the formation of such mental images.
Religious critiques
Despite, or perhaps because of, the widespread use of religious and spiritual imagery worldwide, the making of images and the depiction of gods or religious subjects has been subject to criticism, censorship, and criminal penalties. The Abrahamic religions (Judaism, Christianity, and Islam) have all had admonitions against the making of images, even though the extent of that proscription has varied with time, place, and sect or denomination of a given religion. In Judaism, one of the Ten Commandments given by God to Moses on Mount Sinai forbids the making of "any graven image, or any likeness [of any thing] that [is] in heaven above, or that [is] in the earth beneath, or that [is] in the water under the earth." In Christian history, periods of iconoclasm (the destruction of images, especially those with religious meanings or connotations) have broken out from time to time, and some sects and denominations have rejected or severely limited the use of religious imagery. Islam tends to discourage religious depictions, sometimes quite rigorously, and often extends that to other forms of realistic imagery, favoring calligraphy or geometric designs instead. Depending on time and place, photographs and broadcast images in Islamic societies may be less subject to outright prohibition. In any religion, restrictions on image-making are especially targeted to avoid depictions of "false gods" in the form of idols. In recent years, militant extremist groups such as the Taliban and ISIS have destroyed centuries-old artifacts, especially those associated with other religions.
In culture
Virtually all cultures have produced images and applied different meanings or applications to them. The loss of knowledge about the context and connection of an image to its object is likely to result in different perceptions and interpretations of the image and even of the original object itself.
Through human history, one dominant form of imagery has been in relation to religion and spirituality. Such images, whether in the form of idols that are objects of worship or that represent some other spiritual state or quality, have a different status as artifacts when copies of such images sever links to the spiritual or supernatural. The German philosopher and essayist Walter Benjamin brought particular attention to this point in his 1935 essay "The Work of Art in the Age of Mechanical Reproduction."
Benjamin argues that the mechanical reproduction of images, which had accelerated through photographic processes in the previous one hundred years or so, inevitably degrades the "authenticity" or quasi-religious "aura" of the original object. One example is Leonardo da Vinci's Mona Lisa, originally painted as a portrait, but much later, with its display as an art object, it developed a "cult" value as an example of artistic beauty. Following years of various reproductions of the painting, the portrait's "cult" status has little to do with its original subject or the artistry. It has become famous for being famous, while at the same time, its recognizability has made it a subject to be copied, manipulated, satirized, or otherwise altered in forms ranging from Marcel Duchamp's L.H.O.O.Q. to Andy Warhol's multiple silk-screened reproductions of the image.
In modern times, the development of "non-fungible tokens" (NFTs) has been touted as an attempt to create "authentic" or "unique" images that have a monetary value, existing only in digital format. This assumption has been widely debated.
Other considerations
The development of synthetic acoustic technologies and the creation of sound art have led to considering the possibilities of a sound-image made up of irreducible phonic substance beyond linguistic or musicological analysis.
Still or moving
A still image is a single static image. This phrase is used in photography, visual media, and the computer industry to emphasize that one is not talking about movies, or in very precise or pedantic technical writing such as a standard.
A moving image is typically a movie (film) or video, including digital video. It could also be an animated display, such as a zoetrope.
A still frame is a still image derived from one frame of a moving one. In contrast, a film still is a photograph taken on the set of a movie or television program during production, used for promotional purposes.
In image processing, a picture function is a mathematical representation of a two-dimensional image as a function of two spatial variables. The function f(x,y) describes the intensity of the point at coordinates (x,y).
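A digital grayscale image can be viewed as a sampled picture function: intensity values indexed by two integer spatial coordinates. A minimal sketch (the gradient image here is invented purely for illustration):

```python
# A sampled picture function f(x, y): intensity at integer coordinates.
width, height = 4, 3

# Build a simple horizontal gradient as a nested list of intensities (0-255).
image = [[round(x * 255 / (width - 1)) for x in range(width)]
         for y in range(height)]

def f(x, y):
    """Picture function: intensity of the pixel at coordinates (x, y)."""
    return image[y][x]

print(f(0, 0))  # leftmost column: 0
print(f(3, 2))  # rightmost column: 255
```

In continuous formulations the domain is a region of the plane rather than a grid, but the idea is the same: each point maps to an intensity.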
Literature
In literature, a "mental image" may be developed through words and phrases to which the senses respond. It involves picturing an image mentally, also called imagining, hence imagery. It can be both figurative and literal.
Bell pepper
The bell pepper (also known as sweet pepper, pepper, capsicum or, in some parts of the US Midwest, mango) is the fruit of plants in the Grossum Group of the species Capsicum annuum. Cultivars of the plant produce fruits in different colors, including red, yellow, orange, green, white, chocolate, candy cane striped, and purple. Bell peppers are sometimes grouped with less pungent chili varieties as "sweet peppers". While they are botanically fruits—classified as berries—they are commonly used as a vegetable ingredient or side dish. Other varieties of the genus Capsicum are categorized as chili peppers when they are cultivated for their pungency, including some varieties of Capsicum annuum.
Peppers are native to Mexico, Central America, the Caribbean and northern South America. Pepper seeds were imported to Spain in 1493 and then spread through Europe and Asia. Bell peppers prefer warm, moist soil.
Nomenclature
The name pepper was given by Europeans when Christopher Columbus brought the plant back to Europe. At that time, black pepper (peppercorns), from the unrelated plant Piper nigrum originating from India, was a highly prized condiment. The name pepper was applied in Europe to all known spices with a hot and pungent taste and was therefore extended to the genus Capsicum when it was introduced from the Americas. The commonly used alternative name chile is of Mexican origin, from the Nahuatl word chilli.
The terms bell pepper (US, Canada, Philippines), pepper or sweet pepper (UK, Ireland, Canada, South Africa, Zimbabwe), and capsicum (Australia, Bangladesh, India, Malaysia, New Zealand, Pakistan and Sri Lanka) are often used for any of the large bell-shaped peppers, regardless of their color. The fruit is simply referred to as a "pepper", or additionally by color ("green pepper" or red, yellow, orange, purple, brown, black). In the Midland region of the U.S., bell peppers, either fresh or when stuffed and pickled, are sometimes called mangoes.
In some languages, the term paprika, which has its roots in the word for pepper, is used for both the spice and the fruit – sometimes referred to by their color (for example groene paprika, gele paprika, in Dutch, which are green and yellow, respectively). The bell pepper is called "パプリカ" (papurika) or "ピーマン" (pīman, from French piment pronounced with a silent 't') in Japan. In Switzerland, the fruit is mostly called peperone, which is the Italian name of the fruit. In France, it is called poivron, with the same root as poivre (meaning "pepper") or piment. In Spain it is called pimiento morrón, the masculine form of the traditional spice, pimienta and "morrón" (snouted) referring to its general shape. In South Korea, the word "피망" (pimang from the French piment) refers to green bell peppers, whereas "파프리카" (papeurika, from paprika) refers to bell peppers of other colors. In Sri Lanka, both the bell pepper and the banana pepper are referred to as a "capsicum" since the bell pepper has no Sinhalese translation. In Argentina and Chile, it is called "morrón".
Colors
The most common colors of bell peppers are green, yellow, orange and red. Other colors include brown, white, lavender, and dark purple, depending on the variety. Most typically, unripe fruits are green or, less commonly, pale yellow or purple. Red bell peppers are simply ripened green peppers, although the Permagreen variety maintains its green color even when fully ripe. Therefore, mixed colored peppers also exist during parts of the ripening process.
Uses
Culinary
Like the tomato, bell peppers are botanical fruits and culinary vegetables. Pieces of bell pepper are commonly used in garden salads and as toppings on pizza. There are many varieties of stuffed peppers prepared using hollowed or halved bell peppers. Bell peppers (and other cultivars of Capsicum annuum) may be used in the production of the spice paprika.
Nutrition
A raw red bell pepper is 94% water, 5% carbohydrates, and 1% protein, and contains negligible fat. A 100-gram (3.5-ounce) reference amount supplies 26 calories and is a rich source of vitamin C, containing 158% of the Daily Value (DV), vitamin A (20% DV), and vitamin B6 (23% DV), with moderate contents of riboflavin (12% DV), folate (12% DV), and vitamin E (11% DV). A red bell pepper supplies twice the vitamin C and eight times the vitamin A content of a green bell pepper.
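Each percent figure is just the nutrient amount divided by its reference Daily Value. As a sketch, taking an assumed, approximate folate content of 46 micrograms per 100 g (a figure chosen to be consistent with the ~12% DV cited above) against the 400-microgram folate Daily Value:

```python
# Percent Daily Value: a nutrient amount as a share of its reference DV.
def percent_dv(amount, daily_value):
    return amount / daily_value * 100

# Assumed figure: roughly 46 micrograms of folate per 100 g of raw red
# bell pepper, against a folate Daily Value of 400 micrograms.
print(f"{percent_dv(46, 400):.1f}% DV")  # 11.5% DV
```

Rounded to the nearest whole percent, this matches the 12% DV figure given for folate.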
The bell pepper is the only member of the genus Capsicum that does not produce capsaicin, a lipophilic chemical that can cause a strong burning sensation when it comes in contact with mucous membranes. Bell peppers are thus scored in the lowest level of the Scoville scale, meaning that they are not spicy. This absence of capsaicin is due to a recessive form of a gene that eliminates the compound and, consequently, the "hot" taste usually associated with the rest of the genus Capsicum. This recessive gene is overwritten in the Mexibelle pepper, a hybrid variety of bell pepper that produces small amounts of capsaicin (and is thus mildly pungent). Conversely, a mutant strain of habanero has been bred to create a heatless version called the 'Habanada'. Sweet pepper cultivars produce non-pungent capsaicinoids.
Production
In 2020, global production of bell peppers was 36 million tonnes, led by China with 46% of the total, and secondary production by Mexico, Indonesia, and Turkey. The United States ranks 5th in total production, as it produces approximately 1.6 billion pounds annually.