Cyclopropane
https://en.wikipedia.org/wiki/Cyclopropane
Cyclopropane is the cycloalkane with the molecular formula (CH2)3, consisting of three methylene groups (CH2) linked to each other to form a triangular ring. The small size of the ring creates substantial ring strain in the structure. Cyclopropane itself is mainly of theoretical interest but many of its derivatives - cyclopropanes - are of commercial or biological significance.
Cyclopropane was used as a clinical inhalational anesthetic from the 1930s through the 1980s. The substance's high flammability poses a risk of fire and explosions in operating rooms due to its tendency to accumulate in confined spaces, as its density is higher than that of air.
History
Cyclopropane was discovered in 1881 by August Freund, who also proposed the correct structure for the substance in his first paper. Freund treated 1,3-dibromopropane with sodium, causing an intramolecular Wurtz reaction leading directly to cyclopropane. The yield of the reaction was improved by Gustavson in 1887 with the use of zinc instead of sodium. Cyclopropane had no commercial application until Henderson and Lucas discovered its anaesthetic properties in 1929; industrial production had begun by 1936. In modern anaesthetic practice, it has been superseded by other agents.
Anaesthesia
Cyclopropane was introduced into clinical use by the American anaesthetist Ralph Waters who used a closed system with carbon dioxide absorption to conserve this then-costly agent.
Cyclopropane is a relatively potent, non-irritating and sweet smelling agent with a minimum alveolar concentration of 17.5% and a blood/gas partition coefficient of 0.55. This meant induction of anaesthesia by inhalation of cyclopropane and oxygen was rapid and not unpleasant. However at the conclusion of prolonged anaesthesia patients could suffer a sudden decrease in blood pressure, potentially leading to cardiac dysrhythmia: a reaction known as "cyclopropane shock". For this reason, as well as its high cost and its explosive nature, it was latterly used only for the induction of anaesthesia, and has not been available for clinical use since the mid-1980s.
Cylinders and flow meters were colored orange.
Pharmacology
Cyclopropane is inactive at the GABAA and glycine receptors, and instead acts as an NMDA receptor antagonist. It also inhibits the AMPA receptor and nicotinic acetylcholine receptors, and activates certain K2P channels.
Structure and bonding
The triangular structure of cyclopropane requires the bond angles between carbon-carbon covalent bonds to be 60°. The molecule has D3h molecular symmetry. The C-C distances are 151 pm, shorter than the 153-155 pm typical of C-C single bonds in ordinary alkanes.
Despite their shortness, the C-C bonds in cyclopropane are weakened by 34 kcal/mol vs ordinary C-C bonds. In addition to ring strain, the molecule also has torsional strain due to the eclipsed conformation of its hydrogen atoms. The C-H bonds in cyclopropane are stronger than ordinary C-H bonds as reflected by NMR coupling constants.
Bonding between the carbon centres is generally described in terms of bent bonds. In this model the carbon-carbon bonds are bent outwards so that the inter-orbital angle is 104°.
The unusual structural properties of cyclopropane have spawned many theoretical discussions. One theory invokes σ-aromaticity: the stabilization afforded by delocalization of the six electrons of cyclopropane's three C-C σ bonds. This is invoked to explain why the strain of cyclopropane is "only" 27.6 kcal/mol, compared with 26.2 kcal/mol for cyclobutane (with cyclohexane as the strain-free reference, Estr = 0 kcal/mol). This σ-aromaticity stands in contrast to the usual π-aromaticity, which, for example, has a highly stabilizing effect in benzene. Other studies do not support a role for σ-aromaticity in cyclopropane or the existence of an induced ring current; such studies provide an alternative explanation for the energetic stabilization and abnormal magnetic behaviour of cyclopropane.
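As a rough back-of-the-envelope check on these figures, the ring strain can be compared on a per-CH2 basis. The short Python sketch below uses only the strain energies quoted above (cyclohexane taken as the strain-free reference) and is illustrative only.

# Ring strain per CH2 group, using the values quoted above (kcal/mol).
strain_total = {"cyclopropane": 27.6, "cyclobutane": 26.2, "cyclohexane": 0.0}
ring_size = {"cyclopropane": 3, "cyclobutane": 4, "cyclohexane": 6}

for name, e_total in strain_total.items():
    per_ch2 = e_total / ring_size[name]
    print(f"{name}: {e_total:.1f} kcal/mol total, {per_ch2:.2f} kcal/mol per CH2")

# Cyclopropane carries roughly 9.2 kcal/mol per CH2 versus about 6.6 for cyclobutane,
# a smaller penalty per carbon than its 60-degree angles might suggest, which is the
# observation the sigma-aromaticity argument seeks to explain.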
Synthesis
Cyclopropane was first produced via a Wurtz coupling, in which 1,3-dibromopropane was cyclised using sodium. The yield of this reaction can be improved by the use of zinc as the dehalogenating agent and sodium iodide as a catalyst.
BrCH2CH2CH2Br + 2 Na → (CH2)3 + 2 NaBr
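To illustrate the stoichiometry of the coupling above, the following Python sketch computes the theoretical yield of cyclopropane from a hypothetical 100 g batch of 1,3-dibromopropane; the batch size and the assumption of complete conversion are illustrative, not values from the text.

# Theoretical yield for: BrCH2CH2CH2Br + 2 Na -> (CH2)3 + 2 NaBr
# Molar masses from standard atomic weights (g/mol).
M_dibromide = 3 * 12.011 + 6 * 1.008 + 2 * 79.904   # 1,3-dibromopropane, ~201.9
M_cyclopropane = 3 * 12.011 + 6 * 1.008              # ~42.1

mass_start = 100.0                                    # hypothetical 100 g batch
moles = mass_start / M_dibromide                      # 1:1 mole ratio to cyclopropane
print(f"{moles * M_cyclopropane:.1f} g of cyclopropane at complete conversion")  # ~20.8 g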
The preparation of cyclopropane rings is referred to as cyclopropanation.
Reactions
Owing to the increased π-character of its C-C bonds, cyclopropane is often assumed to add bromine to give 1,3-dibromopropane, but this reaction proceeds poorly. Hydrohalogenation with hydrohalic acids gives linear 1-halopropanes. Substituted cyclopropanes also react, following Markovnikov's rule.
Cyclopropane and its derivatives can oxidatively add to transition metals, in a process referred to as C–C activation.
Safety
Cyclopropane is highly flammable. However, despite its strain energy it does not exhibit explosive behavior substantially different from other alkanes.
Cyclohexane
https://en.wikipedia.org/wiki/Cyclohexane
Cyclohexane is a cycloalkane with the molecular formula C6H12. Cyclohexane is non-polar. Cyclohexane is a colourless, flammable liquid with a distinctive detergent-like odor, reminiscent of cleaning products (in which it is sometimes used). Cyclohexane is mainly used for the industrial production of adipic acid and caprolactam, which are precursors to nylon.
Cyclohexyl (C6H11) is the alkyl substituent of cyclohexane and is abbreviated Cy.
Production
Cyclohexane is one of the components of naphtha, from which it can be extracted by advanced distillation methods. Distillation is usually combined with isomerization of methylcyclopentane, a similar component extracted from naphtha by similar methods. Together, these processes cover only a minority (15-20%) of modern industrial demand and are complemented by synthesis.
Modern industrial synthesis
On an industrial scale, cyclohexane is produced by hydrogenation of benzene in the presence of a Raney nickel catalyst. Producers of cyclohexane account for approximately 11.4% of global demand for benzene. The reaction is highly exothermic, with ΔH(500 K) = -216.37 kJ/mol. Dehydrogenation commences noticeably above 300 °C, reflecting the favorable entropy for dehydrogenation.
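The balance between the exothermic enthalpy and the entropy penalty of hydrogenation can be sketched with a crude Gibbs-energy estimate. The entropy value in the Python sketch below is an assumed round figure (roughly -120 J/(mol·K) per mole of gas consumed, three moles of H2 here), not a number from the text, so the result is only an order-of-magnitude check.

# Crude estimate of the temperature at which hydrogenation of benzene ceases to be
# favorable, i.e. where dehydrogenation of cyclohexane takes over.
dH = -216.37e3   # J/mol, reaction enthalpy quoted above (at 500 K)
dS = -360.0      # J/(mol*K), ASSUMED: about -120 J/(mol*K) per mole of gas consumed

# Hydrogenation is favorable while dG = dH - T*dS < 0; the crossover lies at T = dH/dS.
T_cross = dH / dS
print(f"crossover near {T_cross:.0f} K (about {T_cross - 273.15:.0f} deg C)")
# ~600 K, i.e. roughly 330 deg C, consistent with dehydrogenation becoming noticeable
# above 300 deg C as stated above.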
History of synthesis
Unlike benzene, cyclohexane is not found in natural resources such as coal. For this reason, early investigators synthesized their cyclohexane samples.
Failure
In 1867 Marcellin Berthelot reduced benzene with hydroiodic acid at elevated temperatures.
In 1870, Adolf von Baeyer repeated the reaction and pronounced the same reaction product "hexahydrobenzene".
In 1890 Vladimir Markovnikov believed he was able to distill the same compound from Caucasus petroleum, calling his concoction "hexanaphtene".
Surprisingly, the authentic cyclohexanes later obtained by direct synthesis (described below) boiled about 10 °C higher than either "hexahydrobenzene" or "hexanaphtene". This riddle was solved in 1895 by Markovnikov, N. M. Kishner, and Nikolay Zelinsky when they reassigned "hexahydrobenzene" and "hexanaphtene" as methylcyclopentane, the result of an unexpected rearrangement reaction.
Success
In 1894, Baeyer synthesized cyclohexane starting with a ketonization of pimelic acid followed by multiple reductions.
In the same year, E. Haworth and W.H. Perkin Jr. (1860–1929) prepared it via a Wurtz reaction of 1,6-dibromohexane.
Reactions and uses
Although rather unreactive, cyclohexane undergoes autoxidation to give a mixture of cyclohexanone and cyclohexanol. The cyclohexanone–cyclohexanol mixture, called "KA oil", is a raw material for adipic acid and caprolactam, precursors to nylon. Several million kilograms of cyclohexanone and cyclohexanol are produced annually.
It is used as a solvent in some brands of correction fluid. Cyclohexane is sometimes used as a non-polar organic solvent, although n-hexane is more widely used for this purpose. It is frequently used as a recrystallization solvent, as many organic compounds exhibit good solubility in hot cyclohexane and poor solubility at low temperatures.
Cyclohexane is also used for calibration of differential scanning calorimetry (DSC) instruments, because of a convenient crystal-crystal transition at −87.1 °C.
Cyclohexane vapour is used in vacuum carburizing furnaces, in heat treating equipment manufacture.
Conformation
The 6-vertex ring does not conform to the shape of a perfect flat hexagon. A planar hexagonal conformation would have considerable strain because the C-H bonds would be eclipsed. Therefore, to reduce torsional strain, cyclohexane adopts a three-dimensional structure known as the chair conformation; the two equivalent chair conformations rapidly interconvert at room temperature via a process known as a chair flip. During the chair flip, three other intermediate conformations are encountered: the half-chair, which is the most unstable conformation; the more stable boat conformation; and the twist-boat, which is more stable than the boat but still much less stable than the chair. The chair and twist-boat are energy minima and are therefore conformers, while the half-chair and the boat are transition states and represent energy maxima. The idea that the chair conformation is the most stable structure for cyclohexane was first proposed as early as 1890 by Hermann Sachse, but only gained widespread acceptance much later. In the chair conformation the C-C-C angles are 109.5°, and half of the hydrogens lie roughly in the plane of the ring (equatorial) while the other half are perpendicular to it (axial); this arrangement gives the most stable structure of cyclohexane. The boat conformation also exists, but it interconverts to the more stable chair conformation. If cyclohexane is mono-substituted with a large substituent, the substituent will most likely be found in an equatorial position, as this is the slightly more stable conformation.
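To see why the chair dominates so completely at room temperature, a simple Boltzmann estimate suffices. The chair/twist-boat free-energy gap used in the Python sketch below (about 5.5 kcal/mol) is a commonly quoted approximate value and is an assumption, not a figure from the text.

import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # room temperature, K
dG = 5.5       # kcal/mol, ASSUMED chair -> twist-boat free-energy difference

# Equilibrium population of the twist-boat relative to the chair.
ratio = math.exp(-dG / (R * T))
print(f"twist-boat : chair ~ {ratio:.1e} : 1")   # on the order of 1e-4, i.e. ~0.01%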
Cyclohexane has the lowest angle and torsional strain of all the cycloalkanes; as a result, it is taken as the reference with zero total ring strain.
Solid phases
Cyclohexane has two crystalline phases. The high-temperature phase I, stable between 186 K and the melting point 280 K, is a plastic crystal, which means the molecules retain some rotational degree of freedom. The low-temperature (below 186 K) phase II is ordered. Two other low-temperature (metastable) phases III and IV have been obtained by application of moderate pressures above 30 MPa, where phase IV appears exclusively in deuterated cyclohexane (application of pressure increases the values of all transition temperatures).
Here Z is the number of structural units per unit cell; the unit-cell constants a, b and c were measured at the given temperature T and pressure P.
Alicyclic compound
https://en.wikipedia.org/wiki/Alicyclic%20compound
In organic chemistry, an alicyclic compound contains one or more all-carbon rings, which may be either saturated or unsaturated but do not have aromatic character. Alicyclic compounds may have one or more aliphatic side chains attached.
The simplest alicyclic compounds are the monocyclic cycloalkanes: cyclopropane, cyclobutane, cyclopentane, cyclohexane, cycloheptane, cyclooctane, and so on. Bicyclic alkanes include bicycloundecane, decalin, and housane. Polycyclic alkanes include cubane, basketane, and tetrahedrane.
Spiro compounds have two or more rings that are connected through only one carbon atom.
The mode of ring-closing in the formation of many alicyclic compounds can be predicted by Baldwin's rules.
Otto Wallach, a German chemist, received the 1910 Nobel Prize in Chemistry for his work on alicyclic compounds.
Cycloalkenes
Monocyclic cycloalkenes are cyclopropene, cyclobutene, cyclopentene, cyclohexene, cycloheptene, cyclooctene, and so on. Bicyclic alkenes include norbornene and norbornadiene.
Two more examples are methylenecyclohexane and 1-methylcyclohexene. An exocyclic group lies outside the ring structure; the double bond of methylenecyclohexane, for instance, is exocyclic, whereas that of 1-methylcyclohexene is endocyclic (within the ring). Isotoluenes are a prominent class of compounds with exocyclic double bonds.
The placement of double bonds in many alicyclic compounds can be predicted with Bredt's rule.
Manhattan Bridge
https://en.wikipedia.org/wiki/Manhattan%20Bridge
The Manhattan Bridge is a suspension bridge that crosses the East River in New York City, connecting Lower Manhattan at Canal Street with Downtown Brooklyn at the Flatbush Avenue Extension. Designed by Leon Moisseiff and built by the Phoenix Bridge Company, the bridge has a total length of . The bridge is one of four vehicular bridges directly connecting Manhattan Island and Long Island; the nearby Brooklyn Bridge is just slightly farther west, while the Queensboro and Williamsburg bridges are to the north.
The bridge was proposed in 1898 and was originally called "Bridge No. 3" before being renamed the Manhattan Bridge in 1902. Foundations for the bridge's suspension towers were completed in 1904, followed by the anchorages in 1907 and the towers in 1908. The Manhattan Bridge opened to traffic on December 31, 1909, and began carrying streetcars in 1912 and New York City Subway trains in 1915. The eastern upper-deck roadway was installed in 1922. Streetcars stopped running across the bridge in 1929, and the western upper roadway was finished two years later. The uneven weight of subway trains crossing the Manhattan Bridge caused it to tilt to one side, necessitating an extensive reconstruction between 1982 and 2004.
The Manhattan Bridge was the first suspension bridge to use a Warren truss in its design. It has a main span of between two suspension towers. The deck carries seven vehicular lanes, four on an upper level and three on a lower level, as well as four subway tracks, two each flanking the lower-level roadway. The span is carried by four main cables, which travel between masonry anchorages at either side of the bridge, and 1,400 vertical suspender cables. Carrère and Hastings designed ornamental plazas at both ends of the bridge, including an arch and colonnade in Manhattan that is a New York City designated landmark. The bridge's use of light trusses influenced the design of other long suspension bridges in the early 20th century.
Development
The bridge was the last of the three suspension spans built across the lower East River, following the Brooklyn and Williamsburg bridges. After the City of Greater New York was formed in 1898, the administration of mayor Robert Anderson Van Wyck formed a plan for what became the Manhattan Bridge; these plans were repeatedly revised and were not finalized until after George B. McClellan Jr. became mayor in 1901. From the outset, the bridge was planned to have a central roadway, streetcar tracks, elevated tracks, and sidewalks, and it was to run straight onto an extension of Flatbush Avenue in Brooklyn.
In the earliest plans it was to have been called "Bridge No. 3", but was given the name Manhattan Bridge in 1902. When the name was confirmed in 1904, The New York Times criticized it as "meaningless", lobbied for one after Brooklyn's Wallabout Bay, and railed that the span "would have geographical and historical significance if it were known as the Wallabout Bridge". In 1905, the Times renewed its campaign, stating, "All bridges across the East River are Manhattan bridges. When there was only one, it was well enough to call it the Brooklyn Bridge, or the East River Bridge".
Planning and caissons
The earliest plans for what became the Manhattan Bridge were designed by R. S. Buck. These plans called for a suspension bridge with carbon steel wire cables and a suspended stiffening truss, supported by a pair of towers with eight braced legs. This design would have consisted of a main span of and approaches of each. In early 1901, the city government approved a motion to acquire land for a suspension tower in Brooklyn; the city shortly began soliciting bids for the tower's foundations. The contract for the Brooklyn suspension tower was awarded in May 1901.
The caisson under the tower on the Brooklyn side was installed in March 1902; workers excavated dirt for the foundations from within the caisson, a process that was completed in December 1902. Three workers had died while working on the Brooklyn-side tower's caisson. A plan for the bridge was announced in early 1903. Elevated and trolley routes would use the Manhattan Bridge, and there would be large balconies and enormous spaces within the towers' anchorages. Work on the Manhattan caisson had commenced in January 1903; it was towed to position in July, and the caisson work was completed by January 1904. The foundations were completed in March 1904. A $10 million grant for the bridge's construction was granted in May 1904 with the expectation that work on the bridge would start later that year.
The Municipal Art Commission raised objections to one of the bridge's plans in June 1904, which delayed the start of construction. Another set of plans was unveiled that month by New York City Bridge Commissioner Gustav Lindenthal, in conjunction with Henry Hornbostel. The proposal also called for each of the suspension towers to be made of four columns, to be braced transversely and hinged to the bottom of the abutments longitudinally. The same span dimensions from Buck's plan were used because work on the masonry pier foundations had already begun. Additionally, the towers would have contained Modern French detail, while the anchorages would have been used for functions such as meeting halls. Lindenthal's plan was also rejected due to a dispute over whether his plan, which used eyebars, was better than the more established practice of using wire cables. The Municipal Art Commission voted in September 1904 to use wire cables on the bridge.
Lindenthal was ultimately dismissed and a new design was commissioned from Leon Moisseiff. George Best replaced Lindenthal as the city's bridge commissioner and discarded the eyebar plans in favor of the wire cables. Hornbostel was replaced by Carrère and Hastings as architectural consultants. By late 1904, the disputes over the types of cables had delayed the contract for the bridge's superstructure (composed of its towers and deck). The bridge's completion had been delayed by two years, and its cost had increased by $2 million. The cable dispute was not fully resolved until 1906, when Best's successor James W. Stevenson announced that the bridge would use wire cables.
Anchorages, towers, and approach viaducts
Best reviewed bids for the construction of the anchorages in December 1904. The Williams Engineering Company received the $2 million contract for the anchorages' construction. Construction commenced on the Brooklyn anchorage in February 1905 and on the Manhattan anchorage that April. The foundation subcontractors excavated the foundations of each anchorage using sheet pilings. Barges were used to transport material from the East River to the anchorages' sites. Mixers constructed the masonry for the anchorages at a rate of up to per day. During mid-1905, officials condemned land in Manhattan and Brooklyn for the bridge's approaches; the land acquisition was partially delayed because the contractors rented out houses that were supposed to be demolished. By the end of the year, the city's bridge department was planning to erect streetcar terminal buildings at either end of the bridge.
To avoid the delays that had occurred during the Williamsburg Bridge's construction, Best planned to award a single large contract for the towers and the deck, rather than splitting the work into multiple contracts. He began soliciting bids for the metalwork in July 1905, at which point the bridge was to use of metal. The Pennsylvania Steel Company received the contract in August 1905 after submitting a low bid of $7.248 million, and a competing bidder sued to prevent the contract from being awarded to Pennsylvania Steel. In November, a New York Supreme Court judge ruled that the contract with Pennsylvania Steel was illegal, as the bidding process had been designed to shut out other bidders. Although Best tried to appeal the Supreme Court's decision, the contract was re-advertised anyway; Pennsylvania Steel refused to submit another bid. When Stevenson became the bridge commissioner at the beginning of 1906, he ordered that new bridge specifications be created. Stevenson received bids for the steelwork in May 1906, and the Ryan-Parker Construction Company received the contract the next month, following delays caused by an injunction and threats of lawsuits.
The Ryan-Parker Company hired the Phoenix Bridge Company in September 1906 to fabricate the steelwork. The Phoenix Bridge Company's 2,000 workers began making beams, girders, eyebars, and other parts of the bridge at the firm's factory in Phoenixville, Pennsylvania. The anchorages were less than half complete, in part because of inclement weather and material shortages. That November, the Board of Estimate and Apportionment approved $4 million for land acquisition in Manhattan and $300,000 for land acquisition in Brooklyn. By early 1907, the city had spent over $6 million on the bridge; the bridge's total cost was estimated at $20 million. To speed up the bridge's completion, Manhattan borough president Bird Sim Coler considered implementing night shifts. By February 1907, the Phoenix Bridge Company was manufacturing steel faster than it could be installed, and the steel for the anchorages was done. The company had also begun fabricating beams for the towers. Land acquisition for an extension of Flatbush Avenue to the bridge began in March, and the first steel girders of the towers were lifted in place the next month. The first steel pedestals for the towers were installed on June 26, 1907. The anchorages were nearly done by late 1907; they could not be completed until the cables were finished.
The city government acquired land for the approaches in October 1907; this required the relocation of several hundred families in Brooklyn and nearly 1,000 families in Manhattan. In total, about 145 lots in Brooklyn and 173 lots in Manhattan were obtained for the bridge's approaches and plazas. Some Brooklyn residents requested additional time to relocate. Residents in the path of the Manhattan approach also protested efforts to evict them, though they were relocated at the beginning of December 1907. Later that month, four companies submitted bids for the construction of the bridge's Manhattan and Brooklyn approach spans. John C. Rodgers submitted a low bid of $2.17 million for the viaducts, and Stevenson requested that amount from the Board of Estimate. By the beginning of 1908, most of the land had been cleared, and the suspension towers had been built to above the height of the deck. The Manhattan tower was finished that March, followed by the Brooklyn tower the next month. Land acquisition was nearly done by the middle of that year.
Cables and deck
Andrew McC. Parker of the Ryan-Parker Company had predicted in January 1908 that the cables would be strung within two months. The Roebling & Sons Company started manufacturing the wires for the cables before the towers were finished, while the Glyndon Contracting Company was hired to lay the wires. Around of nickel steel wires were manufactured at the Carbon Steel Works in Pittsburgh. Workers began stringing temporary cables on June 15, 1908; the first wire broke loose while it was being strung, injuring two people. By this time, the construction cost had increased to $22 million. The temporary cables supported temporary footbridges between each tower, which were completed in mid-July. When the footbridges were finished, workers installed guide wires, which were laid as continuous loops. Two guide wheels, one at either end of each guide wire, carried the main cables' wires across the river between each anchorage. These wheels were powered by a motor atop the Brooklyn anchorage. In addition, reels of wire were stored at both ends of the bridge. The guide wheels laid up to of wire every day.
The last wires for the main cables were strung in December 1908. That month, the Board of Estimate and Apportionment hired engineer Ralph Modjeski to review the engineering drawings for the Manhattan Bridge, after the City Club of New York expressed concerns over the bridge's safety. Afterward, the Glyndon Construction Company installed the vertical suspender cables, which were hung from the main cables. By the beginning of 1909, the bridge was planned to open at the end of the year, but the subway tracks, streetcar tracks, and Flatbush Avenue Extension were not complete. Around of red steel girders and floor panels for the bridge's deck had been delivered to a yard in Bayonne, New Jersey. The girders and panels were delivered to the bridge's site starting in February 1909, and the first floor panel in the main span was installed the same month. Each of the girders was hung from a pair of suspender cables, and floor panels were hung between the girders at a rate of four panels a day. It took workers three weeks to install the floor panels, and the last panel was installed on April 7, 1909.
The bridge commissioner received $1 million from the Board of Estimate and Apportionment for the completion of the roadway, subway tracks, and other design details. The trusses and side spans were built after the floor of the main span was completed. Carbon Steel began wrapping the main cables together in May 1909; the wrapping process required of wire, and the company was able to wrap five to seven segments of cables per day. All work on the cables was finished in August 1909, almost exactly a year after the first strand of the first main cable was strung. Workers then installed ornamentation on the tops of the towers and bronze collars on each of the main cables. Modjeski reported that September that the bridge was safe. At the time, the plazas were incomplete, and Flatbush Avenue Extension was unpaved; the bridge commissioner was razing buildings near the Manhattan plaza by that November. The Brooklyn Daily Eagle reported that there was widespread discontent over the fact that streetcar and subway service would not be ready for the bridge's opening.
Operational history
Opening and early history
Stevenson announced at the end of November 1909 that the bridge's roadways would likely open by December 24, although the transit lines and pedestrian walkways were not complete. One hundred prominent Brooklyn citizens walked over the bridge on December 4, 1909; at the time, the subway tracks were unfinished, and there was uncertainty over which company would use the streetcar tracks. Outgoing mayor George B. McClellan Jr. toured the bridge on December 24. The span officially opened on December 31, 1909, at a final cost of $26 million, although work was still incomplete. Initially, motorists had to pay a ten-cent toll, the same as the toll on the Brooklyn Bridge. Empty commercial vehicles tended to use the Manhattan Bridge, while trucks with full loads used the Brooklyn Bridge, since the Manhattan Bridge's wood-block pavement was less sturdy than the Brooklyn Bridge's plank pavement.
A fire on the Brooklyn side damaged the bridge in early 1910, necessitating the replacement of some cables and steel. Though both of the Manhattan Bridge's footpaths were initially closed to the public, the northern footpath opened in July 1910; the southern footpath was scheduled to be opened the next month. Shortly after the Manhattan Bridge opened, the city government conducted a study and found that it had no authority to charge tolls on the Manhattan and Queensboro bridges. Tolls on the Manhattan Bridge, as well as the Queensboro, Williamsburg, and Brooklyn bridges, were abolished in July 1911 as part of a populist policy initiative headed by New York City mayor William Jay Gaynor. Streetcars began running across the bridge in September 1912, and the bridge's subway tracks opened in June 1915. By the mid-1910s, a food market operated under the bridge. Meanwhile, C. J. Sullivan sued the Ryan-Parker Construction Company, claiming that he had helped the company secure the general contract for the bridge. He was awarded just over $300,000 in 1912, an amount that was increased to over $380,000 in 1916.
After the bridge opened, Carrère and Hastings drew up preliminary plans for a Beaux Arts-style entrance to the bridge in Manhattan and a smaller approach on the Brooklyn side. The city's Municipal Art Commission approved a $700,000 plan for the bridge's Manhattan approach in April 1910. The final plans were approved in 1912, and construction began the same year. The city allocated $675,000 for a plaza at the Brooklyn end in March 1913, including a subway tunnel under the plaza, and the Northeastern Construction Company submitted the low bid for the plaza's construction. The arch and colonnade were completed in 1915, while the pylons on the Brooklyn side were installed in November 1916. The bridge approaches cost just over $1.53 million to construct. In an attempt to speed up automotive traffic, in 1918, the New York City Police Department banned horse-drawn vehicles from crossing the bridge toward Brooklyn during the morning rush hour and toward Manhattan during the evening rush hour. One of the two streetcar lines across the bridge was discontinued in 1919.
1920s to 1940s
During late 1920, the bridge's roadway was used as a reversible lane between 7 am and 7 pm each day; this restriction caused heavy congestion. Grover Whalen, the commissioner of Plant and Structures, announced that September that he would request funding to repaint the bridge. The span was repainted during the next year at a cost of $240,000. Meanwhile, the bridge was carrying 27,000 daily vehicles by the early 1920s, and one traffic judge said the lower deck was too narrow to accommodate the increasing traffic levels on the bridge. In March 1922, the city government started constructing the eastern upper-deck roadway at a cost of $300,000. The roadway opened that June. The next month, Whalen banned horse-drawn vehicles from the Manhattan Bridge and motor vehicles from the Brooklyn Bridge. The upper roadway of the Manhattan Bridge was converted to a reversible lane, while the lower roadway carried two-way traffic at all times. Whalen said the restriction would allow both levels to be used to their full capacity; the decision ended up placing additional loads on the bridge.
To reduce congestion at the Manhattan end, left-hand traffic was implemented on the lower level during the 1920s, as most vehicles heading into Manhattan turned left at the end of the bridge. Motorists continued to use the Manhattan Bridge even after the Brooklyn Bridge reopened to motorists in 1925, contributing to heavy congestion during rush hours. At the time, the Brooklyn Bridge carried 10,000 vehicles a day (in part due to its low speed limit), while the Manhattan Bridge carried 60,000 vehicles daily. When the lower level was repaved in early 1927, Manhattan-bound traffic was temporarily banned from the lower level at night. That October, Brooklyn borough president James J. Byrne proposed replacing the Three Cent Line's trolley tracks with a roadway; he estimated that it would cost $9 million to construct a brand-new roadway, while converting the trolley tracks would cost only $600,000. The comptroller approved the plan in September 1928, and the city formally voted to buy the Three Cent Line for just over $200,000 the following month. The Three Cent Line was discontinued in November 1929.
The Three Cent Line tracks were replaced by the western upper-deck roadway. Initially scheduled to be completed by July 1930, the roadway ultimately opened in June 1931 and carried Brooklyn-bound traffic. The eastern upper-deck roadway was converted to carry Manhattan-bound traffic, and the center roadway was turned into a lane for buses and trucks. At the time, nearly 65,000 vehicles used the bridge every day, of which nearly a quarter were buses and trucks. A set of 119 streetlights were installed on the upper level the following year. To increase traffic flow, both upper roadways were temporarily converted to reversible lanes during rush hours in 1934; the lower roadway was repaired, and the bridge was repainted the same year. The city's commissioner of plant and structures also requested $725,000 in federal funds for various repairs. During 1937, the city awarded a contract to repair the bridge's steelwork and raised the railings on the upper roadways. The city government announced in 1938 that it would replace the lower deck's wooden pavement with a steel-and-concrete pavement; the repaving was completed that December. Simultaneously, the railings on the upper roadways were raised again.
As part of a Works Progress Administration project, a ramp at the Brooklyn end of the bridge was widened in 1941, replacing a dangerous reverse curve. By then, 90,000 vehicles a day used the span. An air raid siren was also installed on the bridge during World War II. By the mid-1940s, the Brooklyn approach to the bridge was one of the most congested areas in New York City.
1950s to 1970s
The upper roadways were repaired during 1950. Similar repairs to the lower roadway were postponed until the Brooklyn–Battery Tunnel opened, as the Brooklyn Bridge was also being rebuilt around the same time. To ease congestion, the Manhattan Bridge's western upper roadway began carrying Manhattan-bound traffic during the morning in March 1950. Floodlights and barbed-wire fences were installed at the bases of the bridge's anchorages in 1951, during the Cold War, and the anchorages themselves were sealed to protect against sabotage. Manhattan-bound traffic stopped using the western upper roadway during the morning in August 1952. Instead, two of the three lower-level lanes began carrying Manhattan-bound traffic during the morning; previously, Manhattan-bound vehicles could use only one of the lower-level lanes at all times. By the mid-1950s, there were frequent car accidents on the Manhattan Bridge, which injured 411 people and killed nine people between 1953 and 1955 alone. In addition, the bridge carried nearly 79,000 cars, 18,000 trucks, and 200 buses on an average day.
1950s repair project
The city's public works commissioner, Frederick H. Zurmuhlen, requested in early 1952 that the Board of Estimate hire David B. Steinman to thoroughly examine the Manhattan Bridge, saying its maintenance costs were disproportionately higher than those of the other East River bridges. A beam on the eastern side of the bridge cracked in April 1953 and was fixed within a month. Following the cracked-beam incident, Zurmuhlen asked the city to allocate $2.69 million to repair the bridge, as trains disproportionately used one side of the bridge, causing it to tilt. Two proposals were put forth for the bridge's subway tracks; one plan called for them to be moved to the center of the deck, while another plan called for the construction of an entirely new tunnel for subway trains. The administration of mayor Robert F. Wagner tentatively approved a $30 million renovation of the bridge in July 1954, and a committee of engineers was hired to review alternate proposals for the bridge. Zurmuhlen said the bridge's safety would be compromised within the next decade if subway trains continued to use the bridge.
By February 1955, the city had hired a contractor to repair the Manhattan Bridge's cable bands and hangers for $2.2 million. Before these repairs could begin, engineers surveyed the bridge. When work on the cables began in June, access to the western upper roadway was severely reduced. That September, the eastern upper roadway was closed for repairs; the western upper roadway was used by Manhattan-bound traffic during weekday mornings and carried two-way traffic at other times. The bridge was temporarily closed to all traffic in November 1955. The eastern upper roadway was again closed during the midday in early 1956 for suspender cable repairs, and the whole span was closed during nights in June 1956. All lanes were again open by that August. The city had still not decided whether to move the subway tracks to a double-deck structure in the middle of the bridge, even though that plan would have reduced strain on the cables. For unknown reasons, the tracks were never moved.
Other modifications
Plans for the Brooklyn–Queens Expressway in Brooklyn, which was constructed in the 1950s, included ramps to the Manhattan Bridge. Lane control lights were installed above the bridge's reversible lower-level lanes in early 1958, and fixed red and green lights were installed on the upper-level roadways. The same year, the city spent $50,000 on repairs after two boats collided on the East River, causing an explosion that scorched the bridge. The city announced in 1959 that it would rebuild the upper roadways to accommodate trucks. The Karl Koch Engineering Company received a contract to rebuild the upper roadways; the project was planned to cost $6.377 million. The eastern upper roadway was closed for repairs in September 1960; the project also included fixing the lower deck and building ramps from the Brooklyn-Queens Expressway. After the eastern upper roadway reopened in November 1961, the western upper roadway was closed, and the eastern upper roadway was temporarily used as a reversible lane. Work proceeded several months ahead of schedule.
In conjunction with the upgrades to the upper roadways, in June 1961, New York City parks commissioner Robert Moses proposed demolishing the plazas on both sides and connecting the bridge to new expressways. The bridge would have linked the Lower Manhattan Expressway with the Brooklyn-Queens Expressway, though the former was never built. The city's art commission delayed the demolition of the plazas before ruling that the pylons in the Brooklyn plaza be relocated to the Brooklyn Museum or another suitable location. Wagner said in late 1962 that he would request $2.9 million to rebuild the approaches at both ends of the bridge; the work included a widening of an approach road at the bridge's Manhattan end. The pylons flanking the Brooklyn approach were moved to the Brooklyn Museum in 1963. The western upper roadway was closed for repairs for a year beginning in August 1969. Two of the lower roadway's lanes were closed for four months starting in November 1970 so workers could replace faulty joints.
In 1970, the federal government enacted the Clean Air Act, a series of federal air pollution regulations. As part of a plan by mayor John Lindsay and the federal Environmental Protection Agency, the city government considered implementing tolls on the four East River bridges, including the Manhattan Bridge, in the early 1970s. The plan would have raised money for New York City's transit system and allowed the city to meet the Clean Air Act. Abraham Beame, who became mayor in 1974, refused to implement the tolls, and the United States Congress subsequently moved to forbid tolls on the East River bridges. The United States Department of Transportation determined that the eastern upper roadway of the Manhattan Bridge was partially built with federal funds and, under federal law, could not be tolled.
Late 20th- and early 21st-century renovation
The weight of the subway trains had caused deep and widespread cracks to form in the bridge's floor beams, prompting the city government to replace 300 deteriorated beams during the late 1970s. The deck twisted up to every time a train passed by, and trains had to slow down on the bridge. A New York Times reporter wrote that diagonal cable stays might eventually need to be installed; the city government also contemplated installing support towers under the side spans. The bridge's condition was blamed on the imbalance in the number of trains crossing the bridge, as well as deferred maintenance during the New York City fiscal crisis of the 1970s. In 1979, the New York state government took over control of the Manhattan Bridge and the three other toll-free bridges across the East River. One engineer estimated in 1988 that the bridge would cost $162.6 million to repair.
Late 1970s and 1980s
The state government started inspecting the Manhattan Bridge and five others in 1978. The same year, the United States Congress voted to allocate money to repair the bridge, as well as several others in New York City. After the presidential administration of Ronald Reagan questioned whether the congressional funding should cover the subway tracks' restoration, the U.S. government agreed in 1981 to fund restoration both of the roadways and of the subway tracks. By the early 1980s, the New York City Department of Transportation (NYCDOT) planned to spend $100 million on bridge repairs. The New York City government allocated $10.1 million for preliminary work on the bridge in March 1982, and minor repair work started that year. Workers planned to install brackets and supports under the deck, and they drilled small holes into the lower-level floor beams in an unsuccessful attempt to prevent the beams from cracking further. An overhaul of the bridge began in April 1985, and the city received $60 million in federal funds for the renovations of the Queensboro, Manhattan, and Brooklyn bridges the same year. The north tracks were closed that August, reopening that November after an $8.1 million repair.
The eastern upper roadway was temporarily closed starting in April 1986, and all northbound traffic was shifted to the lower level, as part of a $45 million project to replace the roadway and its steel supports. The north tracks underneath were closed that month. The roadway was originally supposed to reopen within 15 months, but contractors found that one of the anchors for the main cables was far more corroded than anticipated, delaying the eastern roadway's reopening by another 18 months. The renovation of the Manhattan Bridge was behind schedule by the end of 1986, in part because of the corrosion. Legal issues, traffic reroutes, and concerns about the capabilities of the main contractor were also cited as causes for the delays in the renovation. Inspectors subsequently found that twenty of the girders below the lower deck had cracks as much as wide. Due to the cracks on the lower level, in December 1987, inspectors shut one lane of the lower level and banned buses and trucks from the two remaining lower-level lanes. The city government agreed to pay $750,000 to fix the cracks.
In 1988, the NYCDOT published a list of 17 structurally deficient bridges in the city, including the Manhattan Bridge. That year, inspectors identified 73 "flags" or potentially serious defects, compared to the five defects identified in a 1978 inspection. As a result, the general contractor was ousted in August 1988, and the New York State Department of Transportation had to hire another contractor, increasing the project's cost. The eastern roadway of the Manhattan Bridge reopened in December 1988; the north tracks also reopened at that time, and the south tracks were closed. Although the NYCDOT had planned to halt work for 16 months, the western roadway was closed for emergency repairs in February 1989 after two corroded beams sagged. Newsday reported that the western roadway had urgently required repair for almost three years but had remained open to avoid shutting down all four of the bridge's subway tracks at once. The cables, trusses, and subway frame on the eastern half of the bridge had to be repaired, and the lower roadway needed complete replacement. After seven columns supporting the Brooklyn approach were found to be cracked or corroded, these columns were repaired in late 1989.
1990s
By the end of 1990, engineers found that the bridge's support beams had thousands of cracks. Service on the south tracks resumed in December 1990, despite warnings the structure was unsafe; they had to be closed again after the discovery of corroded support beams and missing steel plates. The north-side tracks also had to be closed periodically to repair cracks. In the aftermath of the dispute, two city officials were fired, and the New York City Council's Transportation committee held inquiries on the reopening of the south tracks and the safety of all New York City bridges. They found that the NYCDOT and MTA's lack of cooperation contributed significantly to the deteriorating conditions. There were also allegations that the NYCDOT's transportation commissioner was not properly addressing concerns about the bridge's safety. Starting in January 1991, trucks and buses were banned from the lower roadway, which was also closed for repairs during nights and weekends. Meanwhile, the weight of heavy trucks created holes in the upper roadbed, so a three-ton weight limit was imposed.
The NYCDOT selected the Yonkers Contracting Company as the bridge's main contractor in early 1992, and the firm was awarded a $97.8 million contract that August. City Comptroller Elizabeth Holtzman originally denied the contract to the company because of concerns about corruption, but she was overridden by Mayor David Dinkins, who wanted to complete repairs quickly. The NYCDOT began conducting more frequent inspections of the bridge after inspectors found holes in beams that had been deemed structurally sound during previous inspections. A shantytown at the Manhattan end of the bridge became one of the city's largest homeless encampments before it was razed in 1993. The western upper roadway was closed for reconstruction that year. As part of an experiment, researchers from Mount Sinai Hospital monitored lead levels in Manhattan Bridge workers' blood while the reconstruction took place.
The bridge repairs were repeatedly delayed as the renovation process uncovered more serious structural problems underlying the bridge. The original plans had been to complete the renovations by 1995 for $150 million, but by 1996, the renovation was slated to be complete in 2003 at a cost of $452 million. The western upper roadway did not reopen until 1996.
2000s
By 2001, it was estimated that the renovations had cost $500 million to date, including $260 million for the west side and another $175 million for the east side. At the time, the NYCDOT had set a January 2004 deadline for the renovation. The eastern upper roadway was closed for a renovation starting in 2002. The original pedestrian walkway on the west side of the bridge was reopened in June 2001, having been closed for 20 years. It was shared with bicycles until late summer 2004, when a dedicated bicycle path was opened on the east side of the bridge. The bike path was poorly signed, leading to cyclist and pedestrian conflicts. By the time work on the bridge was completed in 2004, the final cost of the renovation totaled $800 million. The lower-level roadway was then renovated between 2004 and 2008.
The arch and colonnade had also deteriorated, having become covered with graffiti and dirt. The enclosed plaza within the colonnade had been used as a parking lot by the New York City Police Department, while the only remaining portion of the large park surrounding the arch and colonnade, at Canal and Forsyth Streets, had become overgrown with trees. The arch and colonnade themselves had open joints in the stonework, as well as weeds, bushes, and small trees growing at their top. The arch and colonnade were restored starting in the late 1990s, with the restoration being completed in April 2001 for $11 million. The project entailed cleaning the structures and installing 258 floodlights.
Late 2000s to present
To celebrate the bridge's centennial, a series of events and exhibits were organized by the New York City Bridge Centennial Commission in October 2009. These included a ceremonial parade across the Manhattan Bridge on the morning of October 4 and a fireworks display in the evening. In 2009, the bridge was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers. An $834 million project to replace the Manhattan Bridge's suspension cables was announced in 2010. The work was scheduled to take two years.
The lower roadway was permanently reconfigured in July 2015 to carry traffic toward Manhattan only; prior to this change, the lower roadway carried traffic toward Brooklyn for six hours every afternoon. The same year, the NYCDOT began allowing Brooklyn-bound drivers to exit onto Concord Street in Brooklyn at all times; previously, drivers could only exit onto Concord Street during the afternoon rush hours. The Concord Street exit was again closed outside the afternoon rush hour in early 2016. After rubble was found in Brooklyn Bridge Park under the Brooklyn approach in 2018, Skanska was given a contract to repair parts of the bridge at a cost of $75.9 million. The renovation was scheduled to finish in early 2021. The work entailed replacing some fencing, installing some new steel beams on the spans, and refurbishing ornamental elements on the towers. For instance, the spherical finials atop the suspension towers were replaced with cast-iron copies.
A plan for congestion pricing in New York City was approved in mid-2023, allowing the Metropolitan Transportation Authority to toll drivers who enter Manhattan south of 60th Street. Congestion pricing was implemented in January 2025; all Manhattan-bound drivers pay a toll after using the bridge, which varies based on the time of day.
Description
The bridge, including approaches but excluding plazas, is about long. The bridge reaches a maximum height of above mean high water at the middle of the river. The main span between the two suspension towers is long. The side spans, between the anchorages and the suspension towers on either side, are long. When the bridge was built, the Manhattan approach and plaza were quoted as being long, while the Brooklyn approach and plaza were quoted as measuring long. The bridge's dead load is , and its live load is .
The bridge was designed by Leon Moisseiff. The plans for Manhattan Bridge are sometimes mistakenly attributed to Gustav Lindenthal, who was the city's bridge commissioner before he was fired in 1904. The steel was fabricated by the Phoenix Bridge Company.
Deck
The Manhattan Bridge has four vehicle lanes on the upper level, split between two roadways carrying opposite directions of traffic. The southbound roadway to Brooklyn is on the west side of the bridge, while the northbound roadway to Manhattan is on the east side. The lower level has three Manhattan-bound vehicle lanes (formerly reversible until 2015) and four rapid transit/subway tracks, two under each of the upper roadways. Also on the lower level are a walkway on the south (geographically facing west) and a bikeway on the north side (geographically facing east). Originally, there were four streetcar tracks above the four rapid transit tracks. Although both levels could theoretically have accommodated either streetcars or elevated rapid transit, subways could use only the lower level because subway trains would have needed to climb an excessively steep slope to reach the upper level.
The deck is wide. As designed, the lower-level roadway was or wide. The walkway and bikeway were each or wide. The Manhattan-bound (eastern) upper-level roadway is wide, while the Brooklyn-bound (western) roadway is wide; both roadways narrow to at the anchorages. At the Brooklyn end of the south pathway, a staircase leads to the intersection of Jay and High streets. Because the subway tracks are on the outer edges of the deck, torsional stresses arise every time a train crosses the bridge. As built, the bridge sagged by as much as when a train crossed it, and it took about 30 seconds for the deck to return to its normal position after a train had passed. The floor beams under the lower level are thick.
The Manhattan Bridge was the first suspension bridge to employ Josef Melan's deflection theory for deck stiffening. The theory posited that the weight of a suspension bridge's deck, and the downward forces created by vehicles on the bridge, provided stability to the bridge's deck; thus, such a bridge could use lighter trusses. As such, the Manhattan Bridge was the first suspension bridge in the world to use a lightly-webbed weight-saving Warren truss. There are four stiffening trusses, two each flanking the tracks on the north and south sides of the bridge; these trusses measure or deep. Each of the trusses is directly beneath one of the main cables. The centerlines of the inner trusses are apart from each other, while the centerline of each of the outer trusses is spaced from the centerlines of the inner trusses. The bottom of each truss is connected to the steel beams under the lower level, while the top of each truss supports the upper-level roadways. The trusses distribute the bridge's weight between each vertical suspender cable.
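The deflection theory is usually summarized by a linearized equation for the stiffening truss. The form below is a commonly quoted textbook version, given here only as an illustrative sketch (sign conventions and symbols vary between authors, and this is not presented as Moisseiff's exact formulation):

EI\,w''''(x) - (H_w + H_p)\,w''(x) = p(x) + H_p\,y''(x), \qquad y''(x) = -\frac{8f}{L^2}

Here w is the additional deflection of the deck under the live load p, EI is the bending stiffness of the stiffening truss, H_w and H_p are the horizontal cable tensions due to dead and live load, and y is the parabolic dead-load cable profile with sag f over span L. Because the term (H_w + H_p)w'' grows with the dead load, a heavy deck and cable system supply "gravity stiffness" of their own, which is why the truss stiffness EI, and hence the truss weight, can be reduced.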
Towers
The Manhattan Bridge's suspension towers measure from the mean high water mark to the tops of the cables; the ornamental finials atop each tower are above high water. Each tower sits on a masonry pier that measures across and projects above mean high water. The tops of each pier taper to a steel pedestal measuring , from which rise the columns of each tower. The foundations of each tower, consisting of the underwater section of each pier and a caisson below it, descend below mean high water. The caissons measure across. They have concrete walls and contained a working chamber divided into three compartments.
Each tower is made of of steel, much heavier than the towers of similar suspension bridges. The towers are composed of four columns oriented transversely (perpendicularly) to the deck, one each flanking the north and south roadways. The columns measure wide, as measured transversely. The length of each column, as measured laterally, tapers from at the pedestal to at the top. The columns are braced by diagonal steel beams. A publication from 1904 wrote that the central parts of each tower were designed like a "great open arch", making it possible to rebuild either the western or eastern halves of the bridge without affecting the structural integrity of the other half.
The towers contain little decorative detail, except for spherical finials. Each suspension tower contains an iron and copper hood over the pedestrian or bike path on either side, as well as iron cornices just below the tops of the towers. Saddles carry the main cables above the tops of each suspension tower. Each saddle weighs . In contrast to the Williamsburg and Brooklyn bridges (where the saddles are placed on movable rollers), the saddles are fixed in place, as the towers themselves were intended to flex slightly to accommodate the strains placed on each cable. If the bridge was loaded to full capacity, the tops of the towers could bend up to toward the center of the river. The steel beams also expanded by up to just sitting in the sun.
Cables
The Manhattan Bridge contains four main cables, which measure long. They descend from the tops of the suspension towers and help support the deck. The cables weigh a combined and can carry including the weight of the cables themselves. The cables measure either , , or in diameter. Unlike the Williamsburg Bridge (but like other suspension bridges), the wires on the Manhattan Bridge's cables are galvanized to prevent rusting. Each cable consists of 9,472 parallel wires, which are grouped into 37 strands of 256 wires. The wires measure across. The cables themselves are capable of resisting loads of up to .
There are 1,400 vertical suspender cables, which hang from the main cables and hold up the deck. They measure about in diameter and weigh a total of . Each suspender can withstand up to of pressure. There are cable bands at the top of each suspender cable (where they attach to the main cable); the suspenders are attached to the main cables using clamps. The lower parts of the suspender cables pass through the trusses. To reduce chafing on the lower parts of the suspender cables, workers installed wooden buffers between the suspender cables and the trusses after the bridge was completed.
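As a quick consistency check, the counts quoted in this section agree with one another; the only assumption in the Python sketch below is that the suspenders are split evenly among the four main cables.

# Consistency check of the cable figures quoted above.
strands_per_cable = 37
wires_per_strand = 256
print(strands_per_cable * wires_per_strand)   # 9472 wires per main cable, as stated

main_cables = 4
suspenders_total = 1400
print(suspenders_total / main_cables)         # 350.0 suspenders per main cable, assuming an even split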
Anchorages
The cables are attached to stone anchorages on each side, measuring long, wide, and tall. Each anchorage weighs and is filled with of concrete and rubble masonry. Inside the anchorages are 36 anchor bars, nine for each cable. The ends of each strand are attached to the anchor bars, which in turn are attached to eyebars measuring long. There are 37 eyebars connecting each cable to the anchor bars, distributing the loads on the cables across a larger area.
The anchorages were intentionally wider than the deck, providing space for pedestrians to rest; these pedestrian areas are above the ground. The facade of each anchorage is made of concrete and is topped by a colonnade measuring long. Each colonnade is divided vertically into five bays. The arches and colonnades are the only decorative elements of each anchorage. Early proposals for the anchorages called for them to include auditoriums, but this proposal was never executed. In a 1909 article for the Architectural Record, architectural critic Montgomery Schuyler described the anchorages as having "an aspect of Egyptian immobility", and another author in 2006 similarly compared the anchorages to "vast, battered, Egyptian masses".
At the base of the Manhattan anchorage is an arch measuring wide and 46 feet tall, through which Cherry Street passes. Water Street passes through a similar arch in Brooklyn. The intersection of Adams and Water streets had to be relocated to make way for the Brooklyn anchorage. The archway under the Brooklyn anchorage contains a public plaza. The sides of the anchorages have large buttresses that slope upward.
Approach plazas
Carrère and Hastings designed approach plazas on both ends of the bridge. At the time of the bridge's opening, these plazas were meant to conceal views of the Manhattan Bridge from the streets on either end. The Manhattan plaza connects directly with Canal Street and the Bowery, while the Brooklyn end of the bridge continues as Flatbush Avenue (which in turn intersects several other roads at Prospect Park). The city paid Carl Augustus Heber, Charles Cary Rumsey, and Daniel Chester French a combined $41,000 to design sculptures around the approach plazas.
Manhattan plaza
In Manhattan, the bridge terminates at a plaza originally bounded by the Bowery and Bayard, Division, Forsyth, and Canal streets. This plot covered . The arch and colonnade were completed within this plaza in 1915; they surround an elliptical plaza facing northwest toward the Bowery.
Design
The arch and colonnade are made of white, fine-grained Hallowell granite. They are decorated with two groups of allegorical sculptures by Heber and a frieze called "Buffalo Hunt" by Rumsey. The design of the arch and colonnade references the fact that the Manhattan Bridge continues into Brooklyn as Flatbush Avenue, which runs south to the Atlantic Ocean. The arch thus signified the Manhattan Bridge's role as an ocean "gateway". The plaza was influenced by the New York Improvement Plan of 1907, which sought to create plazas and other open spaces at large intersections; a massive circular plaza, connecting the Brooklyn and Manhattan bridges, was never built.
The arch was based on Paris's Porte Saint-Denis. It is one of the city's three remaining triumphal arches, the others being the Washington Square Arch and the Soldiers' and Sailors' Arch. The arch's opening measures high and wide. On the northern side of the arch, the opening is flanked by carvings of classical ships, masks, shields, and oak leaves. The western pier contains the sculptural group Spirit of Commerce, depicting a winged woman flanked by two figures. The eastern pier contains Spirit of Industry, depicting the god Mercury flanked by two figures. The arch's keystone contains a depiction of a bison. Above is the "Buffalo Hunt" frieze, which depicts Native Americans hunting animals while on horseback. The relief is topped by dentils and egg-and-dart ornamentation. The cornice of the arch contains modillions as well as six lion heads. The interior of the arch contains a coffered ceiling. There are rosettes on the arch's soffit. The southern side of the arch, facing Brooklyn, is less ornately decorated but has rusticated stone blocks indicative of a Parisian or Florentine bank. On the southern side, there are decorations of carved lions at the bases of each pier.
The colonnade and plaza were modeled after the one surrounding St. Peter's Square in Vatican City. The colonnade is elliptical and rises to . It is supported by six pairs of Tuscan columns on either side, with each pair of columns flanking rusticated piers inside the colonnade. Above each column is a stone with a classical motif, such as a boat or a cuirass. There is an entablature above the columns, as well as a cornice and balustrade at the top of the colonnade. The entablature contains roundels with floral motifs. The arch and colonnade were initially surrounded by granite retaining walls that contained decorative balustrades surrounding parkland on either side of the arch and colonnade. Only a small segment of parkland remains at Canal and Forsyth streets, while the south side of the park became Confucius Plaza.
Reception and modifications
American Architect and Architecture described the arch and colonnade in 1912 as "worthy of one of the principal gateways of a great modern city". The arch and colonnade were described as a "complete, dignified and monumental ensemble, worthy of one of the principal gateways of a great modern city" in a New York Times article. The Brooklyn Daily Eagle wrote: "The Manhattan Bridge will be not only something to get across the East River upon, but the sight of it will be a joy even to those who have no occasion to cross it." According to The Christian Science Monitor, the plaza's presence "has turned a section of the East Side, in one of its most squalid parts, into a veritable park where children can find on summer evenings a clean open place amid surroundings that will be in many ways the equal of any in New York".
From the bridge's completion, the arch was heavily used by vehicular traffic. When the second Madison Square Garden was being demolished in 1925, there was a proposal to relocate the arena's statue of Diana to the arch, but this did not happen. Part of the colonnade's eastern arm was removed and replaced in the 1970s for the construction of the incomplete Second Avenue Subway. The arch and colonnade were designated a New York City landmark on November 25, 1975. After many years of neglect and several attempts by traffic engineers to remove the structure (including a proposal for the unbuilt Lower Manhattan Expressway that would have required removing the arch), the arch and colonnade were repaired and restored in 2000.
Brooklyn plaza
The Brooklyn approach to the Manhattan Bridge also contained a terraced plaza with balustrades. The Brooklyn plaza was originally bounded by Sands, Bridge, Nassau, and Jay streets. French designed a pair of pylons named Brooklyn and Manhattan on the Brooklyn side of the Manhattan Bridge. These were installed in November 1916. Each pylon measured high and rested on a base off the ground. The statues on each pylon represented French's impressions of life in each borough. The Brooklyn pylon depicted a young woman with a child and symbols of art and progress, while the Manhattan pylon depicted a seated, upright woman with symbols of art and prosperity. There were granite railings and walkways at the base of either pylon.
A bas-relief memorializing former mayor William Jay Gaynor was dedicated at the bridge's Brooklyn plaza in 1927; it was relocated in 1939 to the nearby Brooklyn Bridge Plaza. The pylons were relocated to the Brooklyn Museum in 1963. The pylons never constituted a true portal, even when they were in place. Following their removal, the Brooklyn approach did not contain a formal entrance.
Exit list
Access to the Manhattan Bridge is provided by a series of ramps on both the Manhattan and Brooklyn sides of the river.
Proposed I-478 designation
As early as the 1940s, there had been plans for an expressway running across Manhattan, connecting with the bridge. As part of the Interstate Highway System, the I-478 route number was proposed in 1958 for a branch of the Lower Manhattan Expressway running along the Manhattan Bridge. This highway would have run between I-78 (which would have split to another branch that used the Williamsburg Bridge) and I-278. The state government solicited bids for a ramp connecting the expressway to the bridge's Manhattan end in 1965. The Lower Manhattan Expressway project was canceled in March 1971, and the I-478 designation was applied to the Brooklyn–Battery Tunnel. A fragment of the never-built expressway's onramp still exists above the Manhattan side of the bridge's center roadway.
Public transportation
The bridge was originally intended to carry four Brooklyn Rapid Transit Company (BRT; later Brooklyn–Manhattan Transit Corporation or BMT) subway tracks on the lower level, as well as four trolley tracks on the upper level. The trolley tracks were carried around the Manhattan side's colonnade, while the subway tracks did not emerge from street level until south of the colonnade.
Streetcar and bus service
Before the bridge opened, the BRT and the Coney Island and Brooklyn Rail Road (CI&B) both submitted bids to run streetcar service on the bridge, as did the Triborough Railroad Company. The Manhattan Bridge Three Cent Line received a permit to operate across the bridge in July 1910, despite opposition from the BRT and CI&B. The Three Cent Line still had not begun operating by 1911, when another firm, the Manhattan Bridge Service Company, applied for a franchise to operate streetcars across the bridge. After a subcommittee of the Board of Estimate recommended that the Brooklyn and North River Line receive a franchise, both the Three Cent Line and the Brooklyn and North River Line received franchises to operate across the bridge in mid-1912.
Due to disputes over the franchises, the Three Cent Line did not run across the bridge until September 1912; it carried passengers between the bridge's two terminals. The Brooklyn and North River Line began operating in December 1915, and a bus route started running across the bridge after the Brooklyn and North River Railroad Company stopped operating streetcars across the bridge in October 1919. The Three Cent Line trolley was discontinued in November 1929 and replaced by a bus. A tour bus service, Culture Bus Loop II, began running across the bridge in 1973 and was discontinued in 1982. The B51 bus began running across the bridge in 1985 as part of a pilot program; the route was discontinued in 2010. No MTA Regional Bus Operations routes use the bridge.
Subway service
Four subway tracks are located on the lower deck of the bridge, two on each side of the lower roadway. The two tracks on the west side of the bridge (known as the south tracks) are used by the Q train at all times and the N train at all times except late nights, when it uses the Montague Street Tunnel. The tracks on the east side (known as the north tracks) are used by the D train at all times and the B train on weekdays. For both pairs of tracks, the western track carries southbound trains, and the eastern track carries northbound trains. On the Manhattan side, the south tracks connect to Canal Street and become the express tracks of the BMT Broadway Line, while the north tracks connect to the Chrystie Street Connection through Grand Street and become the express tracks on the IND Sixth Avenue Line. On the Brooklyn side, the two pairs merge under Flatbush Avenue to a large junction with the BMT Fourth Avenue Line and BMT Brighton Line at DeKalb Avenue.
In Brooklyn, the tracks have always connected to the BMT Fourth Avenue Line and the BMT Brighton Line; the junction between the lines was reconstructed in the 1950s. On the Manhattan side, the two north tracks originally connected to the BMT Broadway Line (where the south tracks now connect) while the two south tracks curved south to join the BMT Nassau Street Line towards Chambers Street. As a result of the Chrystie Street Connection, which linked the north tracks to the Sixth Avenue Line upon completion in 1967, the Nassau Street connection was severed. There were also unbuilt plans in the 1960s to have Long Island Rail Road trains use the subway tracks.
Trackage history
The New York City Rapid Transit Commission recommended the construction of a subway line across the Manhattan Bridge in 1905, and this line was approved in 1907 as part of the Nassau Street Loop. Unsuccessful proposals for rapid transit across the bridge included a two-track line for the Interborough Rapid Transit Company and a two-track extension of a four-track BRT elevated line. The New York City Public Service Commission requested permission to start constructing the subway tracks in March 1909. Amid financial difficulties, and uncertainty over what subway lines would connect to the bridge in Brooklyn, the subway tracks were approved in May 1909. The subway tracks on the Manhattan Bridge opened on June 22, 1915, along with the Fourth Avenue Line and the Sea Beach Line. Initially, the north tracks carried trains to Midtown Manhattan via the Broadway Line, while the south tracks carried Sea Beach trains that terminated at Chambers Street.
When the Nassau Street Loop was completed on May 29, 1931, service on the south tracks declined, and traffic disproportionately used the north tracks. Trains from the Sea Beach, Brighton, and West End lines used the north tracks, while the south tracks were used only by short-turning trains from the West End and Culver lines. Approximately three times as many trains were using the north tracks as the south tracks by 1953, and 92 percent of subway trains used the north tracks by 1956. The Chrystie Street Connection opened on November 26, 1967, with a connection to the north tracks; the south tracks were rerouted to the Broadway Line, while the Nassau Line was disconnected from the bridge.
Repairs to the tracks commenced in August 1983, requiring closures of some subway tracks for three months. Further repairs occurred in 1985. The north tracks were closed for a longer-term repair in April 1986. The north tracks were reopened and the south tracks were closed simultaneously in December 1988. A reopening of the south tracks was initially projected for 1995; that year, the north tracks were closed during off-peak hours for six months. The south tracks finally reopened on July 22, 2001, at which point the north tracks were closed again. The south tracks were closed on weekends from April to November 2003. On February 22, 2004, the north tracks were reopened.
Tracks used
1967–1986
1986–1988: North tracks closed
1988–2001: South tracks closed
2001–2005: North tracks closed
2013–2014: Montague Street Tunnel closed
2005–2013, 2014-present
Impact
When the Manhattan Bridge was being developed, the Brooklyn Standard Union described it as "of greater capacity than either the Williamsburg or Brooklyn bridges, yet lighter and more artistic". The Brooklyn Daily Eagle predicted that the bridge's completion would spur the redevelopment of residential areas in Downtown Brooklyn, and the New-York Tribune said that warehouses would be developed in Lower Manhattan when the Manhattan Bridge opened. The bridge's opening significantly reduced patronage on several ferry lines that had traveled between Lower Manhattan and Downtown Brooklyn.
One local civic group predicted that large numbers of Jewish residents would move from Manhattan's Lower East Side to Brooklyn as a result of the bridge's opening. These included numerous Jewish families displaced by the bridge's construction. In addition, numerous industrial and factory buildings were built around the bridge's Brooklyn approach in the 1910s. Some of the land under the bridge's approaches was leased out; for example, a Chinese theater was built under the Manhattan approach in the 1940s, and a shopping mall was built there in the 1980s. The area under the Brooklyn approach became known as Dumbo, short for "Down Under the Manhattan Bridge Overpass", in the late 20th century and became an upscale residential neighborhood by the 2010s.
In the two decades following the Manhattan Bridge's completion, few bridges with longer spans were constructed. Nonetheless, the use of the deflection theory enabled the construction of longer suspension bridges in the early 20th century. Two of the world's largest suspension bridge spans built in the 1930s, the Golden Gate Bridge and the George Washington Bridge, incorporated deflection theory into their designs.
The bridge was the subject of American artist Edward Hopper's 1928 painting Manhattan Bridge Loop.
Gallery
| Technology | Bridges | null |
384886 | https://en.wikipedia.org/wiki/Operating%20theater | Operating theater | An operating theater (also known as an Operating Room (OR), operating suite, operation suite, or Operation Theatre (OT)) is a facility within a hospital where surgical operations are carried out in an aseptic environment.
Historically, the term "operating theater" referred to a non-sterile, tiered theater or amphitheater in which students and other spectators could watch surgeons perform surgery. Contemporary operating rooms are usually devoid of a theater setting, making the term "operating theater" a misnomer in those cases.
Classification of operation theatre
Operating rooms are spacious, in a cleanroom, and well-lit, typically with overhead surgical lights, and may have viewing screens and monitors. Operating rooms are generally windowless, though windows are becoming more prevalent in newly built theaters to provide clinical teams with natural light, and feature controlled temperature and humidity. Special air handlers filter the air and maintain a slightly elevated pressure. The electrical supply has backup systems in case of a blackout. Rooms are supplied with wall suction, oxygen, and possibly other anesthetic gases. Key equipment consists of the operating table and the anesthesia cart. In addition, there are tables to set up instruments. There is storage space for common surgical supplies. There are containers for disposables. Outside the operating room, or sometimes integrated within, is a dedicated scrubbing area that is used by surgeons, anesthetists, ODPs (operating department practitioners), and nurses prior to surgery. An operating room will have a map to enable the terminal cleaner to realign the operating table and equipment to the desired layout during cleaning. Operating rooms are typically supported by an anaesthetic room, a prep room, a scrub area, and a dirty utility room.
Several operating rooms are part of the operating suite that forms a distinct section within a health-care facility. Besides the operating rooms and their wash rooms, it contains rooms for personnel to change, wash, and rest, preparation and recovery rooms, storage and cleaning facilities, offices, dedicated corridors, and possibly other supportive units. In larger facilities, the operating suite is climate- and air-controlled, and separated from other departments so that only authorized personnel have access.
Operating room equipment
The operating table in the center of the room can be raised, lowered, and tilted in any direction.
The operating room lights are over the table to provide bright light, without shadows, during surgery.
The anesthesia machine is at the head of the operating table. This machine has tubes that connect to the patient to assist them in breathing during surgery, and built-in monitors that help control the mixture of gases in the breathing circuit.
The anesthesia cart is next to the anesthesia machine. It contains the medications, equipment, and other supplies that the anesthesiologist may need.
Sterile instruments to be used during surgery are arranged on a stainless steel table.
An electronic monitor records the heart rate and respiratory rate via adhesive patches placed on the patient's chest.
The pulse oximeter machine attaches to the patient's finger with an elastic band aid. It measures the amount of oxygen contained in the blood.
Automated blood pressure measuring machine that automatically inflates the blood pressure cuff on a patient's arm.
An electrocautery machine uses high frequency electrical signals to cauterize or seal off blood vessels and may also be used to cut through tissue with a minimal amount of bleeding.
If surgery requires, a heart-lung machine or other specialized equipment may be brought into the room.
Supplementary portable air decontaminating equipment is sometimes placed in the OR.
Advances in technology now support hybrid operating rooms, which integrate diagnostic imaging systems such as MRI and cardiac catheterization into the operating room to assist surgeons in specialized neurological and cardiac procedures.
Surgeon and assistants' equipment
People in the operating room wear PPE (personal protective equipment) to help prevent bacteria from infecting the surgical incision. This PPE includes the following:
A protective cap covering their hair
Masks over their lower face, covering their mouths and noses with minimal gaps to prevent inhalation of plume or airborne microbes
Shades or glasses over their eyes, including specialized colored glasses for use with different lasers. A fiber-optic headlight may be attached for greater visibility
Sterile gloves; usually latex-free due to latex sensitivity which affects some health care workers and patients
Long gowns, with the bottom of the gown no closer than six inches to the ground.
Protective covers on their shoes
If x-rays are expected to be used, lead aprons/neck covers are used to prevent overexposure to radiation
The surgeon may also wear special glasses to help them see more clearly. The circulating nurse and anesthesiologist will not wear a gown in the OR because they are not a part of the sterile team. They must keep a distance of 12–16 inches from any sterile object, person, or field.
History
Early Modern operating theaters in an educational setting had raised tables or chairs at the center for performing operations surrounded by steep tiers of standing stalls for students and other spectators to observe the case in progress. The surgeons wore street clothes with an apron to protect them from blood stains, and they operated bare-handed with unsterilized instruments and supplies.
The University of Padua began teaching medicine in 1222. It played a leading role in the identification and treatment of diseases and ailments, specializing in autopsies and the inner workings of the body.
In 1884 German surgeon Gustav Neuber implemented a comprehensive set of restrictions to ensure sterilization and aseptic operating conditions through the use of gowns, caps, and shoe covers, all of which were cleansed in his newly invented autoclave. In 1885 he designed and built a private hospital in the woods where the walls, floors and hands, arms and faces of staff were washed with mercuric chloride, instruments were made with flat surfaces and the shelving was easy-to-clean glass. Neuber also introduced separate operating theaters for infected and uninfected patients and the use of heated and filtered air in the theater to eliminate germs. In 1890 surgical gloves were introduced to the practice of medicine by William Halsted. Aseptic surgery was pioneered in the United States by Charles McBurney.
Surviving operating theaters
The oldest surviving operating theater is thought to be the 1804 operating theater of the Pennsylvania Hospital in Philadelphia. The 1821 Ether Dome of the Massachusetts General Hospital is still in use as a lecture hall. Another surviving operating theater is the Old Operating Theatre in London. Built in 1822, it is now a museum of surgical history. The Anatomical Theater at the University of Padua, in Italy, inside Palazzo Bo was constructed and used as a lecture hall for medical students who observed the dissection of corpses, not surgical operations. It was commissioned by the anatomist Girolamo Fabrizio d'Acquapendente in 1595.
| Biology and health sciences | Health facilities | Health |
384968 | https://en.wikipedia.org/wiki/Asiatic%20linsang | Asiatic linsang | The Asiatic linsang (Prionodon) is a genus comprising two species native to Southeast Asia: the banded linsang (Prionodon linsang) and the spotted linsang (Prionodon pardicolor). Prionodon is considered a sister taxon of the Felidae.
Characteristics
The coat pattern of the Asiatic linsang is distinct, consisting of large spots that sometimes coalesce into broad bands on the sides of the body; the tail is banded transversely. It is small in size with a head and body length ranging from and a long tail. The tail is nearly as long as the head and body, and about five or six times as long as the hind foot. The head is elongated with a narrow muzzle, rhinarium evenly convex above, with wide internarial septum, shallow infranarial portion, and philtrum narrow and grooved, the groove extending only about to the level of the lower edge of the nostrils. The delicate skull is long, low, and narrow with a well defined occipital and a strong crest, but there is no complete sagittal crest. The teeth also are more highly specialized, and show an approach to those of Felidae, although more primitive. The dental formula is . The incisors form a transverse, not a curved, line; the first three upper and the four lower pre-molars are compressed and trenchant with a high, sharp, median cusp and small subsidiary cusps in front and behind it. The upper carnassial has a small inner lobe set far forwards, a small cusp in front of the main compressed, high, pointed cusp, and a compressed, blade-like posterior cusp; the upper molar is triangular, transversely set, much smaller than the upper carnassial, and much wider than it is long, so that the upper carnassial is nearly at the posterior end of the upper cheek-teeth as in Felidae.
Systematics
Taxonomic history
With Viverridae (morphological)
Prionodon was denominated and first described by Thomas Horsfield in 1822, based on a linsang from Java. He had placed the linsang under ‘section Prionodontidae’ of the genus Felis, because of similarities to both genera Viverra and Felis. In 1864, John Edward Gray placed the genera Prionodon and Poiana in the tribe Prionodontina, as part of Viverridae. Reginald Innes Pocock initially followed Gray's classification, but the existence of scent glands in Poiana induced him provisionally to regard the latter as a specialized form of Genetta, its likeness to Prionodon being possibly adaptive. Furthermore, the skeletal anatomy of Asiatic linsangs is said to be a mosaic of features of other viverrine-like mammals, as linsangs share cranial, postcranial and dental similarities with falanoucs, the African palm civet, and oyans respectively.
With Felidae (molecular)
DNA analysis based on 29 species of Carnivora, comprising 13 species of Viverrinae and three species representing Paradoxurus, Paguma and Hemigalinae, confirmed Pocock's assumption that the African linsang Poiana represents the sister-group of the genus Genetta. The placement of Prionodon as the sister-group of the family Felidae is strongly supported, and it was proposed that the Asiatic linsangs be placed in the monogeneric family Prionodontidae. There is a physical synapomorphy shared between felids and Prionodon in the presence of the specialized fused sacral vertebrae.
| Biology and health sciences | Other carnivora | Animals |
385156 | https://en.wikipedia.org/wiki/Leopard%20seal | Leopard seal | The leopard seal (Hydrurga leptonyx), also referred to as the sea leopard, is the second largest species of seal in the Antarctic (after the southern elephant seal). Its only natural predator is the orca. It feeds on a wide range of prey including cephalopods, other pinnipeds, krill, fish, and birds, particularly penguins. It is the only species in the genus Hydrurga. Its closest relatives are the Ross seal, the crabeater seal and the Weddell seal, which together are known as the tribe of Lobodontini seals. The name hydrurga means "water worker" and leptonyx is the Greek for "thin-clawed".
Taxonomy
French zoologist Henri Marie Ducrotay de Blainville described the leopard seal in 1820.
Description
The leopard seal has a distinctively long and muscular body shape when compared to other seals. The overall length of adults is and their weight is in the range , making them the same length as the northern walrus but usually less than half the weight. Females are larger than males by up to 50%.
It is perhaps best known for its massive jaws, which allow it to be one of the top predators in its environment. The front teeth are sharp like those of other carnivores, but their molars lock together in a way that allows them to sieve krill from the water in the manner of the crabeater seal. The coat is counter-shaded with a silver to dark gray blend and a distinctive spotted "leopard" coloration pattern dorsally and a paler, white to light gray color ventrally. The whiskers are short and clear.
As "true" seals, they do not have external ears or pinnae, but possess an internal ear canal that leads to an external opening. Their hearing in air is similar to that of a human, but scientists have noted that leopard seals use their ears in conjunction with their whiskers to track prey under water.
Distribution
Leopard seals are pagophilic ("ice-loving") seals, which primarily inhabit the Antarctic pack ice between 50˚S and 80˚S. Sightings of vagrant leopard seals have been recorded on the coasts of Australia, New Zealand (where individuals have been seen even on the foreshores of major cities such as Auckland, Dunedin and Wellington), South America, and South Africa. Fossil evidence suggests that leopard seals inhabited South Africa during the Late Pleistocene. In August 2018, an individual was sighted at Geraldton, on the west coast of Australia. Higher densities of leopard seals are seen in West Antarctica than in other regions.
Most leopard seals remain within the pack ice throughout the year and remain solitary during most of their lives with the exception of a mother and her newborn pup. These matrilineal groups can move further north in the austral winter to sub-antarctic islands and the coastlines of the southern continents to provide care for their pups. While solitary animals may appear in areas of lower latitudes, females rarely breed there. Some researchers believe this is due to safety concerns for the pups. Lone male leopard seals hunt other marine mammals and penguins in the pack ice of antarctic waters. The estimated population of this species ranges from 220,000 to 440,000 individuals, putting leopard seals at "least concern". Although there is an abundance of leopard seals in the Antarctic, they are difficult to survey by traditional audiovisual techniques as they spend long periods of time vocalizing under the water’s surface during the austral spring and summer, when audiovisual surveys are carried out. This habit of submarine vocalizing makes leopard seals naturally suited for acoustic surveys, as are conducted with cetaceans, allowing researchers to gather most of what is known about them.
Behavior
Acoustic behavior
Leopard seals are very vocal underwater during the austral summer.
The male seals produce loud calls (153 to 177 dB re 1 μPa at 1 m) for many hours each day. While singing, the seal hangs upside down and rocks from side to side under the water. Its back is bent, the neck and cranial thoracic region (the chest) are inflated, and its chest pulses as it calls. The male calls can be split into two categories: vocalizing, when the seal is making noises underwater, and silencing, the breathing period at the surface. Adult male leopard seals have only a few stylized calls; some are like bird or cricket trills, while others are low, haunting moans. Scientists have identified five distinctive sounds that male leopard seals make: the high double trill, medium single trill, low descending trill, low double trill, and a hoot with a single low trill. These cadences of calls are believed to be part of a long-range acoustic display for territorial purposes, or to attract a potential mate.
Leopard seals have age-related differences in their calling patterns, much like birds. Younger males produce many different types of variable calls, whereas mature males have only a few, highly stylized calls. Each male can arrange his few call types into individually distinctive sequences (or songs). The acoustic behavior of the leopard seal is believed to be linked to its breeding behaviour: in males, vocalizing coincides with the breeding season, which falls between November and the first week of January, and captive females vocalize when they have elevated reproductive hormones. Female leopard seals also call, usually to attract the attention of a pup after returning from foraging.
Breeding habits
Since leopard seals live in an area difficult for humans to survive in, not much is known about their reproduction and breeding habits. However, it is known that their breeding system is polygynous, meaning that males mate with multiple females during the mating period. Females reach sexual maturity between the ages of three and seven, and can give birth to a single pup during the summer on the floating ice floes of the Antarctic pack ice; males reach sexual maturity around the age of six or seven years. Mating occurs from December to January, shortly after the pups are weaned, when the female seal is in estrus. In preparation for the pup, the female digs a circular hole in the ice as a home for it. A newborn pup weighs around 66 pounds and usually stays with its mother for about a month before being weaned. The male leopard seal does not participate in caring for the pup, and returns to its solitary lifestyle after the breeding season. Most leopard seal breeding takes place on pack ice.
Five research voyages were made to Antarctica in 1985, 1987 and 1997–1999 to study leopard seals. Researchers sighted seal pups from the beginning of November to the end of December and noted about one pup for every three adults; most adults kept away from one another during this season, and those seen in groups showed no sign of interaction. The mortality rate of leopard seal pups within their first year is close to 25%.
Vocalization is thought to be important in breeding, since males are much more vocal around this time. Mating takes place in the water, and then the male leaves the female to care for the pup, which the female gives birth to after an average gestation period of 274 days.
Research shows that, on average, the aerobic dive limit for juvenile seals is around 7 minutes; because krill is found at greater depths during the winter months, juvenile leopard seals do not eat it then, even though it is a major part of older seals' diets. This constraint might occasionally lead to co-operative hunting. Co-operative hunting by leopard seals of Antarctic fur seal pups has been witnessed; it could involve a mother helping her older pup, or female-male pairs working together to increase hunting productivity.
Foraging behavior
The only natural predator of leopard seals is the orca. The seal's canine teeth are up to long. It feeds on a wide variety of creatures. Young leopard seals usually eat mostly krill, squid and fish. Adult seals probably switch from krill to more substantial prey, including king, Adélie, rockhopper, gentoo, emperor and chinstrap penguins, and less frequently, Weddell, crabeater, Ross, and young southern elephant seals. Leopard seals are also known to take fur seal pups.
Around the sub-Antarctic island of South Georgia, the Antarctic fur seal (Arctocephalus gazella) is the main prey. Antarctic krill, southern elephant seal pups and petrels such as the diving petrel and the cape petrel have also been taken as prey. Vagrant leopard seals in New Zealand have been observed preying on chondrichthyans; elephantfish, ghost sharks, and spiny dogfish were recorded as prey items. Additionally, this population of leopard seals and those in Australia were noted to bear wounds from chimaeriforms and stingrays respectively.
When hunting penguins, the leopard seal patrols the waters near the edges of the ice, almost completely submerged, waiting for the birds to enter the ocean. It kills the swimming bird by grabbing the feet, then shaking the penguin vigorously and beating its body against the surface of the water repeatedly until the penguin is dead. Previous reports stating the leopard seal skins its prey before feeding have been found to be incorrect. Lacking the teeth necessary to slice its prey into manageable pieces, it flails its prey from side to side tearing and ripping it into smaller pieces. Krill meanwhile, is eaten by suction, and strained through the seal's teeth, allowing leopard seals to switch to different feeding styles. Such generalization and adaptations may be responsible for the seal's success in the challenging Antarctic ecosystem.
Physiology and research
Leopard seals' heads and front flippers are extremely large in comparison to those of other phocids. Their large front flippers are used to steer through the water column, making them extremely agile while hunting; they use their front flippers much as sea lions (otariids) do, and females are larger than males. They are covered in a thick layer of blubber that helps keep them warm in the cold temperatures of the Antarctic. This layer of blubber also streamlines the body, making them more hydrodynamic, which is essential when hunting small prey items such as penguins, because speed is necessary.
Scientists take blubber thickness, girth, weight, and length measurements of leopard seals to learn about their average weight, health, and population as a whole. These measurements are then used to calculate their energetics, the amount of energy and food it takes for them to survive as a species. Leopard seals also have impressive diving capabilities. Information on their dives is obtained by attaching transmitters to the seals after they are tranquilized on the ice. These devices, called satellite-linked time depth recorders (SLDRs) and time-depth recorders (TDRs), are usually attached to the head of the animal and record depth, bottom time, total dive time, date and time, surface time, haul-out time, pitch and roll, and total number of dives. This information is sent to a satellite where scientists anywhere in the world can collect the data, allowing them to learn more about leopard seals' diet and foraging habits and to better understand their diving physiology.
Leopard seals are primarily shallow divers, but they do dive deeper than 80 meters in search of food. They are able to complete these dives by collapsing their lungs and re-inflating them at the surface, which is made possible by an increase in surfactant, the substance that coats the alveoli in the lungs and allows re-inflation. They also have a reinforced trachea to prevent collapse at the pressures of great depths.
Relationships with humans
Leopard seals are large predators presenting a potential risk to humans. However, attacks on humans are rare. Most human perceptions of leopard seals are shaped by historic encounters between humans and leopard seals that occurred during the early days of Antarctic exploration. Examples of aggressive behaviour, stalking and attacks are rare, but have been documented. A large leopard seal attacked Thomas Orde-Lees (1877–1958), a member of Sir Ernest Shackleton's Imperial Trans-Antarctic Expedition of 1914–1917, when the expedition was camping on the sea ice. The "sea leopard", about long and , chased Orde-Lees on the ice. He was saved only when another member of the expedition, Frank Wild, shot the animal.
In 1985, Canadian-British explorer Gareth Wood was bitten twice on the leg when a leopard seal tried to drag him off the ice and into the sea. His companions managed to save him by repeatedly kicking the animal in the head with the spiked crampons on their boots. On 26 September 2021, near the dive site Spaniard Rock at Simon's Town, South Africa, three spear-fishermen encountered a leopard seal while spearing approximately 400 m offshore. The seal attacked them and, while they were swimming back to shore, disarmed them of their flippers and spearguns and kept harassing the men over the course of half an hour, inflicting multiple bite and puncture wounds.
In 2003, biologist Kirsty Brown of the British Antarctic Survey was killed by a leopard seal while conducting research snorkeling in Antarctica. This was the first recorded human fatality attributed to a leopard seal. Brown was part of a team of four researchers taking part in an underwater survey at South Cove, near the U.K.'s Rothera Research Station. Brown and another researcher, Richard Burt, were snorkeling in the water. Burt was snorkeling at a distance of 15 metres (nearly 50 feet) from Brown when the team heard a scream and saw Brown disappear deeper into the water. She was rescued by her team, but they were unable to resuscitate her. It was later revealed that the seal had held Brown underwater for around six minutes at a depth of up to , drowning her. Furthermore, she suffered a total of 45 separate injuries (bites and scratches), most of which were concentrated around her head and neck.
In a report read at the inquiry into Brown's death, Professor Ian Boyd from the University of St Andrews stated that the seal may have mistaken her for a fur seal, or been frightened by her presence and attacked in defence; Professor Boyd said that leopard seal attacks on humans were extremely rare, but warned that they may potentially become more common due to increased human presence in Antarctica. The coroner recorded the cause of death as “accidental” and “caused by drowning due to a leopard seal attack”.
Leopard seals have shown a predilection for attacking the black, torpedo-shaped pontoons of rigid inflatable boats, leading researchers to equip their craft with special protective guards to prevent them from being punctured. On the other hand, Paul Nicklen, a National Geographic magazine photographer, captured pictures of a leopard seal bringing live, injured, and then dead penguins to him, possibly in an attempt to teach the photographer how to hunt.
Conservation
The only known predators of leopard seals are orcas and sharks. Because of their limited subpolar distribution in the Antarctic, they may be at risk as polar ice caps diminish with global warming. In the wild, leopard seals can live up to 26 years. Leopard seal hunting is regulated by the Antarctic Treaty and the Convention for the Conservation of Antarctic Seals (CCAS).
| Biology and health sciences | Pinnipeds | Animals |
385334 | https://en.wikipedia.org/wiki/List%20of%20particles | List of particles | This is a list of known and hypothesized microscopic particles in particle physics, condensed matter physics and cosmology.
Standard Model elementary particles
Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally.
Fermions
Fermions are one of the two fundamental classes of particles, the other being bosons. Fermion particles are described by Fermi–Dirac statistics and have quantum numbers described by the Pauli exclusion principle. They include the quarks and leptons, as well as any composite particles consisting of an odd number of these, such as all baryons and many atoms and nuclei.
Fermions have half-integer spin; for all known elementary fermions this is . All known fermions except neutrinos are also Dirac fermions; that is, each known fermion has its own distinct antiparticle. It is not known whether the neutrino is a Dirac fermion or a Majorana fermion. Fermions are the basic building blocks of all matter. They are classified according to whether they interact via the strong interaction or not. In the Standard Model, there are 12 types of elementary fermions: six quarks and six leptons.
Quarks
Quarks are the fundamental constituents of hadrons and interact via the strong force. Quarks are the only known carriers of fractional charge, but because they combine in groups of three quarks (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except that they carry the opposite electric charge (for example the up quark carries charge +, while the up antiquark carries charge −), color charge, and baryon number. There are six flavors of quarks; the three positively charged quarks are called "up-type quarks" while the three negatively charged quarks are called "down-type quarks".
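To illustrate how fractional quark charges combine into the integer charges observed for hadrons, here is a minimal sketch using the standard textbook charge assignments (+2/3 e for up-type and −1/3 e for down-type quarks); the values are well-known constants rather than figures taken from this article:

    from fractions import Fraction

    up = Fraction(2, 3)     # up-quark charge in units of the elementary charge e
    down = Fraction(-1, 3)  # down-quark charge in units of e

    proton = up + up + down     # baryon: three quarks (uud)
    pion_plus = up + (-down)    # meson: up quark plus down antiquark

    print(proton)     # 1 -> integer charge
    print(pion_plus)  # 1 -> integer charge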
Leptons
Leptons do not interact via the strong interaction. Their respective antiparticles are the antileptons, which are identical, except that they carry the opposite electric charge and lepton number. The antiparticle of an electron is an antielectron, which is almost always called a "positron" for historical reasons. There are six leptons in total; the three charged leptons are called "electron-like leptons", while the neutral leptons are called "neutrinos". Neutrinos are known to oscillate, so that neutrinos of definite flavor do not have definite mass: Instead, they exist in a superposition of mass eigenstates. The hypothetical heavy right-handed neutrino, called a "sterile neutrino", has been omitted.
Bosons
Bosons are one of the two fundamental classes of particles, the other being fermions. Bosons are characterized by Bose–Einstein statistics and all have integer spins. Bosons may be either elementary, like photons and gluons, or composite, like mesons.
According to the Standard Model, the elementary bosons are:
The Higgs boson is postulated by the electroweak theory primarily to explain the origin of particle masses. In a process known as the "Higgs mechanism", the Higgs boson and the other gauge bosons in the Standard Model acquire mass via spontaneous symmetry breaking of the SU(2) gauge symmetry. The Minimal Supersymmetric Standard Model (MSSM) predicts several Higgs bosons. On 4 July 2012, the discovery of a new particle with a mass between was announced; physicists suspected that it was the Higgs boson. Since then, the particle has been shown to behave, interact, and decay in many of the ways predicted for Higgs particles by the Standard Model, as well as having even parity and zero spin, two fundamental attributes of a Higgs boson. This also means it is the first elementary scalar particle discovered in nature.
Elementary bosons responsible for the four fundamental forces of nature are called force particles (gauge bosons). The strong interaction is mediated by the gluon, the weak interaction is mediated by the W and Z bosons, electromagnetism by the photon, and gravity by the graviton, which is still hypothetical.
Composite particles
Composite particles are bound states of elementary particles.
Hadrons
Hadrons are defined as strongly interacting composite particles. Hadrons are either:
Composite fermions (especially 3 quarks), in which case they are called baryons.
Composite bosons (especially 2 quarks), in which case they are called mesons.
Quark models, first proposed in 1964 independently by Murray Gell-Mann and George Zweig (who called quarks "aces"), describe the known hadrons as composed of valence quarks and/or antiquarks, tightly bound by the color force, which is mediated by gluons. (The interaction between quarks and gluons is described by the theory of quantum chromodynamics.) A "sea" of virtual quark-antiquark pairs is also present in each hadron.
Baryons
Ordinary baryons (composite fermions) contain three valence quarks or three valence antiquarks each.
Nucleons are the fermionic constituents of normal atomic nuclei:
Protons, composed of two up and one down quark (uud)
Neutrons, composed of two down and one up quark (ddu)
Hyperons, such as the Λ, Σ, Ξ, and Ω particles, which contain one or more strange quarks, are short-lived and heavier than nucleons. Although not normally present in atomic nuclei, they can appear in short-lived hypernuclei.
A number of charmed and bottom baryons have also been observed.
Pentaquarks consist of four valence quarks and one valence antiquark.
Other exotic baryons may also exist.
Mesons
Ordinary mesons are made up of a valence quark and a valence antiquark. Because mesons have integer spin (0 or 1) and are not themselves elementary particles, they are classified as "composite" bosons, although being made of elementary fermions. Examples of mesons include the pion, kaon, and the J/ψ. In quantum hadrodynamics, mesons mediate the residual strong force between nucleons.
At one time or another, positive signatures have been reported for all of the following exotic mesons but their existences have yet to be confirmed.
A tetraquark consists of two valence quarks and two valence antiquarks;
A glueball is a bound state of gluons with no valence quarks;
Hybrid mesons consist of one or more valence quark–antiquark pairs and one or more real gluons.
Atomic nuclei
Atomic nuclei typically consist of protons and neutrons, although exotic nuclei may consist of other baryons, such as the hypertriton, which contains a hyperon. The baryons (protons, neutrons, hyperons, etc.) that make up the nucleus are called nucleons. Each type of nucleus is called a "nuclide", and each nuclide is defined by the specific number of each type of nucleon.
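Nuclides are commonly compared by their proton number Z and neutron number N; the definitions that follow (isotopes, isotones, isobars) reduce to simple comparisons of these two counts, as in this minimal illustrative sketch (a hypothetical helper, not drawn from any reference):

    def nuclide_relation(z1, n1, z2, n2):
        """Classify two nuclides by their proton (Z) and neutron (N) counts."""
        if (z1, n1) == (z2, n2):
            return "same nuclide"
        if z1 == z2 and n1 != n2:
            return "isotopes"   # same protons, different neutrons
        if n1 == n2 and z1 != z2:
            return "isotones"   # same neutrons, different protons
        if z1 + n1 == z2 + n2:
            return "isobars"    # same total number of nucleons
        return "unrelated"

    print(nuclide_relation(6, 6, 6, 8))  # carbon-12 vs carbon-14 -> isotopes
    print(nuclide_relation(6, 8, 7, 7))  # carbon-14 vs nitrogen-14 -> isobars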
"Isotopes" are nuclides which have the same number of protons but differing numbers of neutrons.
Conversely, "isotones" are nuclides which have the same number of neutrons but differing numbers of protons.
"Isobars" are nuclides which have the same total number of nucleons but which differ in the number of each type of nucleon. Nuclear reactions can change one nuclide into another.
Atoms
Atoms are the smallest neutral particles into which matter can be divided by chemical reactions. An atom consists of a small, heavy nucleus surrounded by a relatively large, light cloud of electrons. An atomic nucleus consists of 1 or more protons and 0 or more neutrons. Protons and neutrons are, in turn, made of quarks. Each type of atom corresponds to a specific chemical element. To date, 118 elements have been discovered or created.
Exotic atoms may be composed of particles in addition to or in place of protons, neutrons, and electrons, such as hyperons or muons. Examples include pionium () and quarkonium atoms.
Leptonic atoms
Leptonic atoms, named using the suffix -onium, are exotic atoms consisting of the bound state of a lepton and an antilepton. Examples of such atoms include positronium, muonium, and "true muonium". Of these, positronium and muonium have been experimentally observed, while "true muonium" remains only theoretical.
Molecules
Molecules are the smallest particles into which a substance can be divided while maintaining the chemical properties of the substance. Each type of molecule corresponds to a specific chemical substance. A molecule is a composite of two or more atoms combined in a fixed proportion, and is one of the most basic units of matter.
Ions
Ions are charged atoms (monatomic ions) or molecules (polyatomic ions). They include cations which have a net positive charge, and anions which have a net negative charge.
Other categories
Goldstone bosons are massless excitations of a field whose symmetry has been spontaneously broken. The pions are quasi-Goldstone bosons (quasi- because they are not exactly massless) of the broken chiral isospin symmetry of quantum chromodynamics.
Parton is a generic term coined by Feynman for the sub-particles making up a composite particle – at that time a baryon – hence, it originally referred to what are now called "quarks" and "gluons".
The odderon, a particle composed of an odd number of gluons, was detected in 2021.
Quasiparticles
Quasiparticles are effective particles that exist in many particle systems. The field equations of condensed matter physics are remarkably similar to those of high energy particle physics. As a result, much of the theory of particle physics applies to condensed matter physics as well; in particular, there are a selection of field excitations, called quasi-particles, that can be created and explored. These include:
Anyons are a generalization of fermions and bosons that obey braid statistics; they occur in two-dimensional systems such as sheets of graphene.
Excitons are bound states of an electron and a hole.
Magnons are coherent excitations of electron spins in a material.
Phonons are vibrational modes in a crystal lattice.
Plasmons are coherent excitations of a plasma.
Polaritons are mixtures of photons with other quasi-particles.
Polarons are moving, charged (quasi-) particles that are surrounded by ions in a material.
Hypothetical particles
Graviton
The graviton is a hypothetical particle that has been included in some extensions to the Standard Model to mediate the gravitational force. It is in a peculiar category between known and hypothetical particles: As an unobserved particle that is not predicted by, nor required for the Standard Model, it belongs in the table of hypothetical particles. But gravitational force itself is a certainty, and expressing that known force in the framework of a quantum field theory requires a boson to mediate it.
If it exists, the graviton is expected to be massless because the gravitational force has a very long range, and appears to propagate at the speed of light. The graviton must be a spin-2 boson because the source of gravitation is the stress–energy tensor, a second-order tensor (compared with electromagnetism's spin-1 photon, the source of which is the four-current, a first-order tensor). Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way that gravitational interactions do. This result suggests that, if a massless spin-2 particle is discovered, it must be the graviton.
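The coupling argument above can be written schematically in standard field-theory shorthand (coupling constants and normalizations omitted; this is a generic textbook sketch, not a formula taken from the article):

    \mathcal{L}_{\text{EM}} \sim A_\mu J^\mu \qquad \mathcal{L}_{\text{grav}} \sim h_{\mu\nu} T^{\mu\nu}

The photon field A carries one spacetime index and couples to the four-current J, while the graviton field h carries two indices and couples to the stress–energy tensor T, which is why the mediator of gravity must have spin 2.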
Dark matter candidates
Many hypothetical particles have been proposed as candidates for dark matter, such as weakly interacting massive particles (WIMPs), weakly interacting slender particles, and feebly interacting particles (FIPs).
Dark energy candidates
Hypothetical particle candidates to explain dark energy include the chameleon particle and the acceleron.
Auxiliary particles
Virtual particles are mathematical tools used in calculations that exhibit some of the characteristics of an ordinary particle but do not obey the mass-shell relation (written out after the list below). These particles are unphysical and unobservable. They include:
Ghost particles, like Faddeev–Popov ghosts and Pauli–Villars ghosts
Spurions, auxiliary fields in a quantum field theory that can be used to parameterize the breaking of a symmetry
Soft photons, photons with energies too low to be detected in a given experiment.
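For reference, the mass-shell relation mentioned above, which real (on-shell) particles satisfy and virtual particles need not, is the standard relativistic energy–momentum relation:

    E^2 = (pc)^2 + (mc^2)^2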
There are also instantons: field configurations that are local minima of the Yang–Mills action. Instantons are used in nonperturbative calculations of tunneling rates. They have properties similar to particles; specific examples include:
Calorons, finite temperature generalization of instantons.
Merons, a field configuration which is a non-self-dual solution of the Yang–Mills field equation. The instanton is believed to be composed of two merons.
Sphalerons are a field configuration which is a saddle point of the Yang–Mills field equations. Sphalerons are used in nonperturbative calculations of non-tunneling rates.
Renormalons, a possible type of singularity arising when using Borel summation. It is a counterpart of an instanton singularity.
Classification by speed
A bradyon (or tardyon) travels slower than the speed of light in vacuum and has a non-zero, real rest mass.
A luxon travels as fast as light in vacuum and has no rest mass.
A tachyon is a hypothetical particle that travels faster than the speed of light in vacuum; it would paradoxically appear to experience time in reverse and would violate the known laws of causality. A tachyon has an imaginary rest mass (see the relation sketched below).
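A quick way to see why a faster-than-light particle would need an imaginary rest mass is the standard relativistic energy expression (a textbook identity, not a formula taken from this article):

    E = \frac{m c^2}{\sqrt{1 - v^2/c^2}}

For v > c the square root is imaginary, so the energy E can only be real if the rest mass m is itself imaginary.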
| Physical sciences | Subatomic particles: General | Physics |
385416 | https://en.wikipedia.org/wiki/Momordica%20charantia | Momordica charantia | Momordica charantia (commonly called bitter melon, cerassee, goya, bitter apple, bitter gourd, bitter squash, balsam-pear, karavila and many more names listed below) is a tropical and subtropical vine of the family Cucurbitaceae, widely grown in Asia, Africa, and the Caribbean for its edible fruit. Its many varieties differ substantially in the shape and bitterness of the fruit.
Bitter melon originated in Africa, where it was a dry-season staple food of ǃKung hunter-gatherers. Wild or semi-domesticated variants spread across Asia in prehistory, and it was likely fully domesticated in Southeast Asia. It is widely used in the cuisines of East Asia, South Asia, and Southeast Asia.
Description
This herbaceous, tendril-bearing vine grows up to in length. It bears simple, alternate leaves across, with three to seven deeply separated lobes. Each plant bears separate yellow male and female flowers. In the Northern Hemisphere, flowering occurs from June to July, and fruiting from September to November. It is a frost-tender annual in the temperate zone and a perennial in the tropics. It grows best in the USDA zones 9 to 11.
The fruit has a distinctive warty exterior and an oblong shape. It is hollow in cross-section, with a relatively thin layer of flesh surrounding a central seed cavity filled with large, flat seeds and pith. The fruit is most often eaten green, or as it is beginning to turn yellow. At this stage, the fruit's flesh is crunchy and watery in texture, similar to cucumber, chayote, or green bell pepper, but bitter. The skin is tender and edible. Seeds and pith appear white in unripe fruits; they are not intensely bitter and can be removed before cooking.
Some sources claim the flesh (rind) becomes somewhat tougher and more bitter with age, but other sources claim that at least for the common Chinese variety the skin does not change and bitterness decreases with age. The Chinese variety is best harvested light green possibly with a slight yellow tinge or just before. The pith becomes sweet and intensely red; it can be eaten uncooked in this state and is a popular ingredient in some Southeast Asian salads.
When the fruit is fully ripe, it turns orange and soft and splits into segments that curl back to expose seeds covered in bright red pulp.
Varieties
Bitter melons come in a variety of shapes and sizes. The cultivar common in China is long, oblong with bluntly tapering ends and pale green in colour, with a gently undulating, warty surface. The bitter melon more typical of India has a narrower shape with pointed ends and a surface covered with jagged, triangular "teeth" and ridges. It is green to white in colour. Between these two extremes are any number of intermediate forms. Some bear miniature fruit of only in length, which may be served individually as stuffed vegetables. These miniature fruits are popular in Bangladesh, India, Pakistan, Nepal, and other countries in South Asia. The sub-continent variety is most popular in Bangladesh and India.
Pests
M. charantia is one of the main hosts of Bactrocera tau, a fly known to prefer Cucurbitaceae.
Uses
Cooking
Bitter melon is generally consumed cooked in the green or early yellowing stage. The young shoots and leaves of the bitter melon may also be eaten as greens. The fruit is bitter raw and can be soaked in cold water and drained to remove some of those strong flavours.
China
In Chinese cuisine, bitter melon (, ) is used in stir-fries (often with pork and douchi), soups, dim sum, and herbal teas (gohyah tea). It has also been used in place of hops as the bittering ingredient in some beers in China and Okinawa.
India
Bitter gourd is commonly eaten throughout India. In North Indian cuisine, it is often served with yogurt on the side to offset the bitterness, used in curry such as sabzi, or stuffed with spices and then cooked in oil.
In South Indian cuisine, it is used in numerous dishes such as thoran / thuvaran (mixed with grated coconut), pavaikka mezhukkupuratti (stir-fried with spices), theeyal (cooked with roasted coconut), and pachadi (which is considered a medicinal food for diabetics), making it vital in Malayali's diet. Other popular recipes include preparations with curry, deep-frying with peanuts or other ground nuts, and Kakara kaya pulusu () in Telugu, a tamarind-based soup with mini shallots or fried onions and other spices, thickened with chickpea flour. In Karnataka, bitter melon is known as hāgalakāyi () in Kannada; in Tamil Nadu it is known as paagarkaai or pavakai () in Tamil. In these regions, a special preparation called pagarkai pitla, a kind of sour koottu, is common. Also commonly seen is kattu pagarkkai, a curry in which bitter melons are stuffed with onions, cooked lentils, and grated coconut mix, then tied with thread and fried in oil. In the Konkan region of Maharashtra, salt is added to the finely chopped bitter gourd, known as karle () in Marathi, and then it is squeezed, removing its bitter juice to some extent. After frying this with different spices, the less bitter and crispy preparation is served with grated coconut. Bitter melon is known as karate () in Goa where it is used widely in Goan cuisine. In Bengal, where it is known as korola (করলা) or ucche (উচ্ছে) in Bengali, bitter melon is often simply eaten boiled and mashed with salt, mustard oil, sliced thinly and deep fried, added to lentils to make "tetor" dal (bitter lentils), and is a key ingredient of the Shukto, a Bengali vegetable medley that is a mixture of several vegetables like raw banana, drumstick stems, bori, and sweet potato.
In northern India and Nepal, bitter melon, known as tite karela () in Nepali, is prepared as a fresh pickle. For this, the vegetable is cut into cubes or slices, and sautéed with oil and a sprinkle of water. When it is softened and reduced, it is crushed in a mortar with a few cloves of garlic, salt, and a red or green pepper. It is also eaten sautéed to golden brown, stuffed, or as a curry on its own or with potatoes.
Myanmar
In Burmese cuisine, bitter melon is sautéed with garlic, tomatoes, spices, and dried shrimp and is served as an accompaniment to other dishes. Such a dish is available at street stalls and deli counters throughout the country.
Sri Lanka
It is called () in Sri Lanka and it is an ingredient in many different curry dishes (e.g., karawila curry and karawila sambol) which are served mainly with rice in a main meal. Sometimes large grated coconut pieces are added, which is more common in rural areas. Karawila juice is also sometimes served there.
Japan
Bitter melon, known as gōyā () in Okinawan, and in Japanese (although the Okinawan word gōyā is also used), is a significant ingredient in Okinawan cuisine, and is increasingly used in Japanese cuisine beyond that island.
Pakistan
In Pakistan, where it is known as karela () in Urdu-speaking areas, bitter melon is often cooked with onions, red chili powder, turmeric powder, salt, coriander powder, and a pinch of cumin seeds. Another dish in Pakistan calls for whole, unpeeled bitter melon to be boiled and then stuffed with cooked minced beef, served with either hot tandoori bread, naan, chappati, or with khichri (a mixture of lentils and rice).
Indonesia
In Indonesian cuisine, bitter melon, known as pare in Javanese and Indonesian (also paria), is prepared in various dishes, such as gado-gado, and also stir-fried, cooked in coconut milk, or steamed. In Christian areas in Eastern Indonesia it is cooked with pork and chili, the sweetness of the pork balancing against the bitterness of the vegetable.
Vietnam
In Vietnamese cuisine, raw bitter melon slices known as mướp đắng or khổ qua in Vietnamese, eaten with dried meat floss and bitter melon soup with shrimp, are common dishes. Bitter melons stuffed with ground pork are commonly served as a summer soup in the south. It is also used as the main ingredient of stewed bitter melon. This dish is usually cooked for the Tết holiday, where its "bitter" name is taken as a reminder of the bitter living conditions experienced in the past.
Thailand
In Thai cuisine, the Chinese variety of green bitter melon, mara () in Thai, is prepared stuffed with minced pork and garlic, in a clear broth. It is also served sliced and stir-fried with garlic and fish sauce until just tender. Varieties found in Thailand range from large fruit to small fruit. The smallest fruit variety (mara khii nok) is generally not cultivated but is occasionally found in the wild and is considered the most nutritious variety.
Philippines
In the cuisine of the Philippines, bitter melon, known as Ampalaya in Filipino and Paria in Ilokano, may be stir-fried with ground beef and oyster sauce, or with eggs and diced tomato. The dish pinakbet, popular in the Ilocos region of Luzon, consists mainly of bitter melons, eggplant, okra, string beans, tomatoes, lima beans, and other various regional vegetables all stewed together with a little bagoong-based stock.
The name of the fruit is rooted in the bitterness of its taste (Filipino: ampait, meaning bitter). In pre-colonial Ilocandia the name was rendered locally as Amparia, and as Ampalaya in the Filipino language.
Trinidad and Tobago
In Trinidad and Tobago, bitter melons, known as caraille or carilley, are usually sautéed with onion, garlic, and scotch bonnet pepper until almost crisp.
Africa
In Mauritius, bitter melons are known as margose or margoze.
Herbal medicine
Bitter melon has been used in various Asian and African herbal medicine systems. In the traditional medicine of India, different parts of the plant are used.
Research
Momordica charantia does not significantly decrease fasting blood glucose levels or A1c, indicators of blood glucose control, when taken in capsule or tablet form.
Adverse effects
A possible side effect is gastrointestinal discomfort.
Adverse effects in pregnancy
Bitter melon is contraindicated in pregnant women because it can induce bleeding, contractions, and miscarriage.
Bitter melon tea
Bitter melon tea, also known as gohyah (goya) tea, is an herbal tea made from an infusion of dried slices of the bitter melon. It is sold as a medicinal tea, and a culinary vegetable.
Gohyah is not listed in Grieve's herbal database, the MPNA database at the University of Michigan (Medicinal Plants of Native America, see Native American ethnobotany), or in the Phytochemical Database of the USDA – Agricultural Research Service (ARS) – National Plant Germplasm System NGRL.
Subspecies
The plant has one subspecies and four varieties:
Momordica charantia var. abbreviata
Momordica charantia var. charantia
Momordica charantia ssp. macroloba
Momordica charantia L. var. muricata
Momordica charantia var. pavel
M. charantia var. charantia and pavel are the long-fruited varieties, whereas M. charantia var. muricata, macroloba and abbreviata feature smaller fruits.
Gallery
Plant
Dishes and other uses
| Biology and health sciences | Botanical fruits used as culinary vegetables | Plants |
385786 | https://en.wikipedia.org/wiki/Victoria%20%28plant%29 | Victoria (plant) | Victoria or giant waterlily is a genus of aquatic herbs in the plant family Nymphaeaceae. Its leaves have a remarkable size: Victoria boliviana produces leaves up to in width. The genus name was given in honour of Queen Victoria of the United Kingdom.
Description
Vegetative characteristics
Victoria species are rhizomatous, aquatic, short-lived, perennial herbs with tuberous rhizomes bearing contractile adventitious roots. The floating leaves are peltate and orbicular. The margin of the lamina is raised. The lamina possesses stomatodes (i.e. microscopic perforations). The abaxial leaf surface possesses prominent, reticulate venation. In Victoria amazonica the leaves are glabrous, with long, hard spines, and the underside is red. In Victoria cruziana the leaves are fuzzy with soft spines and the underside is purple. The shape of the rims is also different.
Generative characteristics
The up to 25 cm wide, nocturnal, thermogenic, solitary, actinomorphic, chasmogamous, protogynous flowers have prickly pedicels with 4 primary and 8 secondary air canals. The flowers have four prickly, petaloid, 12 cm long, and 7–8 cm wide sepals. The 50–100 petals gradually transition towards the shape of the stamens; however, there is an abrupt change between the innermost petals and the outermost staminodia. The androecium consists of 150–200 stamens. The gynoecium consists of 30–44 fused carpels. The 0–15 cm wide, spiny, irregularly dehiscent fruit bears arillate, glabrous, smooth or granular seeds. Proliferating pseudanthia are absent.
Cytology
The ploidy level is 2x and the chromosome count ranges from 2n = 20 to 2n = 24.
Taxonomy
Victoria was published by Robert Hermann Schomburgk in September 1837. The type species is Victoria regina. The genus has two synonyms, both published within the same year with the same name: Victoria published by John Lindley in October 1837 and Victoria published by John Edward Gray in December 1837. There is, however, disagreement over the correct taxon authority. Victoria is seen as correct by several sources, but Victoria is also widely regarded as correct, despite being published a month later.
Species
Evolutionary relationships
Together with the genus Euryale, Victoria may be nested within the genus Nymphaea, which would render Nymphaea paraphyletic in its current circumscription.
Ecology
Habitat
It occurs in lakes and streams.
Pollination
Victoria flowers are pollinated by Cyclocephala beetles.
Use
Horticulture
Victoria is a popular ornamental plant.
Food
The seeds, petioles, and rhizomes are used as food.
Other uses
Root extracts are used as black dye.
| Biology and health sciences | Nymphaeales | Plants |
5810488 | https://en.wikipedia.org/wiki/CW%20Leonis | CW Leonis | CW Leonis or IRC +10216 is a variable carbon star that is embedded in a thick dust envelope. It was first discovered in 1969 by a group of astronomers led by Eric Becklin, based upon infrared observations made with the 62-inch Caltech Infrared Telescope at Mount Wilson Observatory. Its energy is emitted mostly at infrared wavelengths. At a wavelength of 5 μm, it was found to have the highest flux of any object outside the Solar System.
Properties
CW Leonis is believed to be in a late stage of its life, blowing off its own sooty atmosphere to form a white dwarf. Based upon isotope ratios of magnesium, the initial mass of this star has been constrained to lie between 3–5 solar masses. The mass of the star's core, and the final mass of the star once it becomes a white dwarf, is about 0.7–0.9 solar masses. Its bolometric luminosity varies over the course of a 649-day pulsation cycle, ranging from a minimum of about 6,250 times the Sun's luminosity up to a peak of around 15,800 times. The overall output of the star is best represented by a luminosity of . The brightness of the star varies by about two magnitudes over its pulsation period, and may have been increasing over a period of years. One study finds an increase in the mean brightness of about a magnitude between 2004 and 2014. Many studies of this star are done at infrared wavelengths because of its very red colour; published visual magnitudes are uncommon and often dramatically different. The Guide Star Catalog from 2006 gives an apparent visual magnitude of 19.23. The ASAS-SN variable star catalog based on observations from 2014 to 2018 reports a mean magnitude of 17.56 and an amplitude of 0.68 magnitudes. An even later study gives a mean magnitude of 14.5 and an amplitude of 2.0 magnitudes.
The carbon-rich gaseous envelope surrounding this star is at least 69,000 years old and the star is losing about solar masses per year. The extended envelope contains at least 1.4 solar masses of material. Speckle observations from 1999 show a complex structure to this dust envelope, including partial arcs and unfinished shells. This clumpiness may be caused by a magnetic cycle in the star that is comparable to the solar cycle in the Sun and results in periodic increases in mass loss.
Various chemical elements and about 50 molecules have been detected in the outflows from CW Leonis, among others nitrogen, oxygen and water, silicon, and iron. One theory was that the star was once surrounded by comets that melted once the star started expanding, but water is now thought to form naturally in the atmospheres of all carbon stars.
Distance
If the distance to this star is assumed to be at the lower end of the estimate range, 120 pc, then the astrosphere surrounding the star spans a radius of about 84,000 AU. The star and its surrounding envelope are advancing at a velocity of more than 91 km/s through the surrounding interstellar medium. It is moving with a space velocity of [U, V, W] = [, , ] km s−1.
Companion
Several papers have suggested that CW Leonis has a close binary companion. ALMA and astrometric measurements may show orbital motion. The astrometric measurements, combined with a model including the companion, provide a parallax measurement showing that CW Leonis is the closest carbon star to the Earth.
| Physical sciences | Notable stars | Astronomy |
5813824 | https://en.wikipedia.org/wiki/Complex%20volcano | Complex volcano | A complex volcano, also called a compound volcano or a volcanic complex, is a mixed landform consisting of related volcanic centers and their associated lava flows and pyroclastic rock. They may form due to changes in eruptive habit or in the location of the principal vent area on a particular volcano. Stratovolcanoes can also form a large caldera that gets filled in by a lava dome, or else multiple small cinder cones, lava domes and craters may develop on the caldera's rim.
Although a comparatively unusual type of volcano, they are widespread in the world and in geologic history. Metamorphosed ash flow tuffs are widespread in the Precambrian rocks of northern New Mexico, which indicates that caldera complexes have been important for much of Earth's history. Yellowstone National Park is on three partly covered caldera complexes. The Long Valley Caldera in eastern California is also a complex volcano; the San Juan Mountains in southwestern Colorado are formed on a group of Neogene-age caldera complexes, and most of the Mesozoic and Cenozoic rocks of Nevada, Idaho, and eastern California are also caldera complexes and their erupted ash flow tuffs. The Bennett Lake Caldera in British Columbia and the Yukon Territory is another example of a Cenozoic (Eocene) caldera complex.
Examples
Akita-Yake-Yama (Honshū, Japan)
Asacha (Kamchatka Peninsula, Russia)
Asama (Honshū, Japan)
Kusatsu-Shirane (Kusatsu, Gunma, Japan)
Banahaw (Luzon, Philippines)
Bennett Lake Caldera (British Columbia/Yukon, Canada)
Cumbre Vieja (Canary Islands, Spain)
Mount Edziza volcanic complex (British Columbia, Canada)
Galeras (Colombia, South America)
Grozny Group (Kuril Islands, Russia)
Hakone (Japan)
Homa Mountain (Kenya)
Irazú Volcano (Costa Rica)
Ischia (Italy)
Kelimutu (Flores, Indonesia)
Las Pilas (Nicaragua)
Long Valley Caldera (California, United States)
Mount Marapi (Indonesia)
Mount Mazama (Oregon, United States)
McDonald Islands (Indian Ocean, Australia)
Mount Meager massif (British Columbia, Canada)
Morne Trois Pitons (Dominica)
Moutohora Island (New Zealand)
Pacaya (Guatemala)
Puyehue-Cordón Caulle (Chile)
Rincón de la Vieja Volcano (Costa Rica)
Silverthrone Caldera (British Columbia, Canada)
St. Andrew Strait (Admiralty Islands, Papua New Guinea)
Taal Volcano, (Batangas, Philippines)
Mount Talinis, (Negros Oriental, Philippines)
Taupō Volcano, (Taupō, New Zealand)
Teide, Canary Islands
Three Sisters (Oregon) (Oregon, United States)
Te Tatua-a-Riukiuta (Auckland, New Zealand)
Tongariro, (New Zealand)
Vesuvius, (Italy)
Valles Caldera (New Mexico, United States)
Yellowstone Caldera (Wyoming, United States)
| Physical sciences | Volcanology | Earth science |
7580913 | https://en.wikipedia.org/wiki/Basal%20sliding | Basal sliding | Basal sliding is the act of a glacier sliding over the bed due to meltwater under the ice acting as a lubricant. This movement very much depends on the temperature of the area, the slope of the glacier, the bed roughness, the amount of meltwater from the glacier, and the glacier's size.
The sliding motion of these glaciers is jerky rather than smooth, and seismic events, especially at the base of the glacier, can trigger movement. Most movement is found to be caused by pressurized meltwater or very small water-saturated sediments underneath the glacier. This gives the glacier a much smoother surface on which to move, as opposed to a harsh surface that tends to slow the sliding. Although meltwater is the most common source of basal sliding, water-saturated sediment has been shown to account for up to 90% of the basal movement these glaciers make.
Most basal sliding is seen in thin glaciers resting on steep slopes, and it most commonly happens during the summer, when surface meltwater runoff peaks. Factors that can slow or stop basal sliding relate to the glacier's composition and to the surrounding environment. Glacier movement is resisted by debris, whether inside or under the glacier, and this can reduce the amount of movement considerably, especially where the slope is low. The traction caused by this sediment can halt a steadily moving glacier if it interferes with the underlying sediment or water that was helping to carry it.
The Great Lakes were created due to basal erosion as a result of sliding over relatively weak bedrock.
| Physical sciences | Glaciology | Earth science |
7587207 | https://en.wikipedia.org/wiki/Absolute%20configuration | Absolute configuration | In chemistry, absolute configuration refers to the spatial arrangement of atoms within a molecular entity (or group) that is chiral, and its resultant stereochemical description. Absolute configuration is typically relevant in organic molecules where carbon is bonded to four different substituents. This type of construction creates two possible enantiomers. Absolute configuration uses a set of rules to describe the relative positions of each bond around the chiral center atom. The most common labeling method uses the descriptors R or S and is based on the Cahn–Ingold–Prelog priority rules. R and S refer to rectus and sinister, Latin for right and left, respectively.
The two enantiomers of a chiral molecule differ chemically only in their interactions with other chiral substances and are identical in most of their physical properties, which can make distinguishing them challenging. Absolute configurations for a chiral molecule (in pure form) are most often obtained by X-ray crystallography, although with some important limitations. All enantiomerically pure chiral molecules crystallise in one of the 65 Sohncke groups (chiral space groups). Alternative techniques include optical rotatory dispersion, vibrational circular dichroism, ultraviolet-visible spectroscopy, the use of chiral shift reagents in proton NMR and Coulomb explosion imaging.
History
Until 1951, it was not possible to obtain the absolute configuration of chiral compounds. It was at some time arbitrarily decided that (+)-glyceraldehyde was the D-enantiomer. The configuration of other chiral compounds was then related to that of (+)-glyceraldehyde by sequences of chemical reactions. For example, oxidation of (+)-glyceraldehyde (1) with mercury oxide gives (−)-glyceric acid (2), a reaction that does not alter the stereocenter. Thus the absolute configuration of (−)-glyceric acid must be the same as that of (+)-glyceraldehyde. Oxidation of (+)-isoserine (3) by nitrous acid gives (−)-glyceric acid, establishing that (+)-isoserine also has the same absolute configuration. (+)-Isoserine can be converted by a two-stage process of bromination to (−)-3-bromo-2-hydroxy-propanoic acid (4) and zinc reduction to give (−)-lactic acid (5), therefore (−)-lactic acid also has the same absolute configuration. If a reaction gave the enantiomer of a known configuration, as indicated by the opposite sign of optical rotation, it would indicate that the absolute configuration is inverted.
In 1951, Johannes Martin Bijvoet for the first time used in X-ray crystallography the effect of anomalous dispersion, which is now referred to as resonant scattering, to determine absolute configuration. The compound investigated was (+)-sodium rubidium tartrate and from its configuration (R,R) it was deduced that the original guess for (+)-glyceraldehyde was correct.
Despite the tremendous and unique impact on access to molecular structures, X-ray crystallography poses some challenges. The process of crystallization of the target molecules is time- and resource-intensive, and cannot be applied to relevant systems of interest such as many biomolecules (some proteins are an exception) and in situ catalysts. Another important limitation is that the molecule must contain "heavy" atoms (for example, bromine) to enhance the scattering. Furthermore, crucial distortions of the signal arise from the influence of the nearest neighbors in any crystal structure and of solvents used during the crystallization process.
More recently, novel techniques have been introduced to directly investigate the absolute configuration of single molecules in the gas phase, usually in combination with ab initio quantum mechanical calculations, thereby overcoming some of the limitations of X-ray crystallography.
Conventions
By absolute configuration: R- and S-
The R/S system is an important nomenclature system for denoting enantiomers. This approach labels each chiral center R or S according to a system by which its substituents are each assigned a priority, according to the Cahn–Ingold–Prelog priority rules (CIP), based on atomic number. When the center is oriented so that the lowest-priority substituent of the four is pointed away from the viewer, the viewer will then see two possibilities: if the priority of the remaining three substituents decreases in clockwise direction, it is labeled R (for right); if it decreases in counterclockwise direction, it is S (for left).
(R) or (S) is written in italics and parentheses. If there are multiple chiral carbons, e.g. (1R,4S), a number specifies the location of the carbon preceding each configuration.
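As an illustrative aside (not from the original article), CIP descriptors can be assigned programmatically. The sketch below uses the open-source RDKit toolkit; the SMILES string and variable names are hypothetical examples chosen for illustration.

```python
# Minimal sketch: assigning R/S (CIP) descriptors with RDKit.
# The SMILES below encodes one enantiomer of alanine; which descriptor
# is printed depends on the stereochemistry written into the string.
from rdkit import Chem

mol = Chem.MolFromSmiles("N[C@@H](C)C(=O)O")  # hypothetical example input

# Perceive stereochemistry and label chiral centres.
Chem.AssignStereochemistry(mol, cleanIt=True, force=True)

# Returns a list of (atom index, 'R'/'S'/'?') tuples.
for atom_idx, descriptor in Chem.FindMolChiralCenters(mol, includeUnassigned=True):
    print(f"atom {atom_idx}: {descriptor}")
```

Such toolkits implement the same CIP priority rules described above, so the printed descriptor follows directly from the substituent priorities around the stereocentre.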
The R/S system has no fixed relation to the D/L system. For example, the side chain of serine contains a hydroxyl group, −OH. If a thiol group, −SH, were swapped in for it, the D/L labeling would, by its definition, not be affected by the substitution. But this substitution would invert the molecule's R/S labeling, because the CIP priority of CH2OH is lower than that for CO2H but the CIP priority of CH2SH is higher than that for CO2H. For this reason, the D/L system remains in common use in certain areas of biochemistry, such as amino acid and carbohydrate chemistry, because it is convenient to have the same chiral label for all of the commonly occurring structures of a given type in higher organisms. In the D/L system, nearly all naturally occurring amino acids are L, while naturally occurring carbohydrates are nearly all D. All proteinogenic amino acids are S, except for cysteine, which is R.
By optical rotation: (+)- and (−)- or d- and l-
An enantiomer can be named by the direction in which it rotates the plane of polarized light. Clockwise rotation of the light traveling toward the viewer labels the (+) enantiomer; its mirror image is labeled (−). The (+) and (−) isomers have also been termed d- and l- (for dextrorotatory and levorotatory); however, naming with d- and l- is easy to confuse with D- and L- labeling and is therefore discouraged by IUPAC.
By relative configuration: D- and L-
An optical isomer can be named by the spatial configuration of its atoms. The D/L system (named after Latin dexter and laevus, right and left), not to be confused with the d- and l-system above, does this by relating the molecule to glyceraldehyde. Glyceraldehyde is chiral itself and its two isomers are labeled D and L (typically typeset in small caps in published work). Certain chemical manipulations can be performed on glyceraldehyde without affecting its configuration, and its historical use for this purpose (possibly combined with its convenience as one of the smallest commonly used chiral molecules) has resulted in its use for nomenclature. In this system, compounds are named by analogy to glyceraldehyde, which, in general, produces unambiguous designations, but is easiest to see in the small biomolecules similar to glyceraldehyde. One example is the chiral amino acid alanine, which has two optical isomers, and they are labeled according to which isomer of glyceraldehyde they come from. On the other hand, glycine, the amino acid derived from glyceraldehyde, has no optical activity, as it is not chiral (it is achiral).
The D/L labeling is unrelated to (+)/(−); it does not indicate which enantiomer is dextrorotatory and which is levorotatory. Rather, it indicates the compound's stereochemistry relative to that of the dextrorotatory or levorotatory enantiomer of glyceraldehyde. The dextrorotatory isomer of glyceraldehyde is, in fact, the D isomer. Nine of the nineteen L-amino acids commonly found in proteins are dextrorotatory (at a wavelength of 589 nm), and D-fructose is also referred to as levulose because it is levorotatory. A rule of thumb for determining the D/L isomeric form of an amino acid is the "CORN" rule. The groups
COOH, R, NH2 and H (where R is the side-chain)
are arranged around the chiral center carbon atom. With the hydrogen atom pointing away from the viewer, if the arrangement of the CO→R→N groups around the carbon atom as center is counter-clockwise, then it is the L form. If the arrangement is clockwise, it is the D form. As usual, if the molecule itself is oriented differently, for example with H towards the viewer, the pattern may be reversed. The L form is the usual one found in natural proteins. For most amino acids, the L form corresponds to an S absolute stereochemistry, but it is R instead for certain side-chains.
| Physical sciences | Stereochemistry | Chemistry |
16652317 | https://en.wikipedia.org/wiki/Linear%20motion | Linear motion | Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. Linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position x, which varies with t (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motion: rectilinear motion and curvilinear motion. Since linear motion is motion in a single dimension, the distance traveled by an object in a particular direction is the same as its displacement. The SI unit of displacement is the metre. If x₁ is the initial position of an object and x₂ is the final position, then mathematically the displacement is given by:
Δx = x₂ − x₁
The equivalent of displacement in rotational motion is the angular displacement measured in radians.
The magnitude of the displacement of an object can never exceed the distance travelled, because the displacement is the shortest distance between the initial and final positions. Consider a person travelling to work daily. The overall displacement when they return home is zero, since the person ends up back where they started, but the distance travelled is clearly not zero.
Velocity
Velocity refers to a displacement in one direction with respect to an interval of time. It is defined as the rate of change of displacement with respect to time. Velocity is a vector quantity, representing a direction and a magnitude of movement. The magnitude of a velocity is called speed. The SI unit of speed is m/s, that is, the metre per second.
Average velocity
The average velocity of a moving body is its total displacement divided by the total time needed to travel from the initial point to the final point. It is an estimate of the velocity over the distance travelled. Mathematically, it is given by:
v_avg = Δx/Δt = (x₂ − x₁)/(t₂ − t₁)
where:
t₁ is the time at which the object was at position x₁, and
t₂ is the time at which the object was at position x₂.
The magnitude of the average velocity is called an average speed.
Instantaneous velocity
In contrast to an average velocity, which refers to the overall motion in a finite time interval, the instantaneous velocity of an object describes the state of motion at a specific point in time. It is defined by letting the length of the time interval tend to zero; that is, the velocity is the time derivative of the displacement as a function of time:
v = lim(Δt→0) Δx/Δt = dx/dt
The magnitude of the instantaneous velocity is called the instantaneous speed. The instantaneous velocity equation comes from finding the limit of the average velocity as Δt approaches 0. The instantaneous velocity is the derivative of the position function with respect to time. From the instantaneous velocity, the instantaneous speed can be obtained by taking the magnitude of the instantaneous velocity.
Acceleration
Acceleration is defined as the rate of change of velocity with respect to time. Acceleration is the second derivative of displacement, i.e. acceleration can be found by differentiating position with respect to time twice or differentiating velocity with respect to time once. The SI unit of acceleration is m/s², or metre per second squared.
If a_avg is the average acceleration and Δv is the change in velocity over the time interval Δt, then mathematically,
a_avg = Δv/Δt
The instantaneous acceleration is the limit, as Δt approaches zero, of the ratio of Δv and Δt, i.e.,
a = lim(Δt→0) Δv/Δt = dv/dt
Jerk
The rate of change of acceleration, the third derivative of displacement, is known as jerk. The SI unit of jerk is m/s³. In the UK, jerk is also referred to as jolt.
Jounce
The rate of change of jerk, the fourth derivative of displacement, is known as jounce. The SI unit of jounce is m/s⁴, which can be pronounced as metres per quartic second.
Formulation
In the case of constant acceleration, the four physical quantities acceleration, velocity, time and displacement can be related by using the equations of motion:
v = u + at
s = ut + ½at²
v² = u² + 2as
s = ½(u + v)t
Here,
u is the initial velocity
v is the final velocity
a is the acceleration
s is the displacement
t is the time
These relationships can be demonstrated graphically. The gradient of a line on a displacement time graph represents the velocity. The gradient of the velocity time graph gives the acceleration while the area under the velocity time graph gives the displacement. The area under a graph of acceleration versus time is equal to the change in velocity.
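As a brief illustrative sketch (not part of the original article), the constant-acceleration relations above can be checked numerically; the values used below are arbitrary examples.

```python
# Minimal sketch: checking the constant-acceleration equations of motion
# for arbitrary example values of u (initial velocity), a (acceleration), t (time).
u = 3.0   # initial velocity, m/s
a = 2.0   # constant acceleration, m/s^2
t = 4.0   # elapsed time, s

v = u + a * t                 # final velocity
s = u * t + 0.5 * a * t**2    # displacement

# v^2 = u^2 + 2 a s should hold for constant acceleration.
assert abs(v**2 - (u**2 + 2 * a * s)) < 1e-9
# s = (u + v) t / 2 should hold as well.
assert abs(s - 0.5 * (u + v) * t) < 1e-9

print(f"final velocity v = {v} m/s, displacement s = {s} m")
```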
Comparison to circular motion
The following table refers to rotation of a rigid body about a fixed axis: s is arc length, r is the distance from the axis to any point, and a_t is the tangential acceleration, which is the component of the acceleration that is parallel to the motion. In contrast, the centripetal acceleration, a_c = v²/r = ω²r, is perpendicular to the motion. The component of the force parallel to the motion, or equivalently, perpendicular to the line connecting the point of application to the axis, is the tangential component of the force. The sum is taken over all particles and/or points of application.
The following table shows the analogy in derived SI units:
| Physical sciences | Classical mechanics | Physics |
2336870 | https://en.wikipedia.org/wiki/Bo%C3%B6tes%20Void | Boötes Void | The Boötes Void ( ) (colloquially referred to as the Great Nothing) is an approximately spherical region of space found in the vicinity of the constellation Boötes, containing only 60 galaxies instead of the 2,000 that should be expected from an area this large, hence its name. With a radius of 62 megaparsecs (nearly 330 million light-years across), it is one of the largest voids in the visible universe, and is referred to as a supervoid.
It was discovered in 1981 by Robert Kirshner as part of a survey of galactic redshifts. Its centre is located 700 million light-years from Earth, and at approximately right ascension and declination .
The Hercules Supercluster forms part of the near edge of the void.
Origins
There are no major apparent inconsistencies between the existence of the Boötes Void and the Lambda-CDM model of cosmological evolution.
The Boötes Void is theorized to have formed from the merger of smaller voids, much like the way in which soap bubbles coalesce to form larger bubbles. This would account for the small number of galaxies that populate a roughly tube-shaped region running through the middle of the void.
Confusion with Barnard 68
The Boötes Void has been often associated with images of Barnard 68, a dark nebula that does not allow light to pass through; however, the images of Barnard 68 are much darker than those observed of the Boötes Void, as the nebula is much closer and there are fewer stars in front of it, as well as its being a physical mass that blocks light passing through.
| Physical sciences | Notable patches of universe | Astronomy |
11295734 | https://en.wikipedia.org/wiki/Cloud%20storage | Cloud storage | Cloud storage is a model of computer data storage in which data, said to be on "the cloud", is stored remotely in logical pools and is accessible to users over a network, typically the Internet. The physical storage spans multiple servers (sometimes in multiple locations), and the physical environment is typically owned and managed by a cloud computing provider. These cloud storage providers are responsible for keeping the data available and accessible, and the physical environment secured, protected, and running. People and organizations buy or lease storage capacity from the providers to store user, organization, or application data.
Cloud storage services may be accessed through a colocated cloud computing service, a web service application programming interface (API) or by applications that use the API, such as cloud desktop storage, a cloud storage gateway or Web-based content management systems.
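As an illustrative aside (not from the original article), object storage APIs of this kind are typically accessed with a small amount of client code. The sketch below uses the boto3 library against an S3-compatible service; the bucket and file names are hypothetical placeholders.

```python
# Minimal sketch: storing and retrieving an object through an S3-style API.
# Assumes credentials are already configured (e.g. via environment variables)
# and that the named bucket exists; both names here are hypothetical.
import boto3

s3 = boto3.client("s3")

# Upload a local file as an object in the bucket.
s3.upload_file("report.pdf", "example-bucket", "backups/report.pdf")

# Download the object back to a new local file.
s3.download_file("example-bucket", "backups/report.pdf", "report-copy.pdf")
```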
History
Cloud computing is believed to have been invented by J. C. R. Licklider in the 1960s with his work on ARPANET to connect people and data from anywhere at any time.
In 1983, CompuServe offered its consumer users a small amount of disk space that could be used to store any files they chose to upload.
In 1994, AT&T launched PersonaLink Services, an online platform for personal and business communication and entrepreneurship. The storage was one of the first to be all web-based, and referenced in their commercials as, "you can think of our electronic meeting place as the cloud." Amazon Web Services introduced their cloud storage service Amazon S3 in 2006, and has gained widespread recognition and adoption as the storage supplier to popular services such as SmugMug, Dropbox, and Pinterest. In 2005, Box announced an online file sharing and personal cloud content management service for businesses.
Architecture
Cloud storage is based on highly virtualized infrastructure and is like broader cloud computing in terms of interfaces, near-instant elasticity and scalability, multi-tenancy, and metered resources. Cloud storage services can be used from an off-premises service (Amazon S3) or deployed on-premises (ViON Capacity Services).
There are three types of cloud storage: a hosted object storage service, file storage, and block storage. Each of these cloud storage types offers its own unique advantages.
Examples of object storage services that can be hosted and deployed with cloud storage characteristics include Amazon S3, Oracle Cloud Storage and Microsoft Azure Storage, object storage software like Openstack Swift, object storage systems like EMC Atmos, EMC ECS and Hitachi Content Platform, and distributed storage research projects like OceanStore and VISION Cloud.
Examples of file storage services include Amazon Elastic File System (EFS) and Qumulo Core, used for applications that need access to shared files and require a file system. This storage is often supported with a Network Attached Storage (NAS) server, used for large content repositories, development environments, media stores, or user home directories.
A block storage service like Amazon Elastic Block Store (EBS) is used for other enterprise applications like databases, which often require dedicated, low latency storage for each host. This is comparable in certain respects to direct attached storage (DAS) or a storage area network (SAN).
Cloud storage is:
Made up of many distributed resources, but still acts as one, either in a federated or a cooperative storage cloud architecture
Highly fault tolerant through redundancy and distribution of data
Highly durable through the creation of versioned copies
Typically eventually consistent with regard to data replicas
Advantages
Companies need only pay for the storage they actually use, typically an average of consumption during a month, quarter, or year. This does not mean that cloud storage is less expensive, only that it incurs operating expenses rather than capital expenses.
Businesses using cloud storage can cut their energy consumption by up to 70%, making them a greener business.
Organizations can choose between off-premises and on-premises cloud storage options, or a mixture of the two options, depending on relevant decision criteria that are complementary to the initial direct cost savings potential; for instance, continuity of operations (COOP), disaster recovery (DR), security (PII, HIPAA, SARBOX, IA/CND), and records retention laws, regulations, and policies.
Storage availability and data protection is intrinsic to object storage architecture, so depending on the application, the additional technology, effort and cost to add availability and protection can be eliminated.
Storage maintenance tasks, such as purchasing additional storage capacity, are offloaded to the responsibility of a service provider.
Cloud storage provides users with immediate access to a broad range of resources and applications hosted in the infrastructure of another organization via a web service interface.
Cloud storage can be used for copying virtual machine images from the cloud to on-premises locations or to import a virtual machine image from an on-premises location to the cloud image library. In addition, cloud storage can be used to move virtual machine images between user accounts or between data centers.
Cloud storage can be used as natural disaster proof backup, as normally there are 2 or 3 different backup servers located in different places around the globe.
Cloud storage can be mapped as a local drive with the WebDAV protocol. It can function as a central file server for organizations with multiple office locations.
Potential concerns
Data security
Outsourcing data storage increases the attack surface area.
When data has been distributed it is stored at more locations, increasing the risk of unauthorized physical access to the data. For example, in cloud-based architecture, data is replicated and moved frequently, so the risk of unauthorized data recovery increases dramatically, such as in the case of disposal of old equipment, reuse of drives, or reallocation of storage space. The manner in which data is replicated depends on the service level a customer chooses and on the service provided. When encryption is in place it can ensure confidentiality. Crypto-shredding can be used when disposing of data (on a disk).
The number of people with access to the data who could be compromised (e.g., bribed or coerced) increases dramatically. A single company might have a small team of administrators, network engineers, and technicians, but a cloud storage company will have many customers and thousands of servers, and therefore a much larger team of technical staff with physical and electronic access to almost all of the data at the entire facility, or perhaps the entire company. Decryption keys that are kept by the service user, as opposed to the service provider, limit access to data by service provider employees. When data in the cloud is shared with multiple users, a large number of keys has to be distributed to those users via secure channels for decryption, and the keys also have to be securely stored and managed by the users on their devices. Storing these keys requires rather expensive secure storage. To overcome that, a key-aggregate cryptosystem can be used.
It increases the number of networks over which the data travels. Instead of just a local area network (LAN) or storage area network (SAN), data stored on a cloud requires a WAN (wide area network) to connect them both.
By sharing storage and networks with many other users/customers it is possible for other customers to access your data, sometimes because of erroneous actions, faulty equipment, or a bug, and sometimes because of criminal intent. This risk applies to all types of storage and not only cloud storage. The risk of having data read during transmission can be mitigated through encryption technology. Encryption in transit protects data as it is being transmitted to and from the cloud service. Encryption at rest protects data that is stored at the service provider. Encrypting data in an on-premises cloud service on-ramp system can provide both kinds of encryption protection.
There are several options available to avoid security issues. One option is to use a private cloud instead of a public cloud. Another option is to ingest data in an encrypted format where the key is held within the on-premise infrastructure. To this end, access is often by use of on-premise cloud storage gateways that have options to encrypt the data prior of transfer.
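As a hedged illustration of the last option (not from the original article), data can be encrypted on-premises before it is handed to a storage provider. The sketch below uses the Python cryptography library's Fernet recipe; the file names are hypothetical.

```python
# Minimal sketch: encrypting data locally before uploading it to cloud storage,
# so the provider only ever sees ciphertext. The key stays on-premises.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this secret, stored on-premises
fernet = Fernet(key)

with open("customer-records.csv", "rb") as f:       # hypothetical local file
    ciphertext = fernet.encrypt(f.read())

with open("customer-records.csv.enc", "wb") as f:   # this file goes to the cloud
    f.write(ciphertext)

# Later, after downloading the object back, the same key decrypts it.
plaintext = fernet.decrypt(ciphertext)
```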
Longevity
Companies are not permanent and the services and products they provide can change. Outsourcing data storage to another company needs careful investigation and nothing is ever certain. Contracts set in stone can be worthless when a company ceases to exist or its circumstances change. Companies can:
Go bankrupt.
Expand and change their focus.
Be purchased by other larger companies.
Be purchased by a company headquartered in, or itself move to, a country that negates compliance with export restrictions and thus necessitates a move.
Suffer an irrecoverable disaster.
Accessibility
Performance for outsourced storage is likely to be lower than that of local storage, depending on how much a customer is willing to spend for WAN bandwidth.
Reliability and availability depend on wide area network availability and on the level of precautions taken by the service provider. Reliability also depends on the hardware as well as the various algorithms used.
Limitations of Service Level Agreements
Typically, cloud storage Service Level Agreements (SLAs) do not cover all forms of service interruption. Common exclusions include planned maintenance, downtime resulting from external factors such as network issues, human errors such as misconfiguration, natural disasters, force majeure events, and security breaches. Customers typically bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. They should also be aware of how deviations from the SLA are calculated, since the percentages and conditions can differ across services from the same provider, and some services lack an SLA altogether; these requirements can place a considerable burden on customers. In cases of service interruption due to hardware failure at the cloud provider, the provider typically does not offer monetary compensation; instead, eligible users may receive credits as outlined in the corresponding SLA.
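As an illustrative aside (not from the original article), the practical meaning of an SLA availability percentage can be made concrete by converting it into the downtime it permits; the figures below are arbitrary examples.

```python
# Minimal sketch: converting an SLA availability percentage into the maximum
# downtime it permits over a 30-day month. The percentages are arbitrary examples.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

for sla in (99.0, 99.9, 99.99):
    allowed_downtime = MINUTES_PER_MONTH * (1 - sla / 100)
    print(f"{sla}% availability allows up to {allowed_downtime:.1f} minutes of downtime per month")
```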
Other concerns
Security of stored data and data in transit may be a concern when storing sensitive data at a cloud storage provider
Users with specific records-keeping requirements, such as public agencies that must retain electronic records according to statute, may encounter complications with using cloud computing and storage. For instance, the U.S. Department of Defense designated the Defense Information Systems Agency (DISA) to maintain a list of records management products that meet all of the records retention, personally identifiable information (PII), and security (Information Assurance; IA) requirements
Cloud storage is a rich resource for both hackers and national security agencies. Because the cloud holds data from many different users and organizations, hackers see it as a very valuable target.
Piracy and copyright infringement may be enabled by sites that permit filesharing. For example, the CodexCloud ebook storage site has faced litigation from the owners of the intellectual property uploaded and shared there, as have the Grooveshark and YouTube sites it has been compared to.
The legal aspect, from a regulatory compliance standpoint, is of concern when storing files domestically and especially internationally.
The resources used to produce large data centers, especially those needed to power them, are causing nations to drastically increase their energy production. This can lead to further climate-damaging implications.
Hybrid cloud storage
Hybrid cloud storage is a term for a storage infrastructure that uses a combination of on-premises storage resources with cloud storage. The on-premises storage is usually managed by the organization, while the public cloud storage provider is responsible for the management and security of the data stored in the cloud. Hybrid cloud storage can be implemented by an on-premises cloud storage gateway that presents a file system or object storage interface which the users can access in the same way they would access a local storage system. The cloud storage gateway transparently transfers the data to and from the cloud storage service, providing low latency access to the data through a local cache.
Hybrid cloud storage can be used to supplement an organization's internal storage resources, or it can be used as the primary storage infrastructure. In either case, hybrid cloud storage can provide organizations with greater flexibility and scalability than traditional on-premises storage infrastructure.
There are several benefits to using hybrid cloud storage, including the ability to cache frequently used data on-site for quick access, while inactive cold data is stored off-site in the cloud. This can save space, reduce storage costs and improve performance. Additionally, hybrid cloud storage can provide organizations with greater redundancy and fault tolerance, as data is stored in both on-premises and cloud storage infrastructure.
| Technology | Data storage and memory | null |
17833105 | https://en.wikipedia.org/wiki/Weak%20gravitational%20lensing | Weak gravitational lensing | While the presence of any mass bends the path of light passing near it, this effect rarely produces the giant arcs and multiple images associated with strong gravitational lensing. Most lines of sight in the universe are thoroughly in the weak lensing regime, in which the deflection is impossible to detect in a single background source. However, even in these cases, the presence of the foreground mass can be detected, by way of a systematic alignment of background sources around the lensing mass. Weak gravitational lensing is thus an intrinsically statistical measurement, but it provides a way to measure the masses of astronomical objects without requiring assumptions about their composition or dynamical state.
Methodology
Gravitational lensing acts as a coordinate transformation that distorts the images of background objects (usually galaxies) near a foreground mass. The transformation can be split into two terms, the convergence and shear. The convergence term magnifies the background objects by increasing their size, and the shear term stretches them tangentially around the foreground mass.
To measure this tangential alignment, it is necessary to measure the ellipticities of the background galaxies and construct a statistical estimate of their systematic alignment. The fundamental problem is that galaxies are not intrinsically circular, so their measured ellipticity is a combination of their intrinsic ellipticity and the gravitational lensing shear. Typically, the intrinsic ellipticity is much greater than the shear (by a factor of 3-300, depending on the foreground mass). The measurements of many background galaxies must be combined to average down this "shape noise". The orientation of intrinsic ellipticities of galaxies should be almost entirely random, so any systematic alignment between multiple galaxies can generally be assumed to be caused by lensing.
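As an illustrative sketch (not from the original article), the statistical estimate described above is often implemented by averaging the tangential component of galaxy ellipticities in radial bins around the lens; the arrays, positions, and bin edges below are hypothetical placeholders.

```python
# Minimal sketch: estimating tangential shear by averaging galaxy ellipticities
# in annuli around a foreground lens. Ellipticity components e1, e2 and galaxy
# positions are assumed to be given; all inputs here are hypothetical.
import numpy as np

def tangential_shear_profile(x, y, e1, e2, x_lens, y_lens, r_bins):
    dx, dy = x - x_lens, y - y_lens
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)                               # position angle about the lens
    e_t = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))   # tangential ellipticity component

    profile = []
    for r_lo, r_hi in zip(r_bins[:-1], r_bins[1:]):
        mask = (r >= r_lo) & (r < r_hi)
        # Averaging many galaxies in each annulus beats down the intrinsic "shape noise".
        profile.append(e_t[mask].mean() if mask.any() else np.nan)
    return np.array(profile)
```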
Another major challenge for weak lensing is correction for the point spread function (PSF) due to instrumental and atmospheric effects, which causes the observed images to be smeared relative to the "true sky". This smearing tends to make small objects more round, destroying some of the information about their true ellipticity. As a further complication, the PSF typically adds a small level of ellipticity to objects in the image, which is not at all random, and can in fact mimic a true lensing signal. Even for the most modern telescopes, this effect is usually at least the same order of magnitude as the gravitational lensing shear, and is often much larger. Correcting for the PSF requires building a model of how it varies across the telescope's field of view. Stars in our own galaxy provide a direct measurement of the PSF, and these can be used to construct such a model, usually by interpolating between the points where stars appear on the image. This model can then be used to reconstruct the "true" ellipticities from the smeared ones. Ground-based and space-based data typically undergo distinct reduction procedures due to the differences in instruments and observing conditions.
Angular diameter distances to the lenses and background sources are important for converting the lensing observables to physically meaningful quantities. These distances are often estimated using photometric redshifts when spectroscopic redshifts are unavailable. Redshift information is also important in separating the background source population from other galaxies in the foreground, or those associated with the mass responsible for the lensing. With no redshift information, the foreground and background populations can be split by an apparent magnitude or a color cut, but this is much less accurate.
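As a brief, hedged sketch (not from the original article), the distances mentioned above enter through the critical surface density Σ_cr = c² D_s / (4πG D_l D_ls), which sets the conversion from measured shear to a physical mass surface density. The redshifts below are arbitrary examples and the Planck18 cosmology is simply astropy's built-in default choice.

```python
# Minimal sketch: critical surface density for a lens/source redshift pair,
# using angular diameter distances from an assumed cosmology (astropy's Planck18).
import numpy as np
from astropy.cosmology import Planck18 as cosmo
from astropy import constants as const
from astropy import units as u

def sigma_crit(z_lens, z_source):
    d_l = cosmo.angular_diameter_distance(z_lens)
    d_s = cosmo.angular_diameter_distance(z_source)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)
    sigma = const.c**2 * d_s / (4 * np.pi * const.G * d_l * d_ls)
    return sigma.to(u.Msun / u.Mpc**2)

print(sigma_crit(0.2, 1.0))   # arbitrary example redshifts
```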
Weak lensing by clusters of galaxies
Galaxy clusters are the largest gravitationally bound structures in the Universe with approximately 80% of cluster content in the form of dark matter. The gravitational fields of these clusters deflect light-rays traveling near them. As seen from Earth, this effect can cause dramatic distortions of a background source object detectable by eye such as multiple images, arcs, and rings (cluster strong lensing). More generally, the effect causes small, but statistically coherent, distortions of background sources on the order of 10% (cluster weak lensing). Abell 1689, CL0024+17, and the Bullet Cluster are among the most prominent examples of lensing clusters.
History
The effects of cluster strong lensing were first detected by Roger Lynds of the National Optical Astronomy Observatories and Vahe Petrosian of Stanford University who discovered giant luminous arcs in a survey of galaxy clusters in the late 1970s. Lynds and Petrosian published their findings in 1986 without knowing the origin of the arcs. In 1987, Genevieve Soucail of the Toulouse Observatory and her collaborators presented data of a blue ring-like structure in Abell 370 and proposed a gravitational lensing interpretation. The first cluster weak lensing analysis was conducted in 1990 by J. Anthony Tyson of Bell Laboratories and collaborators. Tyson et al. detected a coherent alignment of the ellipticities of the faint blue galaxies behind both Abell 1689 and CL 1409+524. Lensing has been used as a tool to investigate a tiny fraction of the thousands of known galaxy clusters.
Historically, lensing analyses were conducted on galaxy clusters detected via their baryon content (e.g. from optical or X-ray surveys). The sample of galaxy clusters studied with lensing was thus subject to various selection effects; for example, only the most luminous clusters were investigated. In 2006, David Wittman of the University of California at Davis and collaborators published the first sample of galaxy clusters detected via their lensing signals, completely independent of their baryon content. Clusters discovered through lensing are subject to mass selection effects because the more massive clusters produce lensing signals with higher signal-to-noise ratio.
Observational products
The projected mass density can be recovered from the measurement of the ellipticities of the lensed background galaxies through techniques that can be classified into two types: direct reconstruction and inversion. However, a mass distribution reconstructed without knowledge of the magnification suffers from a limitation known as the mass sheet degeneracy, where the cluster surface mass density κ can be determined only up to a transformation κ → λκ + (1 − λ), where λ is an arbitrary constant. This degeneracy can be broken if an independent measurement of the magnification is available because the magnification is not invariant under the aforementioned degeneracy transformation.
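In the standard lensing formalism, galaxy shapes constrain only the reduced shear g = γ/(1 − κ), which is unchanged when κ → λκ + (1 − λ) and γ → λγ, whereas the magnification μ = 1/((1 − κ)² − γ²) is not. The following toy check, with arbitrary illustrative values, makes this explicit.

```python
import numpy as np

kappa = np.array([0.30, 0.10, 0.05])   # toy convergence values
gamma = np.array([0.12, 0.06, 0.02])   # toy shear values
lam = 0.7                               # arbitrary mass-sheet parameter

kappa_p = lam * kappa + (1.0 - lam)     # transformed convergence
gamma_p = lam * gamma                   # transformed shear

g   = gamma   / (1.0 - kappa)           # reduced shear (what galaxy shapes measure)
g_p = gamma_p / (1.0 - kappa_p)
print(np.allclose(g, g_p))              # True: shapes alone cannot break the degeneracy

mu   = 1.0 / ((1.0 - kappa)**2   - gamma**2)    # magnification
mu_p = 1.0 / ((1.0 - kappa_p)**2 - gamma_p**2)
print(np.allclose(mu, mu_p))            # False: magnification breaks the degeneracy
```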
Given a centroid for the cluster, which can be determined by using a reconstructed mass distribution or optical or X-ray data, a model can be fit to the shear profile as a function of clustrocentric radius. For example, the singular isothermal sphere (SIS) profile and the Navarro-Frenk-White (NFW) profile are two commonly used parametric models. Knowledge of the lensing cluster redshift and the redshift distribution of the background galaxies is also necessary for estimation of the mass and size from a model fit; these redshifts can be measured precisely using spectroscopy or estimated using photometry. Individual mass estimates from weak lensing can only be derived for the most massive clusters, and the accuracy of these mass estimates are limited by projections along the line of sight.
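As an illustration of such a model fit, the singular isothermal sphere predicts a particularly simple tangential shear profile, γ_t(θ) = θ_E/(2θ), where θ_E is the Einstein radius. The sketch below fits this one-parameter profile to hypothetical binned shear measurements with scipy; a real analysis would more commonly use an NFW profile and fold in the redshift-dependent lensing efficiency.

```python
import numpy as np
from scipy.optimize import curve_fit

def sis_shear(theta, theta_e):
    """Tangential shear of a singular isothermal sphere: gamma_t = theta_E / (2 theta)."""
    return theta_e / (2.0 * theta)

# Hypothetical binned measurements: radii in arcmin, mean tangential shear, and errors.
theta   = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
gamma_t = np.array([0.105, 0.048, 0.027, 0.012, 0.007])
sigma   = np.array([0.020, 0.015, 0.010, 0.008, 0.006])

popt, pcov = curve_fit(sis_shear, theta, gamma_t, sigma=sigma, absolute_sigma=True, p0=[0.1])
theta_e, theta_e_err = popt[0], np.sqrt(pcov[0, 0])
print(f"Einstein radius: {theta_e:.3f} +/- {theta_e_err:.3f} arcmin")
# theta_E can then be converted into a velocity dispersion or mass, given the lens
# and source redshifts (i.e. the angular diameter distances).
```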
Scientific implications
Cluster mass estimates determined by lensing are valuable because the method requires no assumption about the dynamical state or star formation history of the cluster in question. Lensing mass maps can also potentially reveal "dark clusters," clusters containing overdense concentrations of dark matter but relatively insignificant amounts of baryonic matter. Comparison of the dark matter distribution mapped using lensing with the distribution of the baryons using optical and X-ray data reveals the interplay of the dark matter with the stellar and gas components. A notable example of such a joint analysis is the so-called Bullet Cluster. The Bullet Cluster data provide constraints on models relating light, gas, and dark matter distributions such as Modified Newtonian dynamics (MOND) and Λ-Cold Dark Matter (Λ-CDM).
In principle, since the number density of clusters as a function of mass and redshift is sensitive to the underlying cosmology, cluster counts derived from large weak lensing surveys should be able to constrain cosmological parameters. In practice, however, projections along the line of sight cause many false positives. Weak lensing can also be used to calibrate the mass-observable relation via a stacked weak lensing signal around an ensemble of clusters, although this relation is expected to have an intrinsic scatter. In order for lensing clusters to be a precision probe of cosmology in the future, the projection effects and the scatter in the lensing mass-observable relation need to be thoroughly characterized and modeled.
Galaxy-galaxy lensing
Galaxy-galaxy lensing is a specific type of weak (and occasionally strong) gravitational lensing, in which the foreground object responsible for distorting the shapes of background galaxies is itself an individual field galaxy (as opposed to a galaxy cluster or the large-scale structure of the cosmos). Of the three typical mass regimes in weak lensing, galaxy-galaxy lensing produces a "mid-range" signal (shear correlations of ~1%) that is weaker than the signal due to cluster lensing, but stronger than the signal due to cosmic shear.
History
J.A. Tyson and collaborators first postulated the concept of galaxy-galaxy lensing in 1984, though the observational results of their study were inconclusive. It was not until 1996 that evidence of such distortion was tentatively discovered, with the first statistically significant results not published until the year 2000. Since those initial discoveries, the construction of larger, high resolution telescopes and the advent of dedicated wide field galaxy surveys have greatly increased the observed number density of both background source and foreground lens galaxies, allowing for a much more robust statistical sample of galaxies, making the lensing signal much easier to detect. Today, measuring the shear signal due to galaxy-galaxy lensing is a widely used technique in observational astronomy and cosmology, often used in parallel with other measurements in determining physical characteristics of foreground galaxies.
Stacking
Much like in cluster-scale weak lensing, detection of a galaxy-galaxy shear signal requires one to measure the shapes of background source galaxies, and then look for statistical shape correlations (specifically, source galaxy shapes should be aligned tangentially, relative to the lens center.) In principle, this signal could be measured around any individual foreground lens. In practice, however, due to the relatively low mass of field lenses and the inherent randomness in intrinsic shape of background sources (the "shape noise"), the signal is impossible to measure on a galaxy-by-galaxy basis. However, by combining the signals of many individual lens measurements together (a technique known as "stacking"), the signal-to-noise ratio will improve, allowing one to determine a statistically significant signal, averaged over the entire lens set.
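A minimal version of the stacking procedure can be written as a loop over lenses that accumulates the tangential ellipticities of nearby sources into shared radial bins; the binned averages then form the stacked shear profile. The code below is a schematic with hypothetical catalogues: it reuses the tangential rotation from the earlier sketch and ignores weights, photometric-redshift cuts and other refinements that real measurements require.

```python
import numpy as np

def stacked_shear_profile(lenses, sources, r_bins):
    """Average tangential ellipticity of sources around many lenses, in radial bins.
    `lenses` is an array of (x, y) rows; `sources` is a tuple (x, y, e1, e2)."""
    sums = np.zeros(len(r_bins) - 1)
    counts = np.zeros(len(r_bins) - 1)
    xs, ys, e1, e2 = sources
    for xl, yl in lenses:
        dx, dy = xs - xl, ys - yl
        r = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)
        e_t = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))
        idx = np.digitize(r, r_bins) - 1                  # radial bin of each source
        valid = (idx >= 0) & (idx < len(r_bins) - 1)
        np.add.at(sums, idx[valid], e_t[valid])
        np.add.at(counts, idx[valid], 1)
    return np.where(counts > 0, sums / counts, np.nan)    # stacked <e_t>(r)

# Usage with purely hypothetical catalogues (no real signal here, just shape noise):
rng = np.random.default_rng(1)
lenses = rng.uniform(0, 10, (50, 2))
sources = (rng.uniform(0, 10, 5000), rng.uniform(0, 10, 5000),
           0.3 * rng.standard_normal(5000), 0.3 * rng.standard_normal(5000))
profile = stacked_shear_profile(lenses, sources, r_bins=np.linspace(0.1, 2.0, 6))
print(profile)
```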
Scientific applications
Galaxy-galaxy lensing (like all other types of gravitational lensing) is used to measure several quantities pertaining to mass:
Mass density profiles Using techniques similar to those in cluster-scale lensing, galaxy-galaxy lensing can provide information about the shape of mass density profiles, though these profiles correspond to galaxy-sized objects instead of larger clusters or groups. Given a high enough number density of background sources, a typical galaxy-galaxy mass density profile can cover a wide range of distances (from ~1 to ~100 effective radii). Since the effects of lensing are insensitive to the matter type, a galaxy-galaxy mass density profile can be used to probe a wide range of matter environments: from the central cores of galaxies where baryons dominate the total mass fraction, to the outer halos where dark matter is more prevalent.
Mass-to-light ratios Comparing the measured mass to the luminosity (averaged over the entire galaxy stack) in a specific filter, galaxy-galaxy lensing can also provide insight into the mass to light ratios of field galaxies. Specifically, the quantity measured through lensing is the total (or virial) mass to light ratio – again due to the insensitivity of lensing to matter type. Assuming that luminous matter can trace dark matter, this quantity is of particular importance, since measuring the ratio of luminous (baryonic) matter to total matter can provide information regarding the overall ratio of baryonic to dark matter in the universe.
Galaxy mass evolution Since the speed of light is finite, an observer on the Earth will see distant galaxies not as they look today, but rather as they appeared at some earlier time. By restricting the lens sample of a galaxy-galaxy lensing study to lie at only one particular redshift, it is possible to understand the mass properties of the field galaxies that existed during this earlier time. Comparing the results of several such redshift-restricted lensing studies (with each study encompassing a different redshift), one can begin to observe changes in the mass features of galaxies over a period of several epochs, leading towards a better understanding of the evolution of mass on the smallest cosmological scales.
Other mass trends Lens redshift is not the only quantity of interest that can be varied when studying mass differences between galaxy populations, and often there are several parameters used when segregating objects into galaxy-galaxy lens stacks. Two widely used criteria are galaxy color and morphology, which act as tracers of (among other things) stellar population, galaxy age, and local mass environment. By separating lens galaxies based on these properties, and then further segregating samples based on redshift, it is possible to use galaxy-galaxy lensing to see how several different types of galaxies evolve through time.
Cosmic shear
Gravitational lensing by large-scale structure also produces an observable pattern of alignments in background galaxies. This distortion is only ~0.1%-1% - much more subtle than cluster or galaxy-galaxy lensing. The thin lens approximation usually used in cluster and galaxy lensing does not always work in this regime, because structures can be elongated along the line of sight. Instead, the distortion can be derived by assuming that the deflection angle is always small (see Gravitational Lensing Formalism). As in the thin lens case, the effect can be written as a mapping from the unlensed angular position β to the lensed position θ. The Jacobian of the transform can be written as an integral over the gravitational potential Φ along the line of sight

\frac{\partial \beta_i}{\partial \theta_j} = \delta_{ij} + \int_0^{\chi_{\rm lim}} d\chi \, g(\chi) \, \frac{\partial^2 \Phi(\vec{x}(\chi))}{\partial x_i \, \partial x_j}

where χ is the comoving distance, x_i are the transverse distances, and

g(\chi) = 2\chi \int_\chi^{\chi_{\rm lim}} \left(1 - \frac{\chi}{\chi'}\right) n(\chi') \, d\chi'

is the lensing kernel, which defines the efficiency of lensing for a distribution of sources n(χ).
As in the thin-lens approximation, the Jacobian can be decomposed into shear and convergence terms.
Shear correlation functions
Because large-scale cosmological structures do not have a well-defined location, detecting cosmological gravitational lensing typically involves the computation of shear correlation functions, which measure the mean product of the shear at two points as a function of the distance between those points. Because there are two components of shear, three different correlation functions can be defined:
\xi_{tt}(\Delta\theta) = \langle \gamma_t \gamma_t \rangle(\Delta\theta), \qquad \xi_{\times\times}(\Delta\theta) = \langle \gamma_\times \gamma_\times \rangle(\Delta\theta), \qquad \xi_{t\times}(\Delta\theta) = \langle \gamma_t \gamma_\times \rangle(\Delta\theta)

where γ_t is the component along or perpendicular to Δθ, and γ_× is the component at 45°. These correlation functions are typically computed by averaging over many pairs of galaxies. The last correlation function, ξ_t×, is not affected at all by lensing, so measuring a value for this function that is inconsistent with zero is often interpreted as a sign of systematic error.
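In practice these statistics are estimated by pair counting: for every pair of galaxies at roughly the chosen separation, the shears are rotated into the frame defined by the line joining the pair and the products of the components are averaged. A brute-force sketch, O(N²) and so fine only for a toy catalogue (real surveys use tree codes), might look like the following; all inputs are hypothetical.

```python
import numpy as np

def shear_correlations(x, y, g1, g2, theta_min, theta_max):
    """Brute-force estimate of <g_t g_t>, <g_x g_x> and <g_t g_x> for pairs whose
    separation lies in [theta_min, theta_max)."""
    tt = xx = tx = 0.0
    npairs = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            sep = np.hypot(dx, dy)
            if not (theta_min <= sep < theta_max):
                continue
            phi = np.arctan2(dy, dx)              # angle of the line joining the pair
            c, s = np.cos(2 * phi), np.sin(2 * phi)
            # rotate both shears into the tangential/cross frame of the pair
            gt_i, gx_i = -(g1[i] * c + g2[i] * s), g1[i] * s - g2[i] * c
            gt_j, gx_j = -(g1[j] * c + g2[j] * s), g1[j] * s - g2[j] * c
            tt += gt_i * gt_j
            xx += gx_i * gx_j
            tx += 0.5 * (gt_i * gx_j + gx_i * gt_j)
            npairs += 1
    return tt / npairs, xx / npairs, tx / npairs

# Toy usage: 500 random galaxies with purely random shapes, so all three averages ~ 0.
rng = np.random.default_rng(2)
x, y = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
g1, g2 = 0.3 * rng.standard_normal((2, 500))
print(shear_correlations(x, y, g1, g2, 0.05, 0.10))
```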
The functions ξ_tt and ξ_×× can be related to projections (integrals with certain weight functions) of the dark matter density correlation function, which can be predicted from theory for a cosmological model through its Fourier transform, the matter power spectrum.
Because they both depend on a single scalar density field, ξ_tt and ξ_×× are not independent, and they can be decomposed further into E-mode and B-mode correlation functions. In analogy with electric and magnetic fields, the E-mode field is curl-free and the B-mode field is divergence-free. Because gravitational lensing can only produce an E-mode field, the B-mode provides yet another test for systematic errors.
The E-mode correlation function is also known as the aperture mass variance, ⟨M_ap²⟩(θ). It is obtained by integrating the measured correlation functions ξ_tt and ξ_×× against a pair of filter functions, which can be written in terms of Bessel functions. An exact decomposition thus requires knowledge of the shear correlation functions at zero separation, but an approximate decomposition is fairly insensitive to these values because the filters are small near zero separation.
Weak lensing and cosmology
The ability of weak lensing to constrain the matter power spectrum makes it a potentially powerful probe of cosmological parameters, especially when combined with other observations such as the cosmic microwave background, supernovae, and galaxy surveys. Detecting the extremely faint cosmic shear signal requires averaging over many background galaxies, so surveys must be both deep and wide, and because these background galaxies are small, the image quality must be very good. Measuring the shear correlations at small scales also requires a high density of background objects (again requiring deep, high quality data), while measurements at large scales push for wider surveys.
While weak lensing of large-scale structure was discussed as early as 1967, due to the challenges mentioned above, it was not detected until more than 30 years later when large CCD cameras enabled surveys of the necessary size and quality. In 2000, four independent groups published the first detections of cosmic shear, and subsequent observations have started to put constraints on cosmological parameters (particularly the dark matter density Ω_m and the power spectrum amplitude σ_8) that are competitive with other cosmological probes.
For current and future surveys, one goal is to use the redshifts of the background galaxies (often approximated using photometric redshifts) to divide the survey into multiple redshift bins. The low-redshift bins will only be lensed by structures very near to us, while the high-redshift bins will be lensed by structures over a wide range of redshift. This technique, dubbed "cosmic tomography", makes it possible to map out the 3D distribution of mass. Because the third dimension involves not only distance but cosmic time, tomographic weak lensing is sensitive not only to the matter power spectrum today, but also to its evolution over the history of the universe, and the expansion history of the universe during that time. This is a much more valuable cosmological probe, and many proposed experiments to measure the properties of dark energy and dark matter have focused on weak lensing, such as the Dark Energy Survey, Pan-STARRS, and the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory.
Weak lensing also has an important effect on the Cosmic Microwave Background and diffuse 21cm line radiation. Even though there are no distinct resolved sources, perturbations on the source surface are sheared in a similar way to galaxy weak lensing, resulting in changes to the power spectrum and statistics of the observed signal. Since the source planes for the CMB and high-redshift diffuse 21 cm emission are at higher redshift than resolved galaxies, the lensing effect probes cosmology at higher redshifts than galaxy lensing.
Negative weak lensing
Minimal coupling of general relativity with scalar fields allows solutions like traversable wormholes stabilized by exotic matter of negative energy density. Moreover, Modified Newtonian Dynamics as well as some bimetric theories of gravity consider invisible negative mass in cosmology as an alternative interpretation to dark matter, which classically has a positive mass.
As the presence of exotic matter would bend spacetime and light differently than positive mass, a Japanese team at the Hirosaki University proposed to use "negative" weak gravitational lensing related to such negative mass.
Instead of running statistical analysis on the distortion of galaxies based on the assumption of a positive weak lensing that usually reveals locations of positive mass "dark clusters", these researchers propose to locate "negative mass clumps" using negative weak lensing, i.e. where the deformation of galaxies is interpreted as being due to a diverging lensing effect producing radial distortions (similar to a concave lens, instead of the classical azimuthal distortions of convex lenses, as in the image produced by a fisheye). Such negative mass clumps would be located elsewhere than assumed dark clusters, as they would reside at the centres of observed cosmic voids located between galaxy filaments within the lacunar, web-like large-scale structure of the universe. Such a test based on negative weak lensing could help to falsify cosmological models proposing exotic matter of negative mass as an alternative interpretation to dark matter.
| Physical sciences | Basics_2 | Astronomy |
17842246 | https://en.wikipedia.org/wiki/Fold%20mountains | Fold mountains | Fold mountains are formed by the effects of folding on layers within the upper part of the Earth's crust. Before the development of the theory of plate tectonics and before the internal architecture of thrust belts became well understood, the term was used to describe most mountain belts but has otherwise fallen out of use.
Formation
Fold mountains form in areas of thrust tectonics, such as where two tectonic plates move towards each other at a convergent plate boundary. When plates and the continents riding on them collide or undergo subduction (that is – ride one over another), the accumulated layers of rock may crumple and fold like a tablecloth that is pushed across a table, particularly if there is a mechanically weak layer such as salt. Since the less dense continental crust "floats" on the denser mantle rocks beneath, the weight of any crustal material forced upward to form hills, plateaus or mountains must be balanced by the buoyancy force of a much greater volume forced downward into the mantle. Thus the continental crust is normally much thicker under mountains, compared to lower-lying areas. Rock can fold either symmetrically or asymmetrically. The upfolds are anticlines and the downfolds are synclines. Severely folded and faulted rocks are called nappes. In asymmetric folding there may also be recumbent and overturned folds. Mountains formed in this way are usually greater in length than in breadth.
Examples
The Jura mountains – A series of sub-parallel mountainous ridges that formed by folding over a Triassic evaporite decollement due to thrust movements in the foreland of the Alps
The 'Simply Folded Belt' of the Zagros Mountains – A series of elongated anticlinal domes, mostly formed as detachment folds over underlying thrusts in the foreland of the Zagros collisional belt, generally above a basal decollement that formed in evaporite of the late Neoproterozoic to Early Cambrian Hormuz Formation
The Akwapim-Togo ranges in Ghana
The Ridge-and-Valley Appalachians in the eastern part of the United States.
The Ouachita Mountains of Arkansas and Oklahoma.
| Physical sciences | Montane landforms | Earth science |
14952458 | https://en.wikipedia.org/wiki/Flag%20semaphore | Flag semaphore | Flag semaphore (from the Ancient Greek σῆμα (sêma) 'sign' and -φόρος (-phóros) '-bearer') is a semaphore system conveying information at a distance by means of visual signals with hand-held flags, rods, disks, paddles, or occasionally bare or gloved hands. Information is encoded by the position of the flags; it is read when the flag is in a fixed position. Semaphores were adopted and widely used (with hand-held flags replacing the mechanical arms of shutter semaphores) in the maritime world in the 19th century. It is still used during underway replenishment at sea and is acceptable for emergency communication in daylight or, using lighted wands instead of flags, at night.
Contemporary semaphore flag system
The current flag semaphore system uses two short poles with square flags, which a signal person holds in different positions to signal letters of the alphabet and numbers. The signaller holds one pole in each hand, and extends each arm in one of eight possible directions. Except in the rest position, the flags do not overlap. The flags are colored differently based on whether the signals are sent by sea or by land. At sea, the flags are colored red and yellow (the Oscar flag), while on land, they are white and blue (the Papa flag). Flags are not required; their purpose is to make the characters more obvious.
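As a rough illustration of how two arms in eight directions can encode an alphabet, the snippet below simply enumerates the distinct non-overlapping configurations; the mapping of particular positions to particular letters is deliberately omitted, since the actual assignments follow the standard semaphore chart rather than anything derivable from combinatorics alone.

```python
from itertools import combinations

# Eight arm directions, one per 45 degrees, as used in flag semaphore.
DIRECTIONS = ["down", "down-left", "left", "up-left",
              "up", "up-right", "right", "down-right"]

# A static signal is a pair of distinct directions, since the flags
# do not overlap except in the rest position.
configurations = list(combinations(DIRECTIONS, 2))
print(len(configurations))   # 28 distinct two-flag positions:
# enough for the 26 letters plus a couple of special signs.
```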
Characters
The following 30 semaphore characters are presented as they would appear when facing the signalperson:
Numbers can be signaled by first signaling "Numerals". Letters can be signaled by first signaling "J".
The sender uses the "Attention" signal to request permission to begin a transmission. The receiver uses a "Ready to receive" signal not shown above to grant permission to begin the transmission. The receiver raises both flags vertical overhead and then drops them to the rest position, once only, to grant permission to send. The sender ends the transmission with the "Ready to receive" signal. The receiver can reply with the "Attention" signal. At this point, sender and receiver change places.
Origin
Flag semaphore originated in 1866 as a handheld version of the optical telegraph system of Home Riggs Popham used on land, and its later improvement by Charles Pasley. The land system consisted of lines of fixed stations (substantial buildings) with two large, moveable arms pivoted on an upright member. Such a system was inconvenient to install on board a ship. Flag semaphore provided an easy method of communicating ship-to-ship or ship-to-shore when the distances were not too great. According to Alexander J. Field of Santa Clara University, "there is evidence" that Popham based his telegraph on the French coastal stations used for ship-to-shore communication. Many of the codepoints of flag semaphore match those of the Foy-Breguet electrical telegraph, also descended from the French optical telegraph. Although based on the optical telegraph, by the time flag semaphore was introduced the optical telegraph had been entirely replaced by the electrical telegraph some years previously.
Japanese semaphore
The Japanese merchant marine and armed services have adapted the flag semaphore system to the Japanese language. Because their writing system involves a syllabary of about twice the number of characters in the Latin alphabet, most characters take two displays of the flags to complete; others need three and a few only one. The flags are specified as a solid white rectangle for the left hand and a solid red one for the right. The display motions chosen are not like the "rotary dial" system used for the Latin alphabet letters and numbers; rather, the displays represent the angles of the brush strokes used in writing in the katakana syllabary and in the order drawn. For example, the character for "O" [オ], which is drawn first with a horizontal line from left to right, then a vertical one from top to bottom, and finally a slant between the two; follows that form and order of the arm extensions. It is the right arm, holding the red flag, which moves as a pen would, but in mirror image so that the observer sees the pattern normally. As in telegraphy, the katakana syllabary is the one used to write down the messages as they are received. Also, the Japanese system presents the number 0 by moving flags in a circle, and those from 1 through 9 using a sort of the "rotary dial" system, but different from that used for European languages.
Practical use in communication
Semaphore flags are also sometimes used as means of communication in the mountains where oral or electronic communication is difficult to perform. Although they do not carry flags, the Royal Canadian Mounted Police officers have used hand semaphore in this manner. Some surf-side rescue companies, such as the Ocean City, Maryland Beach Patrol, use semaphore flags to communicate between lifeguards. The letters of the flag semaphore are also a common artistic motif. One enduring example is the peace symbol, adopted by the Campaign for Nuclear Disarmament in 1958 from the original logo created by a commercial artist named Gerald Holtom from Twickenham, London, using the semaphore for N and D. Holtom designed the logo for use on a protest march on the Atomic Weapons Establishment at Aldermaston, near Newbury, England. On 4 April 1958, the march left Trafalgar Square for rural Berkshire, carrying Ban the Bomb placards made by Holtom's children making it the first use of the symbol. Originally, it was purple and white and signified a combination of the semaphoric letters N and D, standing for "nuclear disarmament", circumscribed by a circle.
Along with Morse code, flag semaphore is currently used by the US Navy and also continues to be a subject of study and training for young people of Scouts. In a satirical nod to the flag semaphore's enduring use into the age of the Internet, on April Fools' Day 2007 the Internet Engineering Task Force standards organization outlined the Semaphore Flag Signaling System, a method of transmitting Internet traffic via a chain of flag semaphore operators.
Use in popular culture
The album cover for the Beatles' 1965 album Help! was originally to have portrayed the four band members spelling "help" in semaphore, but the result was deemed aesthetically unpleasing, and their arms were instead positioned in a meaningless but aesthetically pleasing arrangement.
In the 1960s poet Hannah Weiner composed poems using flag semaphore and the International Code of Signals, including a version of William Shakespeare's Romeo and Juliet titled "R+J." In 1968, these works were performed by off-duty U.S. Coast Guard signalers in Central Park.
The second episode in the second series of Monty Python's Flying Circus depicted the Emily Brontë novel Wuthering Heights enacted in semaphore.
The Swallows and Amazons series by Arthur Ransome has the characters using flag semaphore to exchange messages, both live and as concealed messages in drawings (many of which are included in the books as illustrations) with the complete semaphore alphabet included as an illustration in both Winter Holiday and Secret Water.
| Technology | Telecommunications | null |
4374425 | https://en.wikipedia.org/wiki/Optical%20pumping | Optical pumping | Optical pumping is a process in which light is used to raise (or "pump") electrons from a lower energy level in an atom or molecule to a higher one. It is commonly used in laser construction to pump the active laser medium so as to achieve population inversion. The technique was developed by the 1966 Nobel Prize winner Alfred Kastler in the early 1950s.
Optical pumping is also used to cyclically pump electrons bound within an atom or molecule to a well-defined quantum state. For the simplest case of coherent two-level optical pumping of an atomic species containing a single outer-shell electron, this means that the electron is coherently pumped to a single hyperfine sublevel (labeled m_F), which is defined by the polarization of the pump laser along with the quantum selection rules. Upon optical pumping, the atom is said to be oriented in a specific sublevel; however, due to the cyclic nature of optical pumping, the bound electron will actually be undergoing repeated excitation and decay between the upper and lower state sublevels. The frequency and polarization of the pump laser determine the sublevel in which the atom is oriented.
In practice, completely coherent optical pumping may not occur due to power-broadening of the linewidth of a transition and undesirable effects such as hyperfine structure trapping and radiation trapping. Therefore the orientation of the atom depends more generally on the frequency, intensity, polarization, and spectral bandwidth of the laser as well as the linewidth and transition probability of the absorbing transition.
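The cyclic pumping described above can be caricatured with a simple rate-equation model. The sketch below is a toy, not a treatment of any particular atom: it follows the ground-state Zeeman populations of an F = 2 level driven by σ+ light on an F → F' = F transition, where each absorption raises m by one, spontaneous decay redistributes it by −1, 0 or +1 with equal (assumed) branching ratios, and population accumulates in the m = +F sublevel, which the σ+ light can no longer excite.

```python
import numpy as np

F = 2
m_values = np.arange(-F, F + 1)                      # ground-state sublevels m = -2 ... +2
pop = np.full(len(m_values), 1.0 / len(m_values))    # start with an unpolarized sample

def pump_step(pop, rate=0.2):
    """One coarse-grained pumping cycle: sigma+ absorption (m -> m' = m + 1, impossible
    from m = +F) followed by decay back to m' - 1, m' or m' + 1 with equal toy branching."""
    new = pop.copy()
    for i, m in enumerate(m_values):
        if m == F:
            continue                                  # m = +F is dark for sigma+ light
        excited = rate * pop[i]                       # fraction excited this cycle
        new[i] -= excited
        m_exc = m + 1                                 # excited-state sublevel reached
        allowed = [m_exc + dm for dm in (-1, 0, 1) if -F <= m_exc + dm <= F]
        for m_final in allowed:                       # spontaneous decay back to the ground state
            new[m_final + F] += excited / len(allowed)
    return new

for _ in range(200):
    pop = pump_step(pop)
print(dict(zip(m_values.tolist(), np.round(pop, 3))))
# After many cycles nearly all population sits in m = +F: the sample is oriented.
```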
An optical pumping experiment is commonly found in physics undergraduate laboratories, using rubidium gas isotopes and displaying the ability of radiofrequency (MHz) electromagnetic radiation to effectively pump and unpump these isotopes.
| Physical sciences | Atomic physics | Physics |
4375088 | https://en.wikipedia.org/wiki/Stegosauria | Stegosauria | Stegosauria is a group of herbivorous ornithischian dinosaurs that lived during the Jurassic and early Cretaceous periods. Stegosaurian fossils have been found mostly in the Northern Hemisphere (North America, Europe and Asia), Africa and possibly South America. Their geographical origins are unclear; the earliest unequivocal stegosaurian, Bashanosaurus primitivus, was found in the Bathonian Shaximiao Formation of China.
Stegosaurians were armored dinosaurs (thyreophorans). Originally, they did not differ much from more primitive members of that group, being small, low-slung, running animals protected by armored scutes. An early evolutionary innovation was the development of spikes as defensive weapons. Later species, belonging to a subgroup called the Stegosauridae, became larger, and developed long hindlimbs that no longer allowed them to run. This increased the importance of active defence by the thagomizer, which could ward off even large predators because the tail was in a higher position, pointing horizontally to the rear from the broad pelvis. Stegosaurids had complex arrays of spikes and plates running along their backs, hips and tails.
Stegosauria includes two families, the primitive Huayangosauridae and the more derived Stegosauridae. The stegosaurids like all other stegosaurians were quadrupedal herbivores that exhibited the characteristic stegosaurian dorsal dermal plates. These large, thin, erect plates are thought to be aligned parasagittally from the neck to near the end of the tail. The end of the tail has pairs of spikes, sometimes referred to as a thagomizer. Although defense, thermo-regulation and display have been theorized to be the possible functions of these dorsal plates, a study of the ontogenetic histology of the plates and spikes suggests that the plates serve different functions at different stages of the stegosaurids' life histories. The terminal spikes in the tail are thought to have been used in old adults, at least, as a weapon for defence. However, the function of stegosaurid plates and spikes, at different life stages, still remains a matter of great debate.
The first stegosaurian finds in the early 19th century were fragmentary. Better fossil material, of the genus Dacentrurus, was discovered in 1874 in England. Soon after, in 1877, the first nearly-complete skeleton was discovered in the United States. Professor Othniel Charles Marsh that year classified such specimens in the new genus Stegosaurus, from which the group acquired its name, and which is still by far the most famous stegosaurian. During the latter half of the twentieth century, many important Chinese finds were made, representing about half of the presently known diversity of stegosaurians.
History of research
The first known discovery of a possible stegosaurian was probably made in the early nineteenth century in England. It consisted of a lower jaw fragment and was in 1848 named Regnosaurus. In 1845, in the area of the present state of South Africa, remains were discovered that much later would be named Paranthodon. In 1874, other remains from England were named Craterosaurus. All three taxa were based on fragmentary material and were not recognised as possible stegosaurians until the twentieth century. They gave no reason to suspect the existence of a new distinctive group of dinosaurs.
In 1874, extensive remains of what was clearly a large herbivore equipped with spikes were uncovered in England; the first partial stegosaurian skeleton known. They were named Omosaurus by Richard Owen in 1875. Later, this name was shown to be preoccupied by the phytosaur Omosaurus and the stegosaurian was renamed Dacentrurus. Other English nineteenth century and early twentieth century finds would be assigned to Omosaurus; later they would, together with French fossils, be partly renamed Lexovisaurus and Loricatosaurus.
In 1877, Arthur Lakes, a fossil hunter working for Professor Othniel Charles Marsh, in Wyoming excavated a fossil that Marsh the same year named Stegosaurus. At first, Marsh still entertained some incorrect notions about its morphology. He assumed that the plates formed a flat skin cover — hence the name, meaning "roof saurian" — and that the animal was bipedal with the spikes sticking out sideways from the rear of the skull. A succession of additional discoveries from the Como Bluff sites allowed a quick update of the presumed build. In 1882, Marsh was able to publish the first skeletal reconstruction of a stegosaur. Hereby, stegosaurians became much better known to the general public. The American finds at the time represented the bulk of known stegosaurian fossils, with about twenty skeletons collected.
The next important discovery was made when a German expedition to the Tendaguru, then part of German East Africa, from 1909 to 1912 excavated over a thousand bones of Kentrosaurus. The finds increased the known variability of the group, Kentrosaurus being rather small and having long rows of spikes on the hip and tail.
From the 1950s onwards, the geology of China was systematically surveyed in detail and infrastructural works led to a vast increase of digging activities in that country. This resulted in a new wave of Chinese stegosaurian discoveries, starting with Chialingosaurus in 1957. Chinese finds of the 1970s and 1980s included Wuerhosaurus, Tuojiangosaurus, Chungkingosaurus, Huayangosaurus, Yingshanosaurus and Gigantspinosaurus. This increased the age range of good fossil stegosaurian material, as they represented the first relatively complete skeletons from the Middle Jurassic and the Early Cretaceous. Especially important was Huayangosaurus, which provided unique information about the early evolution of the group.
Towards the end of the twentieth century, the so-called Dinosaur Renaissance took place in which a vast increase in scientific attention was given to the Dinosauria. In 2007, Jiangjunosaurus was reported, the first Chinese dinosaur named since 1994. Nevertheless, European and North-American sites have become productive again during the 1990s, Miragaia having been found in the Lourinhã Formation in Portugal and a number of relatively complete Hesperosaurus skeletons having been excavated in Wyoming. Apart from the fossils per se, important new insights have been gained by applying the method of cladistics, allowing for the first time to exactly calculate stegosaurian evolutionary relationships.
Description
Stegosaurids are distinguished from other stegosaurians in that the former have lost the plesiomorphic pre-maxillary teeth and lateral scute rows along the trunk. Furthermore, stegosaurids have long narrow skulls and longer hindlimbs compared to their forelimbs. However, these two features are not diagnostic of Stegosauridae because they may also be present in non-stegosaurid stegosaurians.
Skull
Stegosaurians had characteristic small, long, flat, narrow heads and a horn-covered beak or rhamphotheca, which covered the front of the snout (two premaxillaries) and lower jaw (a single predentary) bones. Similar structures are seen in turtles and birds. Apart from Huayangosaurus, stegosaurians subsequently lost all premaxillary teeth within the upper beak. Huayangosaurus still had seven per side. The upper and lower jaws are equipped with rows of small teeth. Later species have a vertical bone plate covering the outer side of the lower jaw teeth. The structure of the upper jaw, with a low ridge above, and running parallel to, the tooth row, indicates the presence of a fleshy cheek. In stegosaurians, the typical archosaurian skull opening, the antorbital fenestra in front of the eye socket, is small, sometimes reduced to a narrow horizontal slit. In general, stegosaurids have proportionally long, low and narrow snouts with a deep mandible, compared to that of Huayangosaurus. Stegosaurids also lack premaxillary teeth.
Postcranial skeleton
All stegosaurians are quadrupedal, with hoof-like toes on all four limbs. All stegosaurians after Huayangosaurus have forelimbs much shorter than their hindlimbs. Their hindlimbs are long and straight, designed to carry the weight of the animal while stepping. The condyles of the lower thighbone are short from the front to the rear. This would have limited the supported rotation of the knee joint, making running impossible. Huayangosaurus had a thighbone like a running animal. The upper leg was always longer than the lower leg.
Huayangosaurus had relatively long and slender arms. The forelimbs of later forms are very robust, with a massive humerus and ulna. The wrist bones were reinforced by a fusion into two blocks, an ulnar and a radial. The front feet of stegosaurians are commonly depicted in art and in museum displays with fingers splayed out and slanted downward. However, in this position, most bones in the hand would be disarticulated. In reality, the hand bones of stegosaurians were arranged into vertical columns, with the main fingers, orientated outwards, forming a tube-like structure. This is similar to the hands of sauropod dinosaurs, and is also supported by evidence from stegosaurian footprints and fossils found in a lifelike pose.
The long hindlimbs elevated the tail base, such that the tail pointed out behind the animal almost horizontally from that high position. While walking, the tail would not have sloped downwards as this would have impeded the function of the tail base retractor muscles, to pull the thighbones backwards. However, it has been suggested by Robert Thomas Bakker that stegosaurians could rear on their hind legs to reach higher layers of plants, the tail then being used as a "third leg". The mobility of the tail was increased by a reduction or absence of ossified tendons, that with many Ornithischia stiffen the hip region. Huayangosaurus still possessed them. In species that had short forelimbs, the relatively short torso towards the front curved strongly downwards. The dorsal vertebrae typically were very high, with very tall neural arches and transverse processes pointing obliquely upwards to almost the level of the neural spine top. Stegosaurian back vertebrae can easily be identified by this unique configuration. The tall neural arches often house deep neural canals; enlarged canals in the sacral vertebrae have given rise to the incorrect notion of a "second brain". Despite the downwards curvature of the rump, the neck base was not very low and the head was held a considerable distance off the ground. The neck was flexible and moderately long. Huayangosaurus still had the probably original number of nine cervical vertebrae; Miragaia has an elongated neck with seventeen.
The stegosaurian shoulder girdle was very robust. In Huayangosaurus, the acromion, a process on the lower front edge of the shoulderblade, was moderately developed; the coracoid was about as wide as the lower end of the scapula, with which it formed the shoulder joint. Later forms tend to have a strongly expanded acromion, while the coracoid, largely attached to the acromion, no longer extends to the rear lower corner of the scapula.
The stegosaurian pelvis was originally moderately large, as shown by Huayangosaurus. Later species, however, convergently with the Ankylosauria, developed very broad pelves, in which the iliac bones formed wide horizontal plates with flaring front blades to allow for an enormous belly-gut. The ilia were attached to the sacral vertebrae via a sacral yoke formed by fused sacral ribs. Huayangosaurus still had rather long and obliquely oriented ischia and pubic bones. In more derived species, these became more horizontal and shorter to the rear, while the front prepubic process lengthened.
Armor and ornamentation
Like all Thyreophora, stegosaurians were protected by bony scutes that were not part of the skeleton proper but skin ossifications instead: the so-called osteoderms. Huayangosaurus had several types. On its neck, back, and tail were two rows of paired small vertical plates and spikes. The very tail end bore a small club. Each flank had a row of smaller osteoderms, culminating in a long shoulder spine in front, curving to the rear. Later forms show very variable configurations, combining plates of various shape and size on the neck and front torso with spikes more to the rear of the animal. They seem to have lost the tail club and the flank rows are apparently absent also, with the exception of the shoulder spine, still shown by Kentrosaurus and extremely developed, as its name indicates, in Gigantspinosaurus. As far as is known, all forms possessed some sort of thagomizer, though these are rarely preserved in articulation, which makes it difficult to establish the exact arrangement. A fossil of Chungkingosaurus sp. has been reported with three pairs of spikes pointing outwards and a fourth pair pointing to the rear. The most derived species, like Stegosaurus, Hesperosaurus and Wuerhosaurus, have very large and flat back plates. Stegosaurid plates have a thick base and central portion, but are transversely thin elsewhere. The plates become remarkably large and thin in Stegosaurus. They are found in varying sizes along the dorsum, with the central region of the back usually having the largest and tallest plates. The arrangement of these parasagittal dorsal plates has been intensely debated in the past. Discoverer Othniel Charles Marsh suggested a single median row of plates running post-cranially along the longitudinal axis and Lull argued in favour of a bilaterally paired arrangement throughout the series. Current scientific consensus lies in the arrangement proposed by Gilmore - two parasagittal rows of staggered alternates, after the discovery of an almost complete skeleton preserved in this manner in rock. Furthermore, no two plates share the same size and shape, making the possibility of bilaterally paired rows even less likely. Plates are usually found with distinct vascular grooves on their lateral surfaces, suggesting the presence of a circulatory network. Stegosaurids also have osteoderms on the throat in the form of small depressed ossicles and two pairs of elongated spike-like tail-spines. In Stegosaurus fossils, ossicles have also been found in the throat region: bony skin discs that protected the lower neck.
Many basal stegosaurs like Gigantspinosaurus and Huayangosaurus have been discovered with parascapular spines, or spines emerging from the shoulder region. Among stegosaurids, only Kentrosaurus has been found with parascapular spines, which project posteriorly out of the lower part of the shoulder plates. These spines are long, rounded and comma-shaped in lateral view and have an enlarged base. Loricatosaurus was also believed to have a parascapular spine, but Maidment et al. (2008) observed that the discovered specimen, from which the spine is described, has a completely different morphology than the parascapular spine specimens of other stegosaurs. They suggest it may be a fragmentary tail spine instead. Stegosaurids also lack the lateral scute rows that run longitudinally on either side of the trunk in Huayangosaurus and ankylosaurs, indicating yet another secondary loss of a plesiomorphic character. However, the absence of lateral scutes, as well as of the pre-maxillary teeth mentioned above, is not specifically diagnostic of stegosaurids, since these features are also present in some other stegosaurians, whose phylogenetic relationships are unclear.
The discovery of an impression of the skin covering the dorsal plates has implications for all possible functions of stegosaurian plates. Christiansen and Tschopp (2010) found that the skin was smooth with long, parallel, shallow grooves indicating a keratinous structure covering the plates. The addition of beta-keratin, a strong protein, would indeed allow the plates to bear more weight, suggesting they may have been used for active defense. A keratinous covering would also allow greater surface area for the plates to be used as mating display structures, which could potentially be coloured like the beaks of modern birds. At the same time this finding implies that the use of plates for thermo-regulation may be less likely because the keratinous covering would make heat transfer from the bone highly ineffective.
Classification
In 1877, Othniel Marsh discovered and named Stegosaurus armatus, from which the name of the family 'Stegosauridae' was erected in 1880. In comparison to basal stegosaurians, notable synapomorphies of Stegosauridae include a large antitrochanter (supracetabular process) in the ilium, a long prepubic process and long femur relative to the length of the humerus. Furthermore, stegosaurid sacral ribs are T-shaped in parasagittal cross-section and the dorsal vertebrae have an elongated neural arch. The first exact clade definition of Stegosauria was given by Peter Malcolm Galton in 1997: all thyreophoran Ornithischia more closely related to Stegosaurus than to Ankylosaurus. This definition was formalized in the PhyloCode by Daniel Madzia and colleagues in 2021 as "the largest clade containing Stegosaurus stenops, but not Ankylosaurus magniventris". Thus defined, the Stegosauria are by definition the sister group of the Ankylosauria within the Eurypoda. The vast majority of stegosaurian dinosaurs thus far recovered belong to the Stegosauridae, which lived in the later part of the Jurassic and early Cretaceous, and which were defined by Paul Sereno as all stegosaurians more closely related to Stegosaurus than to Huayangosaurus. This definition was also formalized in the PhyloCode by Daniel Madzia and colleagues in 2021 as "the largest clade containing Stegosaurus stenops, but not Huayangosaurus taibaii". They include per definition the well-known Stegosaurus. This group is widespread, with members across the Northern Hemisphere, Africa and possibly South America.
Huayangosauridae (derived from Huayangosaurus, "Huayang reptile") is a family of stegosaurian dinosaurs from the Jurassic of China. The group is defined as all taxa closer to the namesake genus Huayangosaurus than Stegosaurus, and was originally named as the family Huayangosaurinae by Dong Zhiming and colleagues in the description of Huayangosaurus. Huayangosaurinae was originally differentiated from the remaining taxa within Stegosauridae by the presence of teeth in the premaxilla, an antorbital fenestra, and an external mandibular fenestra. Huayangosaurinae, known from the Middle Jurassic of the Shaximiao Formation, was proposed to be intermediate between Scelidosaurinae and Stegosaurinae, suggesting that the origins of stegosaurs lay in Asia. Following phylogenetic analyses, Huayangosauridae was expanded to also include the taxon Chungkingosaurus, known from specimens from younger Late Jurassic deposits of the Shaximiao Formation. Huayangosauridae is either the sister taxon to all other stegosaurs, or close to the origin of the clade, with taxa like Gigantspinosaurus or Isaberrysaura outside the Stegosauridae-Huayangosauridae split. Huayangosauridae was formally defined in 2021 by Daniel Madzia and colleagues, who used the previous definitions of all taxa closer to Huayangosaurus taibaii than Stegosaurus stenops.
In 2017, Raven and Maidment published a new phylogenetic analysis, including almost every known stegosaurian genus:
Undescribed species
To date, several genera from China bearing names have been proposed but not formally described, including "Changdusaurus". Until formal descriptions are published, these genera are regarded as nomina nuda. Yingshanosaurus, for a long time considered a nomen nudum, was described in 1994.
Evolutionary history
Like the spikes and shields of ankylosaurs, the bony plates and spines of stegosaurians evolved from the low-keeled osteoderms characteristic of basal thyreophorans. One such described genus, Scelidosaurus, is proposed to be morphologically close to the last common ancestor of the clade uniting stegosaurians and ankylosaurians, the Eurypoda. Galton (2019) interpreted plates of an armored dinosaur from the Lower Jurassic (Sinemurian-Pliensbachian) Lower Kota Formation of India as fossils of a member of Ankylosauria; the author argued that this finding indicates a probable early Early Jurassic origin for both Ankylosauria and its sister group Stegosauria. Footprints attributed to the ichnotaxon Deltapodus brodricki from the Middle Jurassic (Aalenian) of England represent the oldest probable record of stegosaurians reported so far. Beyond that, fossils assigned to Stegosauria are known from the Toarcian: the specimen "IVPP V.219", a chimaera including bones of the sauropod Sanpasaurus, comes from the Maanshan Member of the Ziliujing Formation. The earliest possible trackways of stegosaurians have been discovered in Hettangian-aged deposits of France, indicating a possibly earlier origin. Perhaps the most basal known stegosaurian, the four-metre-long Huayangosaurus, is still close to Scelidosaurus in build, with a higher and shorter skull, a short neck, a low torso, long slender forelimbs, short hindlimbs, large condyles on the thighbone, a narrow pelvis, long ischial and pubic shafts, and a relatively long tail. Its small tail club might be a eurypodan synapomorphy. Huayangosaurus lived during the Bathonian stage of the Middle Jurassic, about 166 million years ago.
A few million years later, during the Callovian-Oxfordian, much larger species are known from China, with long, "graviportal" (adapted for moving only slowly on land due to a high body weight) hindlimbs: Chungkingosaurus, Chialingosaurus, Tuojiangosaurus and Gigantspinosaurus. Most of these are considered members of the derived Stegosauridae. Lexovisaurus and Loricatosaurus, stegosaurid finds from England and France of approximately equivalent age to the Chinese specimens, are likely the same taxon. During the Late Jurassic, stegosaurids seem to have experienced their greatest radiation. In Europe, Dacentrurus and the closely related Miragaia were present. While older finds had been limited to the northern continents, in this phase Gondwana was also colonised, as shown by Kentrosaurus living in Africa. No unequivocal stegosaurian fossils have been reported from South-America, India, Madagascar, Australia, or Antarctica, though. A Late Jurassic Chinese stegosaurian is Jiangjunosaurus. The most derived Jurassic stegosaurians are known from North-America: Stegosaurus (perhaps several species thereof) and the somewhat older Hesperosaurus. Stegosaurus was quite large (some specimens indicate a length of at least seven metres), had high plates, no shoulder spine, and a short, deep rump.
From the Early Cretaceous, far fewer finds are known and it seems that the group had declined in diversity. Some fragmentary fossils have been described, such as Craterosaurus from England and Paranthodon from South Africa. Up until recently, the only substantial discoveries were those of Wuerhosaurus from Northern China, the exact age of which is highly uncertain. More recent discoveries from Asia, however, would later begin to fill out the Early Cretaceous diversity of the group. Indeterminate stegosaurs are known from the Early Cretaceous of Siberia, including the Ilek Formation and Batylykh Formation. The youngest known definitive remains of stegosaurs are those of Mongolostegus from Mongolia, possibly Stegosaurus from the Hekou Group of China, and Yanbeilong of the Zuoyun Formation of China, all of which date to the Aptian-Albian.
It has often been suggested that the decline in stegosaur diversity was part of a Jurassic-Cretaceous transition, where angiosperms become the dominant plants, causing a faunal turnover where new groups of herbivores evolved. Although in general the case for such a causal relation is poorly supported by the data, stegosaurians are an exception in that their decline coincides with that of the Cycadophyta.
Though Late Cretaceous stegosaurian fossils have been reported, these have mostly turned out to be misidentified. A well-known example is Dravidosaurus, known from Coniacian fossils found in India. Though originally thought to be stegosaurian, in 1991 these badly-eroded fossils were suggested to instead have been based on plesiosaurian pelvis and hindlimb material, and none of the fossils are demonstrably stegosaurian. The reinterpretation of Dravidosaurus as a plesiosaur wasn't accepted by Galton and Upchurch (2004), who stated that the skull and plates of Dravidosaurus are certainly not plesiosaurian, and noted the need to redescribe the fossil material of Dravidosaurus. A purported stegosaurian dermal plate was reported from the latest Cretaceous (Maastrichtian) Kallamedu Formation (southern India); however, Galton & Ayyasami (2017) interpreted the specimen as a bone of a sauropod dinosaur. Nevertheless, the authors considered the survival of stegosaurians into the Maastrichtian to be possible, noting the presence of the stegosaurian ichnotaxon Deltapodus in the Maastrichtian Lameta Formation (western India).
Paleobiology
Plate function
In an ontogenetic histological analysis of Stegosaurus plates and spikes, Hayashi et al. (2012) examined their structure and function from juveniles to old adults. They found that throughout the ontogeny, the dorsal osteoderms are composed of dense ossified collagen fibres in both the cortical and cancellous sections of the bone, suggesting that plates and spikes are formed from the direct mineralization of already existing fibrous networks in the skin. However, many of the structural features seen in the spikes and plates of old adult specimens are acquired at different stages of development. Extensive vascular networks form in the plates during the change from juveniles to young adults and persist in old adults, but spikes acquire a thick cortex with a large axial vascular channel only in old adults. Hayashi et al. argue that the formation of nourishing vascular networks in young adults supported the growth of large plates. This would have enhanced the size of the animal, which may have helped attract mates and deter rivals. Furthermore, the presence of the vascular networks in the plates of the young adult indicates a secondary use of the plates as a thermoregulatory device for heat loss much like the elephant ear, toucan bill or alligator osteoderms. The thickening of the cortical section of the bone and the compaction of bone in the terminal tail-spikes in old adults suggest that they were used as defence weapons, but not until an ontogenetically late stage. The development of the large axial channel in old adults from small canals in young adults facilitated the further enlargement of the spikes by increasing the amount of nourishment supplied. On the other hand, plates do not show a similar degree of bone compaction or cortical thickening, indicating they would not be capable of taking much weight from above. This suggests they were not as important as spikes in active defense.
The protective nature of dorsal plates has also been questioned in the past. Davitashvili (1961) noted that the narrow dorsal location of the plates still left the sides vulnerable. Since the pattern of plates and spines varies between species, he suggested it could be important for intraspecific recognition and as a display for sexual selection. This is corroborated by Spassov's (1982) observations that the plates are arranged for maximum visible effect when viewed laterally during non-aggressive agonistic behaviour, as opposed to from a head-on aggressive stance.
Trace fossils
Stegosaurian tracks were first recognized in 1996 from a hindprint-only trackway discovered at the Cleveland-Lloyd quarry, which is located near Price, Utah. Two years later, a new ichnogenus called Stegopodus was erected for another set of stegosaurian tracks which were found near Arches National Park, also in Utah. Unlike the first, this trackway preserved traces of the forefeet. Fossil remains indicate that stegosaurians have five digits on the forefeet and three weight-bearing digits on the hind feet. From this, scientists were able to predict the appearance of stegosaurian tracks in 1990, six years in advance of the first actual discovery of Morrison stegosaurian tracks. More trackways have been found since the erection of Stegopodus. None, however, have preserved traces of the front feet and stegosaurian traces remain rare.
Deltapodus is an ichnogenus attributed as stegosaurian prints, and are known across Europe, North Africa, and China. One Deltapodus footprint measures less than 6 cm in length and represents the smallest known stegosaurian track. Some tracks preserve exquisite scaly skin pattern.
Australia's 'Dinosaur Coast' in Broome, Western Australia includes tracks of several different thyreophoran track-makers. Of these, the ichnogenus Garbina (a Nyulnyulan word for 'shield') and Luluichnus (honours the late Paddy Roe, OAM who went by the name 'Lulu') have been considered registered by stegosaurs. Garbina includes the largest stegosaur tracks measuring 80 cm in length. Trackway data show Garbina track-makers were capable of bipedal and quadrupedal progression.
While no stegosaur body fossils are currently known from Australia, handprints from underground coal mines near Oakey, Queensland, resembling Garbina tracks suggest their occurrence in the country from at least the Middle to Upper Jurassic (Callovian–Tithonian). A single plaster cast of one of these handprints is in the collections of the Queensland Museum.
Tail spikes
There has been debate about whether the spikes were used simply for display, as posited by Gilmore in 1914, or used as a weapon. Robert Bakker noted that it is likely that the stegosaur tail was much more flexible than those of other ornithischian dinosaurs because it lacked ossified tendons, thus lending credence to the idea of the tail as a weapon. He also observed that Stegosaurus could have maneuvered its rear easily by keeping its large hindlimbs stationary and pushing off with its very powerfully muscled but short forelimbs, allowing it to swivel deftly to deal with attack. In 2010, analysis of a digitized model of Kentrosaurus aethiopicus showed that the tail could bring the thagomizer around to the sides of the dinosaur, possibly striking an attacker beside it.
In 2001, a study of tail spikes by McWhinney et al. showed a high incidence of trauma-related damage. This too supports the theory that the spikes were used in combat. There is also evidence for Stegosaurus defending itself, in the form of an Allosaurus tail vertebra with a partially healed puncture wound that fits a Stegosaurus tail spike. Stegosaurus stenops had four dermal spikes, each about long. Discoveries of articulated stegosaur armor show that, at least in some species, these spikes protruded horizontally from the tail, not vertically as is often depicted. Initially, Marsh described S. armatus as having eight spikes in its tail, unlike S. stenops. However, recent research re-examined this and concluded that this species also had four.
Posture
Digital articulation and manipulation of scans of Kentrosaurus specimen material suggested that stegosaurids may have used an erect limb posture, like that of most mammals, for habitual locomotion, while adopting a sprawled, crocodile-like pose for defensive behavior. The sprawled pose would have allowed them to tolerate the large lateral forces generated by swinging the spiked tail against predators as a clubbing device.
Sexual dimorphism
There have been several findings of possible sexual dimorphism in stegosaurids. Saitta (2015) presented evidence of two morphs of Hesperosaurus dorsal plates, one morph having a wide, oval plate with a surface area 45% larger than that of the narrow, tall morph. Considering that the dorsal plates most likely functioned as display structures and that the wide oval shape allowed a broad continuous display, Saitta interpreted the wider morph with the larger surface area as male.
Kevin Padian, a paleontologist at the University of California, Berkeley, remarked that Saitta had misidentified features in his specimen's bone tissue sections and said "there's no evidence the animal has stopped growing". Padian also expressed ethical concerns about the use of private specimens in the study.
Kentrosaurus, Dacentrurus and Stegosaurus are also suggested to have exhibited dimorphism in the form of three extra sacral ribs in the females.
Feeding
To explore the feeding habits of stegosaurids, Reichel (2010) created a 3-D model of Stegosaurus teeth using the software ZBrush. The model indicated that the bite force of Stegosaurus was significantly weaker than that of Labradors, wolves and humans. The finding suggests that these dinosaurs would have been capable of breaking smaller branches and leaves with their teeth, but would not have been able to bite through a thick object (12 mm or more in diameter). Parrish et al.'s (2004) description of Jurassic flora in the stegosaurid-rich Morrison Formation supports this finding. The flora during this time period was dominated by seasonal, small, fast-growing herbaceous plants, which stegosaurids could have consumed easily if Reichel's reconstruction is accurate.
Mallison (2010) suggested that Kentrosaurus may have used a tripodal stance on its hindlimbs and tail to double its foraging height from the general low browsing height of under one metre assumed for stegosaurids. This challenged the view that stegosaurs were primarily low-vegetation feeders because of their small heads, short necks and short forelimbs, since the tripodal stance would also have given them access to young trees and high bushes.
Another piece of evidence suggesting that some stegosaurids may have consumed more than just low vegetation was the discovery of the long-necked stegosaurid Miragaia longicollum. This dinosaur's neck has at least 17 cervical vertebrae, achieved through the transformation of thoracic vertebrae into cervical vertebrae and possible lengthening of the centra. This is more than in most sauropod dinosaurs, which elongated their necks through similar mechanisms and likewise had access to fodder higher off the ground.
| Biology and health sciences | Ornithischians | Animals |
9117691 | https://en.wikipedia.org/wiki/Cotylorhynchus | Cotylorhynchus | Cotylorhynchus is an extinct genus of herbivorous caseid synapsids that lived during the late Lower Permian (Kungurian) and possibly the early Middle Permian (Roadian) in what is now Texas and Oklahoma. The large number of specimens found make it the best-known caseid. Like all large herbivorous caseids, Cotylorhynchus had a short snout sloping forward and very large external nares. The head was very small compared to the size of the body. The latter was massive, barrel-shaped, and ended with a long tail. The limbs were short and robust. The hands and feet had short, broad fingers with powerful claws. The barrel-shaped body must have housed large intestines, suggesting that the animal had to feed on a large quantity of plants of low nutritional value. Caseids are generally considered to be terrestrial, though a semi-aquatic lifestyle has been proposed by some authors. The genus Cotylorhynchus is represented by three species, the largest of which could reach more than 6 m in length. However, a study published in 2022 suggests that the genus may be paraphyletic, with two of the three species possibly belonging to separate genera.
Discovery
The genus name Cotylorhynchus comes from the Greek kotyle, cup or hollow, and rhynchos, beak or snout. The genus was so named because of the nasal opening, which is surrounded by a depressed, cup-shaped bony surface.
The genus Cotylorhynchus contains three species which differ in size and proportion, C. romeri (the type species), C. hancocki, and C. bransoni. In C. romeri there are two size groups which presumably represent sexual dimorphism. There is no size overlap between adults of C. romeri and C. hancocki, but larger specimens of C. bransoni have roughly the same dimensions as smaller specimens of C. romeri. In 2022, Werneburg and colleagues suggested that the species C. hancocki and C. bransoni might not belong to the genus Cotylorhynchus. These authors consider that a detailed revision of these two taxa is necessary to clarify their status.
Description
The skull of Cotylorhynchus shows the typical caseid morphology, with a forward-sloping snout, a very large nasal opening, a skull roof with numerous small depressions, and a very large pineal foramen. The latter is wider than long, as in Ennatosaurus, and thus differs from that of Euromycter, which is subcircular. The number of teeth in the upper and lower jaws ranges from 16 to 20. In the upper jaw, the anterior teeth are long and slender, while those behind decrease in size posteriorly and are slightly spatulate. All the marginal teeth have their distal ends slightly inclined towards the interior of the mouth, and the tops of their crowns each bear three small cuspules arranged longitudinally. These teeth also show an enlargement of the central part of the crown. In the lower jaw, the anterior teeth, not denticulate according to Olson, are shorter and tilt slightly forward. The other lower teeth are similar to those of the upper jaw.
The postcranial skeleton is massive. The ribs are very long, heavy and curved to form a bulbous body. Ribs are present on all the pre-sacral vertebrae and the first caudal vertebrae. The five posterior presacral ribs are fused with the transverse processes of the vertebrae. The sacrum contains three vertebrae. The neural spines of larger specimens become proportionately taller, especially in the pelvic region. The limbs are short and strong. The femur is characterized by its proximal end having a broad shelf marked by a margin slightly overhanging the dorsal surface of the femur. The pes and manus are broad and short, and terminate in strong, sharp, and curved ungual phalanges which must have supported powerful claws. Muscle and tendon scars are very developed.
Cotylorhynchus romeri
The type species Cotylorhynchus romeri is the best known species of the genus. It was erected in 1937 by J. Willis Stovall from the holotype OMNH 00637, consisting of the right side of a skull, an incomplete interclavicle, and the right and left manus, found in the red mudstones of the lower part of the Hennessey Formation, near the locality of Navina, Logan County, Oklahoma. The name of the species honors the American paleontologist Alfred Sherwood Romer. Shortly after the holotype's discovery, numerous specimens were found at some 20 sites surrounding the town of Norman, Cleveland County, also from the Hennessey Formation. Several fairly complete skeletons and many more fragmentary ones represent a total of about 40 individuals. Specimens from the two regions are more or less contemporaneous and are known only within a thick stratigraphic interval. In Navina, the holotype comes from a level about above the base of the Hennessey Formation. The numerous specimens from the Norman area have been found in several layers located between above the base of the formation. The holotype of C. romeri has 20 teeth in the upper jaws (3 on the premaxilla and 17 on the maxilla) and 19 teeth in the lower jaws. Specimens of C. romeri from the Norman region show a lower number of teeth: four skulls in which tooth counting was possible have 15 or 16 teeth in the upper jaws. Some authors have therefore considered that the holotype of C. romeri and the referred specimens from Norman represent two different species. However, because the holotype is the only fossil known from the type locality and the number of teeth is the only difference from the Norman Cotylorhynchus, it was decided to keep all these specimens in the same species.
C. romeri is a large species that can exceed in length and in weight according to Romer and Price, or in length according to Stovall. Robert Reisz and colleagues have identified several cranial autapomorphies in this species. Cotylorhynchus romeri is distinguished by transversely broad postparietals that contact the supratemporals laterally, a large supratemporal that restricts contact between the parietal and postorbital, a stapes that has a short massive distal shaft and a ventral process that is braced against the quadrate ramus of the pterygoid, both vomers bearing three large teeth along the medial edge of the bone, the presence of teeth on the parasphenoid, and a surangular overlapping the posterodorsal tip of the dentary and excluding it from the coronoid eminence. However, Reisz and colleagues emphasize that these autapomorphies are ambiguous because they are identified, with a few exceptions (a few bones of the palate), on parts of the skull still unknown in the other species of the genus, thus limiting comparisons.
As in the other two species of Cotylorhynchus, the dentition consists of tricuspid teeth (except for the most anterior teeth). However, C. romeri is the species in which the cuspules are least developed. According to Olson, the premaxillary teeth had no cuspules; cuspules have, however, been reported on the premaxillary teeth by Reisz and colleagues. All marginal teeth have their distal ends curved lingually. Numerous teeth are also present on several bones of the palate. A short row of three large, slightly recurved teeth is present on each vomer. They are taller than all other teeth on the palate. The palatines bear 10 subconical teeth located on a slightly thickened region of bone adjacent to the middle part of the suture shared with the pterygoid. The pterygoid, triangular in shape, has many teeth divided into four distinct groups: a medial row bordering the interpterygoid vacuity, a group of smaller teeth which contributes to the pterygo-palatine tooth cluster, a posterolateral cluster of very small teeth on the transverse flange of the pterygoid, and, behind this cluster, a row of large teeth that borders the posterior margin of the transverse flange and extends medially to the basicranial region. In summary, the pterygoid bears more numerous, smaller and more slender teeth than those present on the pterygoid of C. bransoni. A few teeth are also present on the parasphenoid. Several palatal teeth have well-preserved tips showing the same distal morphology as the marginal teeth, with three small cuspules. In the lower jaws, the dentary has between 16 and 19 teeth, which have the same morphology as the teeth of the upper jaws. In C. romeri, the dental row does not show spaces for replacement teeth, which could be related to reduced rates of tooth replacement and increased longevity of functional teeth.
The vertebral column consists of 25 or 26 presacral vertebrae, 3 sacral vertebrae, and approximately 55 caudal vertebrae. C. romeri is distinguished by its widely spaced postzygapophyses on the dorsal vertebrae, while in C. hancocki and C. bransoni they usually contact along the midline. The relatively short limbs were more robust than those of C. bransoni but less massive than those of C. hancocki. The manus and the pes show a phalangeal formula of 2-2-3-3-2. The skeletons from the Norman region show two different size groups among adult specimens, one composed of individuals about 20% smaller than those in the other group. This size difference was interpreted as possible specific differentiation or, more likely, as the expression of sexual dimorphism.
Cotylorhynchus hancocki
Cotylorhynchus hancocki was named in 1953 by Everett Claire Olson and James R. Beerbower, from a right humerus and a proximal end of a tibia (constituting the holotype FMNH UR 154) found in the upper part of the San Angelo Formation, near the Pease River, in Hardeman County, Texas. The species is named after J. Hancock, who made it possible to explore much of the locality of Pease River. Subsequently, more than sixty specimens, ranging from isolated bone to nearly complete skeleton, were discovered in several localities in Knox County, the majority however coming from the Kahn quarry. This site has yielded the most complete specimens of the species such as FMNH UR 581, an almost complete skeleton missing only the skull, some cervical vertebrae, a scapulocoracoid and some limb bones; FMNH UR 622, a partial skeleton including part of the skull and palate, various vertebrae, ribs, limb bones, clavicle, and bones of the foot; and FMNH UR 703, part of the skeleton of a very large individual including dorsal, lumbar, sacral, and caudal vertebrae, pelvis, femur, radius, ulna, and ribs. Other notable specimens include several isolated cranial bones. All of the skull bones known in this species come from the Kahn quarry.
With a size of up to in length and a weight of over , C. hancocki is by far the largest species of the genus, and is one of the largest known caseids along with the genus Alierasaurus. Its dimensions also make it one of the largest non-mammalian synapsids. No complete skull of C. hancocki is known. The various known elements (maxilla, dentaries, braincase, palate bones) indicate a skull similar to that of C. romeri but slightly larger. The upper teeth are not fully known. Several isolated mandibles show that the lower dentition had up to 18 slightly spatulate, tricuspid teeth. The cuspules of the upper teeth are weaker than those of the lower teeth. In addition, the cuspules of C. hancocki are more pronounced than those of C. romeri, but less developed than those of C. bransoni.
The postcranial skeleton is distinguished by the morphology and proportions of limbs, vertebrae, and pelvis. The scapulocoracoid is characterized by the presence of a supraglenoid foramen on the scapular blade. Such a foramen is absent in the other two species of Cotylorhynchus and in caseids in general but is present in the genus Lalieudorhynchus. The scapula has a process-like bulged anteromedial margin as in Lalieudorhynchus. The humerus has a flat, very broad and thin epicondyle, and a completely closed entepicondylar foramen. The most complete vertebral column is that of specimen FMNH UR 581 in which there are seventeen presacral vertebrae and thirty-nine caudal vertebrae in articulation. A characteristic related to the very large size of this species is the presence of a prominent hyposphene on the postzygapophyses of the dorsal vertebrae, a character shared with Lalieudorhynchus. This supplementary intervertebral joint strengthened and stabilized the vertebral column to support the weight of the animal. The neural spine of the first caudal and sacral vertebra is very elongated dorsally as in Lalieudorhynchus. The limb bones are very strong. The femur in particular is very massive with a relatively short shaft and a very developed internal trochanter, another character shared with Lalieudorhynchus. The bone as a whole is proportionately shorter and wider than that of the other two species of Cotylorhynchus. The pelvis is characterized by a distinctly larger anterolateral projection of the pubis than in C. romeri, and a sacrum with a very large anterior sacral rib, while the second and third sacral ribs are small and less specialized. An incomplete foot is preserved in FMNH UR 581. The astragalus of C. hancocki differs from that of the other two species of Cotylorhynchus and resembles that of Lalieudorhynchus in being nearly as broad as long. The digit IV is complete and has three elements. The positions of the preserved elements of the digits II and III suggest a phalangeal formula of ? -2-2-3-?.
Cotylorhynchus bransoni
Cotylorhynchus bransoni was named in 1962 by Everett C. Olson and Herbert Barghusen from numerous bones found in the Omega Quarry in Kingfisher County, Oklahoma. Its remains were originally described as coming from the central part of the Flowerpot Formation. Olson later corrected this attribution by specifying that these remains belong to a tongue of the Chickasha Formation (El Reno Group) whose deposits interfinger in places with those of the Flowerpot, Blaine, and Dog Creek formations. The species name honors Dr. Carl C. Branson who, at the time of the species description, was the director of the Oklahoma Geological Survey, and who supported the paleontological research of the Chickasha Formation. The holotype FMNH UR 835, consists of the left side of the pelvis, a left femur, and several partial sacral ribs. Other specimens are represented by FMNH UR 836, a right tibia and fibula, tarsus bones, metatarsals, and phalanges except unguals; FMNH UR 837, a left radius and ulna, and part of the carpal bones; FMNH UR 838, a flattened left astragalus; FMNH UR 839, an immature left tibia; FMNH UR 840, a poorly preserved left fibula from an immature individual; FMNH UR 841, a fragment of the left maxilla with two teeth; FMNH UR 842, two fragments of ungual phalanges; and FMNH UR 843, an ungual phalanx. Further excavations in the Omega quarry have uncovered many additional bones, including several previously unknown skeletal elements. This additional material includes FMNH UR 905, a partial foot; FMNH UR 910, cervical ribs; FMNH UR 912, a clavicle; FMNH UR 913, a chevron; FMNH UR 915, a series of vertebrae; FMNH UR 918 and 919, two scapulo-coracoids; FMNH UR 923, sacral vertebrae; FMNH UR 929, a pterygoid; and FMNH UR 937, caudal vertebrae. Finally, three sites in the Hitchcock area of Blaine County provided specimens UR 972, caudal vertebrae; UR 982, 4 dorsal vertebrae; UR 983, dorsal vertebrae; UR 984, an incomplete humerus; and UR 988, part of the pelvis and a complete articulated foot still associated with part of the tibia and fibula.
C. bransoni is the smallest known species of the genus Cotylorhynchus, with its largest representatives comparable in size to the smallest individuals of C. romeri. The skull is poorly known and is only represented by two dentigerous bones: a fragment of a maxilla and a pterygoid. The teeth present on these elements distinguish C. bransoni from the other two species of the genus. The two tricuspid teeth preserved on the maxilla show more developed cuspules than those observed in C. romeri and C. hancocki. The pterygoid has fewer, larger and more robust teeth than those present in the pterygoid of C. romeri.
The scapulocoracoid has a proportionally narrower scapular blade than in the other two species. The glenoid cavity is somewhat longer in proportion to its width than in the other two species, and the anterior part of the coracoid plate is less extended anteriorly. The radius and ulna are relatively thin and short. The pelvis is characterized by the strong development of the ilium, which rises like a lamina above the acetabulum. The femur is gracile with a slender shaft and a fourth trochanter lying far down the shaft. The distal condyles are widely spaced. The astragalus is characterized by the presence of a very large foramen, a feature not present in the other two species. Olson and Barghusen thought that the phalangeal formula of the foot in C. bransoni was 2-2-2-3-2, a smaller formula than that of the two other species of Cotylorhynchus. However, Romano and Nicosia showed in 2015 that digit III had three phalanges and not two. Thus, the phalangeal formula of the foot of C. bransoni was 2-2-3-3-2 as in C. romeri and probably also in C. hancocki.
Phylogeny
All phylogenetic studies of caseids consider Cotylorhynchus to be a taxon close to the genera Ennatosaurus and Angelosaurus. In the first phylogenetic analysis of caseids, published in 2008, the species Cotylorhynchus romeri is recovered as the sister group of Angelosaurus dolani.
Below is the first caseid cladogram published by Maddin et al. in 2008.
Another phylogenetic analysis, performed in 2012 by Benson, identifies Cotylorhynchus romeri as the sister group of the two species C. hancocki and C. bransoni.
Below is the caseasaur cladogram published by Benson in 2012.
In 2015, Romano and Nicosia published the first cladistic study including almost all caseids, except the very fragmentary taxa such as Alierasaurus ronchii and Angelosaurus greeni. In this analysis, the three species of Cotylorhynchus form a clade with the genus Ruthenosaurus, and this clade is the sister group of a clade containing the genera Angelosaurus and Ennatosaurus.
Below is the caseid cladogram published by Romano and Nicosia in 2015.
In 2020, two cladograms published by Berman and colleagues also recover Cotylorhynchus as one of the most derived caseids. In the first cladogram, the three species of Cotylorhynchus together with Angelosaurus and Alierasaurus form an unresolved polytomy. In the second cladogram, Cotylorhynchus hancocki and C. bransoni are sister taxa and form a polytomy with Cotylorhynchus romeri and Alierasaurus.
Below are the two caseid cladograms published by Berman and colleagues in 2020.
A phylogenetic analysis published in 2022 by Werneburg and colleagues suggests that the genus Cotylorhynchus would be paraphyletic. According to these authors, the species Cotylorhynchus hancocki and C. bransoni would not belong to this genus and would require a detailed revision to clarify their status, these taxa not having been studied since the 1960s. In this analysis, the type species C. romeri is positioned just above the genus Angelosaurus, and forms a polytomy with a clade containing Ruthenosaurus and Caseopsis and another clade containing Alierasaurus, the other two species of Cotylorhynchus, and Lalieudorhynchus. Within the latter clade, Alierasaurus is the sister group of “Cotylorhynchus” bransoni and a more derived clade including Lalieudorhynchus and “Cotylorhynchus” hancocki.
Below is the cladogram published by Werneburg and colleagues in 2022.
Paleobiology
Diet
The highly developed, barrel-shaped rib cage indicates the presence of a massive digestive system suitable for ingesting large amounts of low-nutrient plants. The dentition of Cotylorhynchus also shows that it was clearly herbivorous. The front teeth, longer and slightly curved, probably served to gather vegetation in the mouth. The tricuspid marginal teeth were well suited for slicing and cutting vegetation. The hyoid apparatus preserved in some caseids (Euromycter and Ennatosaurus), indicates the existence of a relatively mobile massive tongue which must have worked in concert with the palatal teeth during swallowing. The tongue had to press the plant pieces against the palate in order to puncture the food with the large palatal teeth, an action which may have served to enhance the cellulolytic fermentation of food in the intestine. The low number of cuspules (three) on the teeth of Cotylorhynchus indicates that this genus was adapted to a different fodder (or range of fodder) than other herbivorous caseids having a greater number of cuspules (Angelosaurus, Euromycter and Ennatosaurus having respectively 5, 5 to 8, and 5 to 7 cuspules).
Terrestrial vs semiaquatic lifestyle
Cotylorhynchus and caseids in general are usually considered primarily terrestrial animals. Everett C. Olson in particular considered that the degree of ossification of the skeleton, the relatively short feet and hands, the massive claws, the limbs with very powerful extensor muscles, and the strong sacrum strongly suggested a terrestrial lifestyle. Olson did not rule out that caseids spent some time in water, but he considered locomotion on land to be an important aspect of their lifestyle. It has been suggested that the very powerful forelimbs, with strong and very tendinous extensor muscles, as well as very massive claws, could have been used to dig up roots or tubers. However, the very short neck implied a low amplitude of vertical movements of the head, which precluded the large species from feeding at ground level. Another hypothesis suggests that the caseids could have used their powerful forelimbs to fold large plants towards them, which they would then have torn off with their powerful claws. Other hypotheses suggest that some caseids such as Cotylorhynchus used their limbs, with their powerful claws, to defend themselves against predators, or during intraspecific activities linked in particular to reproduction. Interestingly, according to Olson, almost all known specimens of the species Cotylorhynchus hancocki have between one and ten ribs that were broken and healed during life. Finally, for some authors, the large derived caseids would have been semiaquatic animals that used their hands with large claws like paddles, which could also have been used to manipulate the plants on which they fed.
Indeed, in 2016, Lambertz and colleagues questioned the terrestrial lifestyle of large caseids such as Cotylorhynchus. These authors showed that the bone microstructure of the humerus, femur, and ribs of adult and immature Cotylorhynchus specimens resembled that of aquatic rather than terrestrial animals, with a very spongy bone structure, an extremely thin cortex, and the absence of distinct medullary cavities. This low bone density would have been a handicap for animals weighing several hundred kilograms with a strictly terrestrial lifestyle. Lambertz et al. also found that the joints between the vertebrae and the dorsal ribs only allowed small ranges of motion of the rib cage, thus limiting rib ventilation. To overcome this, they proposed that a proto-diaphragm was present to facilitate breathing, especially in an aquatic environment. These authors also argued that the arid paleoclimates to which the caseid localities correspond are not incompatible with a semiaquatic lifestyle for these animals. These paleoenvironments included a significant number of aquatic habitats (rivers, lakes and lagoons). The arid conditions could have been the reason that the animals would sometimes have gathered and eventually died. In addition, arid environments have a low density of plants, which would require even more locomotor effort to find food. For Lambertz et al., large caseids such as Cotylorhynchus were mainly aquatic animals that only came onto dry land for the purposes of reproduction or thermoregulation.
This hypothesis is however disputed by Kenneth Angielczyk and Christian Kammerer as well as by Robert Reisz and colleagues based on paleontological and taphonomic data combined with the absence in these large caseids of morphological adaptations to an aquatic lifestyle. According to Angielczyk and Kammerer, the low bone density of caseids identified by Lambertz et al. does not resemble that of semiaquatic animals, which tend to have a more strongly ossified skeleton to provide passive buoyancy control and increased stability against current and wave action. Cotylorhynchus bone microstructure is more similar to what is seen in animals living in the open ocean, such as cetaceans and pinnipeds, which emphasize high maneuverability, rapid acceleration and hydrodynamic control of buoyancy. However, the caseid morphology was totally incompatible with a pelagic lifestyle. Thus, due to these unusual data, Angielczyk and Kammerer consider that the available evidence is still insufficient to question the more widely assumed terrestrial lifestyle of caseids. According to Reisz and colleagues the presence of numerous skeletons of the amphibian Brachydectes preserved in estivation and of the lungfish Gnathorhiza, another well-known aestivator, combined with the absence of obligate aquatic vertebrates strongly suggests that the Hennessey fauna lived in a dry habitat periodically punctuated by monsoons. Combined with the fact that Cotylorhynchus shows no morphological adaptations to an aquatic lifestyle, these authors consider it as a terrestrial animal that had to endure monsoon rains, with some individuals occasionally succumbing to major floods.
In 2022, Werneburg and colleagues proposed a somewhat different semiaquatic lifestyle, in which large caseids like Lalieudorhynchus (whose bone texture is even more osteoporotic than that of Cotylorhynchus) would be ecological equivalents of modern hippos, passing part of their time in the water (being underwater walkers rather than swimming animals) but coming on land for food.
Stratigraphic distribution
No radiometric dating is available for the geological formations containing Cotylorhynchus fossils. The oldest species is C. romeri from the Hennessey Formation in Oklahoma. This formation is considered contemporary with the upper part of the Clear Fork Group (Choza Formation) of Texas. Ammonoid faunas found in marine strata present at the base and top of the Clear Fork Group indicate that the three formations that compose it (Arroyo, Vale, and Choza) are entirely included in the Kungurian.
The other two species of Cotylorhynchus are younger and come from the San Angelo and Chickasha formations. The estimation of the geological age of these two formations has been the subject of many interpretations, which have alternately assigned them a late Cisuralian (Kungurian) and/or basal Guadalupian (Roadian) age.
In Texas, the species Cotylorhynchus hancocki comes from the San Angelo Formation. This formation overlies the Clear Fork Group and is overlain by the Blaine Formation. According to Spencer G. Lucas and colleagues, fusulins found in a marine intercalation of the San Angelo Formation, as well as ammonoids present at the base of the overlying Blaine Formation, indicated a Kungurian age. Moreover, according to these authors, the base of the San Andres Formation, located further west and considered a lateral equivalent of the Blaine Formation, is in the Neostreptognathodus prayi conodont zone, the second of the three Kungurian conodont biozones. The base of the Blaine Formation would therefore belong to this Kungurian biozone, suggesting that the underlying San Angelo Formation and C. hancocki would be slightly older than the N. prayi conodont zone with a lower Kungurian age. However, Michel Laurin and Robert W. Hook argued that the fusuline marine intercalation cited above does not belong to the San Angelo Formation in which it was mistakenly included, and cannot be used to date the latter. The name San Angelo Formation has been incorrectly applied to a wide variety of rocks in various sedimentary basins located in western Texas, whereas the San Angelo Formation is restricted to the eastern shelf and is exclusively continental and devoid of marine fossils. On the other hand, the taxonomic revision of the ammonoids from the base of the Blaine Formation indicates a Roadian age rather than a Kungurian age and the San Angelo formation yielded a fossil flora dominated by voltzian conifers, an assemblage rather characteristic of the Guadalupian and the Lopingian. Thus, according to Laurin and Hook, the San Angelo Formation could date from latest Kungurian or earliest Roadian, or more likely could straddle the Kungurian/Roadian boundary.
Cotylorhynchus bransoni is the youngest species of the genus and comes from the Chickasha Formation in Oklahoma. This formation was long considered contemporary with the San Angelo Formation. However, Laurin and Hook demonstrated that the Chickasha Formation is slightly younger because it is intercalated within the central part of the Flowerpot Formation, which overlies the Duncan Sandstone Formation, the latter being in fact the lateral equivalent of the San Angelo Formation in Oklahoma. Magnetostratigraphic data suggest that the Chickasha Formation probably dates from the early Roadian. A Roadian age was also suggested based on the presence in the Chickasha fauna of the nycteroleterid parareptile Macroleter, a genus that was otherwise known only from the Middle Permian of European Russia. However, Sigi Maho and colleagues have pointed out that several genera of Permian tetrapods, such as Dimetrodon and Diplocaulus, had a wide temporal distribution, and that the presence of the genus Macroleter in both Russia and Oklahoma (represented by two different species) is not evidence of a Middle Permian age for the Chickasha Formation. The same authors also point to the example of the varanopid Mesenosaurus, which is present both in the Middle Permian of European Russia and, as a separate species, in Oklahoma, in a locality radiometrically dated to the early Permian (Artinskian). Additionally, probable nycteroleterid footprints, named Pachypes ollieri, from Cisuralian rocks of Europe and North America and from the Guadalupian of Europe, show that the stratigraphic distribution of Nycteroleteridae was not restricted to the middle and late Permian but also included the early Permian. Cisuralian occurrences of P. ollieri come from the Hermit (Arizona), Rabéjac (France) and Peranera (Spain) formations, all of Artinskian age, and also from the San Angelo Formation. Thus, in the current state of knowledge, the age of the Chickasha Formation can hardly be assessed from its fauna. However, the stratigraphic position of the Chickasha Formation relative to that of the San Angelo Formation, and its probable early Roadian age inferred from magnetostratigraphy, indicate that the Chickasha fauna represents the most recent Permian faunal assemblage of North America.
Paleoenvironments
In the Permian, most of the landmasses were united in a single supercontinent, Pangea. It was then roughly C-shaped: its northern (Laurasia) and southern (Gondwana) parts were connected to the west but separated to the east by the Tethys Ocean. A long string of microcontinents, grouped under the name of Cimmeria, divided the Tethys in two: the Paleo-Tethys in the north, and the Neo-Tethys in the south. The Hennessey, San Angelo, and Chickasha formations correspond mainly to fluvial and aeolian sediments deposited in a vast deltaic plain dotted with lakes and lagoons. This coastal plain was bordered to the west by a sea that occupied what is today the Gulf of Mexico and the southernmost part of North America. The rivers ending in the delta came from modest reliefs located further east and corresponding to the ancestral uplifts of the Ouachita, Arbuckle and Wichita mountains. The climate was subtropical with moderate and seasonal rains. There was a summer monsoon as well as a dry winter season. The monsoon was relatively weak, due to the limited size of the sea and the small differential between summer and winter temperatures. The presence of evaporites indicates significant aridity interrupted by seasonal flooding.
Hennessey Formation
Everett C. Olson thought that the Hennessey Formation was represented by several sedimentary facies corresponding to several types of environments. According to him, part of the formation would have been deposited in a marine environment while other parts would represent coastal and continental deposits. The continental facies is mostly composed of red mudstones, locally accompanied by lenses and beds of sandstones and siltstones interpreted as fluvial and floodplain deposits. However, detailed facies analyses later revealed that these rocks were more likely of aeolian origin, corresponding to silts, clays, and sands deposited as loess and sometimes trapped in mud flat, shallow salt lakes or wadi-type ephemeral streams. The fossils of Cotylorhynchus romeri are only found in red mudstones. This species occurs partly in the form of almost complete skeletons but also in the form of dislocated skeletons and articulated segments of skeletons. Based on the position of the articulated skeletons, Stovall and colleagues estimated that the animals were probably stuck in marshes or swamps where they were buried. The dislocated or partially articulated skeletons also indicate that other specimens have undergone some transport prior to burial. According to Lambertz and colleagues, it is also possible that the animals became bogged down when the waterhole in which they lived dried up, in the hypothesis of a semiaquatic lifestyle in Cotylorhynchus.
Apart from C. romeri, other known vertebrates in the Hennessey Formation are the Captorhinidae Captorhinikos chozaensis and Rhodotheratus parvus, the lungfish Gnathorhiza, and the amphibians Diplocaulus, Brachydectes, Rhynchonkos, Aletrimyti, and Dvellacanus. Gnathorhiza and Brachydectes were able to aestivate in burrows during prolonged periods of aridity. Rare vertebrate tracks have been attributed to the ichnogenera Amphisauropus and Dromopus, considered to be seymouriamorph amphibian and araeoscelid reptile footprints respectively. Amphisauropus tracks from the Hennessey Formation have however been reclassified in the ichnogenus Hyloidichnus, which corresponds to footprints of captorhinid eureptiles.
San Angelo Formation
The San Angelo Formation is composed at its base of unfossiliferous hard, green, gray and brown sandstones and fine conglomerates. The central part of the formation consists mainly of red mudstones corresponding to clayey and silty mud deposited in coastal plains during periodic flooding episodes. These red mudstones are interspersed with a thin level of green sandstone, sandy shales, and evaporites. These correspond to a minor and ephemeral encroachment of estuaries, lagoons, and very shallow seas onto the terrestrial part of the delta. The caseids Angelosaurus dolani and Caseoides sanangeloensis are present in the red mudstones of this part of the formation. The upper part of the San Angelo Formation is characterized by the preponderance of coarse sediments such as sandstones and conglomerates, but also includes sandy mudstones at its base and pure red mudstones at its top. According to Olson, these sediments were deposited by wider and more powerful rivers than those of the central part of the formation. However, in Oklahoma, strata equivalent to the San Angelo Formation, which were also considered fluvio-deltaic and coastal deposits, have been reinterpreted as being of aeolian origin. This level is characterized by the absence of the genus Angelosaurus and by the abundance of Cotylorhynchus hancocki. The latter is most often represented by a single individual in each locality, with the exception of the Kahn quarry. This site has yielded many specimens distributed over several stratigraphic levels. The richest level, consisting of green, sometimes brown, sandy mudstones, has provided the remains of at least 15 individuals. Several are partially articulated while others are represented by isolated bones. After being transported to the site, some bones remained exposed on the surface for some time, as indicated by the presence on some of them of a thin silt layer very different from the rest of the matrix. Several bones indicate that some carcasses were partially devoured. The taphonomy of the site therefore indicates that the corpses of C. hancocki were transported during a flooding episode, deposited as the waters receded, subjected to the action of predators and scavengers, and then buried, perhaps during a new flood, a process that would have been repeated several times. Large masses of vegetation were also transported and have been found in direct association with the vertebrates. The fauna of the upper San Angelo Formation includes, among others, the caseids Caseopsis agilis and Angelosaurus greeni, the sphenacodontid Dimetrodon angelensis, the captorhinids Rothianiscus multidonta and Kahneria seltina, and the tupilakosaurid dvinosaur Slaugenhopia. A few tetrapod tracks also indicate the presence of a nycteroleterid pareiasauromorph (ichnotaxon Pachypes ollieri), a partial skeleton of which is known from slightly younger deposits of the Chickasha Formation. An unusual flora has been found in the channels of the upper San Angelo Formation. It is dominated by gymnosperms and is remarkable for its unique composition, including typical Lower Permian taxa such as Walchia and Culmitzschia, but also forms that were previously known only from Middle or Late Permian rocks, such as various species of Ulmannia, Pseudovoltzia liebeana, and the taxon of uncertain affinity Taeniopteris eckardtii, or from Mesozoic strata, such as the bennettitale Podozamites and the putative cycad Dioonitocarpidium.
The rest of the flora is represented by the ginkgoale Dicranophyllum, the cordaitale Cordaites, and the equisetale cf. Neocalamites.
Chickasha Formation
The Chickasha Formation corresponds to the central part of the Flowerpot Formation, in which it is locally inserted. The sediments that compose it are varied and include red shales, sandstones, mudstones, conglomerates, and evaporites, deposited in floodplains and channels bordering the sea and coastal lagoons. In the Omega quarry, all the fossils come from sandstones, mudstones and hard, siliceous conglomerates, arranged in lenses. They correspond to channel deposits in which the skeletons of Cotylorhynchus bransoni have accumulated, along with those of a second caseid, Angelosaurus romeri, and those of the captorhinid Rothianiscus robustus. Elsewhere in this formation are known the xenacanth Orthacanthus, the nectridean Diplocaulus, the dissorophid temnospondyl Nooxobeia, the nycteroleterid Macroleter, and the varanopids Varanodon and Watongia.
| Biology and health sciences | Proto-mammals | Animals |
9126890 | https://en.wikipedia.org/wiki/Open%20fracture | Open fracture | An open fracture, also called a compound fracture, is a type of bone fracture (broken bone) that has an open wound in the skin near the fractured bone. The skin wound is usually caused by the bone breaking through the surface of the skin. An open fracture can be life-threatening or limb-threatening (the person may be at risk of losing a limb) due to the risk of deep infection and/or bleeding. Open fractures are often caused by high-energy trauma such as road traffic accidents and are associated with a high degree of damage to the bone and nearby soft tissue. Other potential complications include nerve damage and impaired bone healing, including malunion or nonunion. The severity of open fractures can vary. The Gustilo-Anderson open fracture classification is the most commonly used method for diagnosing and classifying open fractures. This classification system can also be used to guide treatment and to predict clinical outcomes. Advanced trauma life support is the first line of action in dealing with open fractures and in ruling out other life-threatening conditions in cases of trauma. The person is also administered antibiotics for at least 24 hours to reduce the risk of infection.
Cephalosporins, sometimes with aminoglycosides, are generally the first line of antibiotics and are usually given for at least three days. Therapeutic irrigation, wound debridement, early wound closure and bone fixation are core principles in the management of open fractures, all aimed at reducing the risk of infection and promoting bone healing. The bone most commonly injured is the tibia, and working-age young men are the group at highest risk of an open fracture. Older people with osteoporosis and soft-tissue problems are also at risk.
Epidemiology
Crush injuries are the most common mechanism of injury, followed by falls from standing height and road traffic accidents. Open fractures tend to occur more often in males than in females, at a ratio of 7 to 3, with a mean age of onset of 40.8 and 56 years respectively. In terms of anatomical location, fractures of the finger phalanges are the most common, at a rate of 14 per 100,000 people per year in the general population, followed by fractures of the tibia at 3.4 per 100,000 per year and distal radius fractures at 2.4 per 100,000 per year. The infection rate for Gustilo Grade I fractures is 1.4%, followed by 3.6% for Grade II fractures, 22.7% for Grade IIIA fractures, and 10 to 50% for Grade IIIB and IIIC fractures.
Signs and symptoms
Open fractures show a range of characteristics, as the severity of the injury can vary greatly. Most open fractures are characterized by a broken bone sticking out of the skin, but a broken bone can also be associated with only a very small "poke-hole" skin wound; both of these injuries are classified as open fractures. Some open fractures can involve significant blood loss. Most open fractures have extensive damage to soft tissues near and around the bone, such as nerves, tendons, muscles, and blood vessels.
Causes
Open fractures can occur due to direct impacts such as high-energy physical forces (trauma), motor vehicular accidents, firearms, and falls from height. Indirect mechanisms include twisting (torsional injuries) and falling from a standing position. These mechanisms are usually associated with substantial degloving of the soft-tissues, but can also have a subtler appearance with a small poke hole and accumulation of clotted blood in the tissues. Depending on the nature of the trauma, it can cause different types of fractures:
Common fractures
Bone fractures result from significant trauma to the bone. This trauma can come from a variety of forces – a direct blow, axial loading, angular forces, torque, or a mixture of these. There are various fracture types, including closed, open, stress, simple, comminuted, greenstick, displaced, transverse, and oblique.
Pathological fractures
Result from minor trauma to diseased bone. These preexisting processes include metastatic lesions, bone cysts, advanced osteoporosis, etc.
Fracture-dislocations
Severe injury in which both fracture and dislocation take place simultaneously.
Gunshot wounds
Caused by high-speed projectiles, which damage tissue as they pass through it, both directly and through secondary shock waves and cavitation.
Diagnosis
The initial evaluation for open fractures is to rule out any other life-threatening injuries. Advanced Trauma Life Support (ATLS) is the initial protocol used to rule out such injuries. Once the patient is stabilised, orthopedic injuries can be evaluated, including determining the severity of injury using a classification system. The mechanism of injury is important for estimating the amount of energy transferred to the patient and the level of contamination. Every limb should be exposed to look for any other hidden injuries. Characteristics of the wound should be noted in detail. The neurological and vascular status of the affected limb must be assessed to rule out any nerve or blood vessel injuries. A high index of suspicion for compartment syndrome should be maintained for leg and forearm fractures.
Classification
There are a number of classification systems attempting to categorise open fractures, such as the Gustilo-Anderson open fracture classification, the Tscherne classification, and the Müller AO Classification of fractures. However, the Gustilo-Anderson open fracture classification is the most commonly used system. The Gustilo system grades the fracture according to the energy of injury, soft tissue damage, level of contamination, and comminution of the fracture. The higher the grade, the worse the expected outcome of the fracture.
However, the Gustilo system is not without its limitations. The system has limited interobserver reliability of 50% to 60%, and the size of the injury on the skin surface does not necessarily reflect the extent of the deep underlying soft tissue injury. Therefore, the true Gustilo grade can only be assigned in the operating theatre.
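As an illustration of how the grading relates to the infection rates quoted in the epidemiology section above, the following minimal Python sketch pairs each grade with those reported figures. It is illustrative only, not a clinical tool, and the one-line grade summaries are condensed assumptions of this example rather than the formal Gustilo-Anderson definitions.
# Minimal illustrative sketch: Gustilo-Anderson grades paired with the
# approximate infection rates quoted earlier in this article.
# The one-line summaries are assumptions of this example, not formal definitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class GustiloGrade:
    grade: str
    summary: str          # condensed description (assumed wording)
    infection_rate: str   # approximate rate as reported above

GUSTILO = {
    g.grade: g for g in (
        GustiloGrade("I", "low-energy injury with a small, clean wound", "about 1.4%"),
        GustiloGrade("II", "larger wound without extensive soft-tissue damage", "about 3.6%"),
        GustiloGrade("IIIA", "extensive soft-tissue damage, bone still adequately covered", "about 22.7%"),
        GustiloGrade("IIIB", "extensive soft-tissue loss usually needing flap coverage", "10-50%"),
        GustiloGrade("IIIC", "open fracture with an arterial injury requiring repair", "10-50%"),
    )
}

if __name__ == "__main__":
    g = GUSTILO["IIIA"]
    print(f"Grade {g.grade}: {g.summary}; reported infection rate {g.infection_rate}")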
Management
Acute management
Urgent interventions, including therapeutic irrigation and wound debridement, are often necessary to clean the area of injury and minimize the risk of infection. Other risks of delayed intervention include long-term complications such as deep infection, vascular compromise and complete limb loss. After wound irrigation, dry or wet gauze should be applied to the wound to prevent bacterial contamination. Taking photographs of the wound can reduce the need for multiple examinations by different doctors, which could be painful. The limb should be reduced and placed in a well-padded splint to immobilise the fracture. Pulses should be documented before and after reduction.
Wound cultures are positive in 22% of pre-debridement cultures and 60% of post-debridement cultures of infected cases. Therefore, pre-operative cultures are no longer recommended; the value of post-operative cultures is unknown. Tetanus prophylaxis is routinely given to enhance the immune response against Clostridium tetani. Anti-tetanus immunoglobulin is only indicated for those with highly contaminated wounds and an uncertain vaccination history. A single intramuscular dose of 3000 to 5000 units of tetanus immunoglobulin is given to provide immediate immunity.
Another important clinical decision during acute management of open fractures involves the effort to avoid preventable amputations, where functional salvage of the limb is clearly desirable. Care must be taken to ensure this decision is not solely based on an injury severity tool score, but rather a decision made following a full discussion of options between doctors and the person, along with their family and care team.
Antibiotics
Administration of broad-spectrum intravenous antibiotics as soon as possible (ideally within an hour) is necessary to reduce the risk of infection. However, antibiotics may not provide significant benefit in open finger fractures and low-velocity firearm injuries. A first-generation cephalosporin (cefazolin) is recommended as the first-line antibiotic for the treatment of open fractures. This antibiotic is effective against gram-positive cocci and gram-negative rods such as Escherichia coli, Proteus mirabilis, and Klebsiella pneumoniae. To extend antibiotic coverage in Type III Gustilo fractures, a combination of a first-generation cephalosporin and an aminoglycoside (gentamicin or tobramycin), or a third-generation cephalosporin, is recommended to cover nosocomial gram-negative bacilli such as Pseudomonas aeruginosa. Adding penicillin to cover gas gangrene caused by the anaerobic bacterium Clostridium perfringens is a controversial practice; studies have shown that this may not be necessary, as the standard antibiotic regimen is sufficient to cover Clostridial infections. Antibiotic-impregnated devices such as tobramycin-impregnated poly(methyl methacrylate) (PMMA) beads and antibiotic bone cement are helpful in reducing rates of infection. The use of absorbable carriers with implant coatings at the time of surgical fixation is also an effective means of delivering local antibiotics.
There is no agreement on the optimal duration of antibiotics. Studies have shown no difference in infection risk between giving antibiotics for one day and giving them for three or five days, although at present there is only low to moderate evidence for this and more research is needed. Some authors recommend that antibiotics be given as three doses for Gustilo Grade I fractures, for one day after wound closure in Grade II fractures, for three days in Grade IIIA fractures, and for three days after wound closure in Grade IIIB and IIIC fractures.
Wound irrigation
There is no agreement on the optimal solution for wound irrigation. Studies have found no difference in infection rates between normal saline and various other forms of water (distilled, boiled, or tap), nor between normal saline with castile soap and normal saline with bacitracin. Studies have also shown no difference in infection rates between low-pressure pulse lavage (LPPL) and high-pressure pulse lavage (HPPL). The optimal amount of fluid for irrigation has also not been established. It is recommended that the amount of irrigation solution be determined by the severity of the fracture, with 3 litres for type I fractures, 6 litres for type II fractures, and 9 litres for type III fractures.
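As a worked example of the volume rule of thumb just described, the following minimal Python sketch maps a Gustilo type to the suggested irrigation volume (3, 6 or 9 litres). It is illustrative only; treating the IIIA/IIIB/IIIC subtypes as type III is an assumption of this example, and actual volumes remain a clinical decision.
# Minimal sketch of the grade-based irrigation rule cited above
# (3 L for type I, 6 L for type II, 9 L for type III open fractures).
IRRIGATION_LITRES = {"I": 3, "II": 6, "III": 9}

def suggested_irrigation_volume(gustilo_type: str) -> int:
    """Return the rule-of-thumb irrigation volume in litres for a Gustilo type."""
    key = gustilo_type.strip().upper().rstrip("ABC")  # treat IIIA/IIIB/IIIC as type III
    if key not in IRRIGATION_LITRES:
        raise ValueError(f"Unknown Gustilo type: {gustilo_type!r}")
    return IRRIGATION_LITRES[key]

if __name__ == "__main__":
    for t in ("I", "II", "IIIB"):
        print(t, "->", suggested_irrigation_volume(t), "litres")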
Wound debridement
The purpose of wound debridement is to remove all contaminated and non-viable tissue, including skin, subcutaneous fat, muscle and bone. The viability of bone and soft tissue is determined by its capacity to bleed, while the viability of muscle is judged by its colour, contractility, consistency, and capacity to bleed. The optimal timing of wound debridement and closure is debated and depends on the severity of the injury, the resources and antibiotics available, and individual needs. Debridement time can vary from 6 to 72 hours, and closure can be immediate (less than 72 hours) or delayed (72 hours to up to 3 months). There is no difference in infection rates between performing surgery within 6 hours of injury and performing it up to 72 hours after injury. NICE guidelines suggest that surgical debridement should be done immediately for open fractures that are highly contaminated or where there is major bleeding (vascular compromise). For high-energy open fractures that are not highly contaminated, NICE guidelines suggest surgical debridement within 12 hours of the accident, and for other open fractures within 24 hours.
Surgical management
Early fracture immobilisation and fixation help to prevent further soft tissue injury and promote wound and bone healing. This is especially important in the treatment of intraarticular fractures, where early fixation allows early joint motion and prevents joint stiffness. Fracture management depends on the person's overall well-being, the fracture pattern and location, and the extent of soft tissue injury. Both reamed and unreamed intramedullary nailing are accepted surgical treatments for open tibial fractures. Both techniques have similar rates of postoperative healing, postoperative infection, implant failure and compartment syndrome. Unreamed intramedullary nailing is advantageous because it has a lower incidence of superficial infection and malunion compared to external fixation. However, unreamed intramedullary nailing can result in high rates of hardware failure if the person's weight bearing after surgery is not closely controlled. Compared to external fixation, unreamed intramedullary nailing has similar rates of deep infection, delayed union and nonunion following surgery. For open tibial fractures in children, there is an increasing trend towards using an orthopedic cast rather than external fixation. Bone grafting is also helpful in fracture repair. However, internal fixation using plates and screws is not recommended, as it increases the rate of infection. Amputation is a last-resort intervention, and the decision is determined by factors such as tissue viability and coverage, infection, and the extent of damage to the vascular system.
Wound management and closure
Early wound closure is recommended to reduce the rates of hospital-acquired infection. For Grade I and II fractures, the wound can be healed by secondary intention or through primary closure. There is conflicting evidence on the effectiveness of negative-pressure wound therapy (vacuum dressing), with several sources citing a decreased risk of infection and others suggesting no proven benefit.
Adjunct treatments
A limited number of studies have assessed the efficacy of recombinant human bone morphogenetic protein-2 (rhBMP-2) on healing and infection risk. Results are encouraging, but no conclusive answers have been agreed upon to date.
Prophylactic bone grafting, typically performed after the wound has been closed for two weeks but within 12 weeks of injury, may help those treated with external fixation heal faster. Bone graft can be impregnated with antibiotics to theoretically decrease infection risk.
Complications
When a bone is broken and exposed to the outside environment, the probability of infection increases. Both the surrounding soft tissues and the bone itself can become infected; infection of the bone is called osteomyelitis. Additional complications include the broken bone ends not healing, called non-union, and the broken bone ends healing in an incorrect orientation, called malunion. Open fractures of long bones may cause subsequent damage to surrounding tissue resulting in compartment syndrome, and there is also potential for fat embolism; both require acute intervention. Lastly, open fractures commonly occur in the setting of traumatic experiences, and the co-occurrence of these events may lead to chronic pain and mental health disorders. The setting or mechanism of injury of the open fracture can affect the risk of infection; for example, external objects or dirt in the wound increase the risk of infection.
Outcomes
Infection
The infection rate of open fractures depends on the characteristics of the injury, the type and timing of treatment, and patient factors. Higher rates of infection are associated with a higher Gustilo classification: the risk of infection with a grade III fracture is up to 19.2%, while a grade I or II fracture can carry a 7.2% risk of infection. Deep infection is more likely with increasing time between the injury and antibiotic administration. There is an increased risk of infection in patients who smoke or have diabetes. The most common pathogen implicated in infected open fractures is Staphylococcus aureus.
History
In Ancient Egypt, physicians were diagnosing and treating open fractures. Treatment consisted of manual reduction, in which the broken bone is made straight again with physical maneuvers, followed by the application of splints and topical ointments. Splints were constructed using linen and sticks or tree bark. A topical ointment consisting of honey, grease, and lint made from vegetable fiber was then applied daily to the open fracture. However, the Ancient Egyptians noted open fractures to have a poor prognosis, and fifth dynasty graves have been discovered containing people who had died from open fractures.
During the 19th century Crimean War, the use of plaster-of-paris for the stabilization of open and closed fractures was pioneered. It has been reported that the pioneering Russian surgeon who introduced the novel technique had been inspired by watching sculptors creating works of art.
Before the 1850s, surgeons usually amputated the limbs of those with open fractures, as such injuries were associated with severe sepsis and gangrene, which could be life-threatening. It was not until the latter half of the 19th century, when Joseph Lister adopted the aseptic technique in surgeries, that the rate of death from open fractures fell from 50% to 9%.
| Biology and health sciences | Types | Health |
9127632 | https://en.wikipedia.org/wiki/Biology | Biology | Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are composed of at least one cell that processes hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms can regulate their own internal environments.
Biologists can study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and the evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them.
Life on Earth, which emerged over 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment.
History
The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scientific study of plants. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz (781–869), Al-Dīnawarī (828–896), who wrote on botany, and Rhazes (865–925), who wrote on anatomy and physiology. Medicine was especially well studied by Islamic scholars working in Greek philosophical traditions, while natural history drew heavily on Aristotelian thought.
Biology began to quickly develop with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, infusoria and the diversity of microscopic life. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop techniques of microscopic dissection and staining. Advances in microscopy had a profound impact on biological thinking. In the early 19th century, biologists pointed to the central importance of the cell. In 1838, Schleiden and Schwann began promoting the now universal ideas that (1) the basic unit of organisms is the cell and (2) that individual cells have all the characteristics of life, although they opposed the idea that (3) all cells come from the division of other cells, continuing to support spontaneous generation. However, Robert Remak and Rudolf Virchow were able to reify the third tenet, and by the 1860s most biologists accepted all three tenets which consolidated into cell theory.
Meanwhile, taxonomy and classification became the focus of natural historians. Carl Linnaeus published a basic taxonomy for the natural world in 1735, and in the 1750s introduced scientific names for all his species. Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living forms as malleable—even suggesting the possibility of common descent.
Serious evolutionary thinking originated with the works of Jean-Baptiste Lamarck, who presented a coherent theory of evolution. The British naturalist Charles Darwin, combining the biogeographical approach of Humboldt, the uniformitarian geology of Lyell, Malthus's writings on population growth, and his own morphological expertise and extensive natural observations, forged a more successful evolutionary theory based on natural selection; similar reasoning and evidence led Alfred Russel Wallace to independently reach the same conclusions.
The basis for modern genetics began with the work of Gregor Mendel in 1865. This outlined the principles of biological inheritance. However, the significance of his work was not realized until the early 20th century when evolution became a unified theory as the modern synthesis reconciled Darwinian evolution with classical genetics. In the 1940s and early 1950s, a series of experiments by Alfred Hershey and Martha Chase pointed to DNA as the component of chromosomes that held the trait-carrying units that had become known as genes. A focus on new kinds of model organisms such as viruses and bacteria, along with the discovery of the double-helical structure of DNA by James Watson and Francis Crick in 1953, marked the transition to the era of molecular genetics. From the 1950s onwards, biology has been vastly extended in the molecular domain. The genetic code was cracked by Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg after DNA was understood to contain codons. The Human Genome Project was launched in 1990 to map the human genome.
Chemical basis
Atoms and molecules
All organisms are made up of chemical elements; oxygen, carbon, hydrogen, and nitrogen account for most (96%) of the mass of all organisms, with calcium, phosphorus, sulfur, sodium, chlorine, and magnesium constituting essentially all the remainder. Different elements can combine to form compounds such as water, which is fundamental to life. Biochemistry is the study of chemical processes within and relating to living organisms. Molecular biology is the branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including molecular synthesis, modification, mechanisms, and interactions.
Water
Life arose from the Earth's first ocean, which formed some 3.8 billion years ago. Since then, water continues to be the most abundant molecule in every organism. Water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. Once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. In terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen (H) atoms to one oxygen (O) atom (H2O). Because the O–H bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. This polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. Surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. Water is also adhesive as it is able to adhere to the surface of any polar or charged non-water molecules. Water is denser as a liquid than it is as a solid (or ice). This unique property of water allows ice to float above liquid water such as ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. Water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. Thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. As a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. In pure water, the number of hydrogen ions balances (or equals) the number of hydroxyl ions, resulting in a pH that is neutral.
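As a worked illustration of the final point: in pure water at 25 °C, the hydrogen ion concentration is about 1 × 10^−7 mol/L and equals the hydroxyl ion concentration, so the pH is −log10(10^−7) = 7, which is the neutral value.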
Organic compounds
Organic compounds are molecules that contain carbon bonded to another element such as hydrogen. With the exception of water, nearly all the molecules that make up each organism contain carbon. Carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. For example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide (CO2), or a triple covalent bond such as in carbon monoxide (CO). Moreover, carbon can form very long chains of interconnecting carbon–carbon bonds such as octane or ring-like structures such as glucose.
The simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other elements such as oxygen (O), hydrogen (H), phosphorus (P), and sulfur (S), which can change the chemical behavior of that compound. Groups of atoms that contain these elements (O-, H-, P-, and S-) and are bonded to a central carbon atom or skeleton are called functional groups. There are six prominent functional groups that can be found in organisms: amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group.
In 1953, the Miller–Urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early Earth, thus suggesting that complex organic molecules could have arisen spontaneously in early Earth (see abiogenesis).
Macromolecules
Macromolecules are large molecules made up of smaller subunits or monomers. Monomers include sugars, amino acids, and nucleotides. Carbohydrates include monomers and polymers of sugars.
Lipids are the only class of macromolecules that are not made up of polymers. They include steroids, phospholipids, and fats, largely nonpolar and hydrophobic (water-repelling) substances.
Proteins are the most diverse of the macromolecules. They include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. The basic unit (or monomer) of a protein is an amino acid. Twenty amino acids are used in proteins.
Nucleic acids are polymers of nucleotides. Their function is to store, transmit, and express hereditary information.
Cells
Cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. Most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. There are generally two types of cells: eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. Prokaryotes are single-celled organisms such as bacteria, whereas eukaryotes can be single-celled or multicellular. In multicellular organisms, every cell in the organism's body is derived ultimately from a single cell in a fertilized egg.
Cell structure
Every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. A cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. Cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. Cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. Cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton.
Within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. In addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially distinct units. These organelles include the cell nucleus, which contains most of the cell's DNA, or mitochondria, which generate adenosine triphosphate (ATP) to power cellular processes. Other organelles such as the endoplasmic reticulum and Golgi apparatus play a role in the synthesis and packaging of proteins, respectively. Biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. Plant cells have additional organelles that distinguish them from animal cells, such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and the breakdown of plant seeds. Eukaryotic cells also have a cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. In terms of their structural composition, the microtubules are made up of tubulin (e.g., α-tubulin and β-tubulin) whereas intermediate filaments are made up of fibrous proteins. Microfilaments are made up of actin molecules that interact with other strands of proteins.
Metabolism
All cells require energy to sustain cellular processes. Metabolism is the set of chemical reactions in an organism. The three main purposes of metabolism are: the conversion of food to energy to run cellular processes; the conversion of food/fuel to monomer building blocks; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, the breaking down of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy. The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy that will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly without being consumed by it—by reducing the amount of activation energy needed to convert reactants into products. Enzymes also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
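The effect of lowering activation energy can be made concrete with the Arrhenius relation k = A·exp(−Ea/RT). The sketch below uses purely illustrative numbers; the 50 and 30 kJ/mol barriers and the prefactor are assumptions, not data about any real enzyme.

    import math

    R = 8.314          # gas constant, J/(mol·K)
    T = 310.0          # body temperature, K
    A = 1.0e13         # assumed pre-exponential factor, 1/s

    def rate_constant(ea_joules_per_mol):
        # Arrhenius relation: k = A * exp(-Ea / (R * T))
        return A * math.exp(-ea_joules_per_mol / (R * T))

    k_uncatalyzed = rate_constant(50_000)   # assumed 50 kJ/mol barrier
    k_catalyzed   = rate_constant(30_000)   # assumed 30 kJ/mol barrier with an enzyme
    print(k_catalyzed / k_uncatalyzed)      # ~2,300-fold speed-up from a 20 kJ/mol drop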
Cellular respiration
Cellular respiration is a set of metabolic reactions and processes that take place in cells to convert chemical energy from nutrients into adenosine triphosphate (ATP), and then release waste products. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, releasing energy. Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it clearly does not resemble one when it occurs in a cell because of the slow, controlled release of energy from the series of reactions.
Sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration. Cellular respiration involving oxygen is called aerobic respiration, which has four stages: glycolysis, the citric acid cycle (or Krebs cycle), the electron transport chain, and oxidative phosphorylation. Glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of ATP being produced at the same time. Each pyruvate is then oxidized into acetyl-CoA by the pyruvate dehydrogenase complex, which also generates NADH and carbon dioxide. Acetyl-CoA enters the citric acid cycle, which takes place inside the mitochondrial matrix. At the end of the cycle, the total yield from 1 glucose (or 2 pyruvates) is 6 NADH, 2 FADH2, and 2 ATP molecules. The final stage is oxidative phosphorylation, which, in eukaryotes, occurs in the mitochondrial cristae. Oxidative phosphorylation comprises the electron transport chain, which is a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from NADH and FADH2 that is coupled to the pumping of protons (hydrogen ions) across the inner mitochondrial membrane (chemiosmosis), which generates a proton motive force. Energy from the proton motive force drives the enzyme ATP synthase to synthesize more ATPs by phosphorylating ADPs. The transfer of electrons terminates with molecular oxygen being the final electron acceptor.
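The bookkeeping in this paragraph can be tallied explicitly. The sketch below uses the common textbook approximations of about 2.5 ATP per NADH and 1.5 ATP per FADH2 (the exact values depend on the shuttle used and are debated), which give the familiar estimate of roughly 30–32 ATP per glucose.

    # Approximate ATP yield per glucose under aerobic respiration.
    # Assumed conversion factors (textbook approximations): 2.5 ATP/NADH, 1.5 ATP/FADH2.
    ATP_PER_NADH, ATP_PER_FADH2 = 2.5, 1.5

    nadh  = 2 + 2 + 6             # glycolysis + pyruvate oxidation + citric acid cycle
    fadh2 = 2                     # citric acid cycle
    substrate_level_atp = 2 + 2   # net from glycolysis + from the citric acid cycle

    total = substrate_level_atp + nadh * ATP_PER_NADH + fadh2 * ATP_PER_FADH2
    print(total)   # 32.0 (closer to 30 if cytosolic NADH enters via a less efficient shuttle)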
If oxygen is not present, pyruvate is not metabolized by cellular respiration but instead undergoes a process of fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again, and of removing the excess pyruvate. Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis. In the absence of oxygen, fermentation prevents the buildup of NADH in the cytoplasm and provides NAD+ for glycolysis. The waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid, and this type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ attaches to hydrogen from lactate to form ATP. In yeast, the waste products are ethanol and carbon dioxide, and this type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen.
Photosynthesis
Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism's metabolic activities via cellular respiration. This chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. In most cases, oxygen is released as a waste product. Most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere, and supplies most of the energy necessary for life on Earth.
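The overall process can be summarized by the familiar net equation 6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2, in which carbon dioxide and water are converted into glucose with oxygen released as a by-product.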
Photosynthesis has four stages: Light absorption, electron transport, ATP synthesis, and carbon fixation. Light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. The absorbed light energy is used to remove electrons from a donor (water) to a primary electron acceptor, a quinone designated as Q. In the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of NADP+, which is reduced to NADPH, a process that takes place in a protein complex called photosystem I (PSI). The transport of electrons is coupled to the movement of protons (or hydrogen) from the stroma to the thylakoid membrane, which forms a pH gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. This is analogous to the proton-motive force generated across the inner mitochondrial membrane in aerobic respiration.
During the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the ATP synthase is coupled to the synthesis of ATP by that same ATP synthase. The NADPH and ATPs generated by the light-dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate (RuBP) in a sequence of light-independent (or dark) reactions called the Calvin cycle.
Cell signaling
Cell signaling (or communication) is the ability of cells to receive, process, and transmit signals between themselves and their environment. Signals can be non-chemical, such as light, electrical impulses, and heat, or chemical signals (or ligands) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. There are generally four types of chemical signals: autocrine, paracrine, juxtacrine, and hormones. In autocrine signaling, the ligand affects the same cell that releases it. Tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self-division. In paracrine signaling, the ligand diffuses to nearby cells and affects them. For example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or muscle cell. In juxtacrine signaling, there is direct contact between the signaling and responding cells. Finally, hormones are ligands that travel through the circulatory systems of animals or the vascular systems of plants to reach their target cells. Once a ligand binds with a receptor, it can influence the behavior of another cell, depending on the type of receptor. For instance, neurotransmitters that bind with an ionotropic receptor can alter the excitability of a target cell. Other types of receptors include protein kinase receptors (e.g., the receptor for the hormone insulin) and G protein-coupled receptors. Activation of G protein-coupled receptors can initiate second messenger cascades. The process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction.
Cell cycle
The cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. These events include the duplication of its DNA and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. In eukaryotes (i.e., animal, plant, fungal, and protist cells), there are two distinct types of cell division: mitosis and meiosis. Mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA is replicated) and is often followed by telophase and cytokinesis; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the mitotic phase of an animal cell cycle—the division of the mother cell into two genetically identical daughter cells. The cell cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. After cell division, each of the daughter cells begin the interphase of a new cycle. In contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of DNA replication followed by two divisions. Homologous chromosomes are separated in the first division (meiosis I), and sister chromatids are separated in the second division (meiosis II). Both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. Both are believed to be present in the last eukaryotic common ancestor.
Prokaryotes (i.e., archaea and bacteria) can also undergo cell division (or binary fission). Unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. Before binary fission, DNA in the bacterium is tightly coiled. After it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as the cell increases in size to prepare for splitting. Growth of a new cell wall begins to separate the bacterium (triggered by FtsZ polymerization and "Z-ring" formation). The new cell wall (septum) fully develops, resulting in the complete split of the bacterium. The new daughter cells have tightly coiled DNA rods, ribosomes, and plasmids.
Sexual reproduction and meiosis
Meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. Two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic DNA damage and genetic complementation which masks the expression of deleterious recessive mutations.
The beneficial effect of genetic complementation, derived from outcrossing (cross-fertilization) is also referred to as hybrid vigor or heterosis. Charles Darwin in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom at the start of chapter XII noted “The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented.” Genetic variation, often produced as a byproduct of sexual reproduction, may provide long-term advantages to those sexual lineages that engage in outcrossing.
Genetics
Inheritance
Genetics is the scientific study of inheritance. Mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. It has several principles. The first is that genetic characteristics, alleles, are discrete and have alternate forms (e.g., purple vs. white or tall vs. dwarf), each inherited from one of two parents. Based on the law of dominance and uniformity, which states that some alleles are dominant while others are recessive, an organism with at least one dominant allele will display the phenotype of that dominant allele. During gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. Heterozygous individuals produce gametes with an equal frequency of the two alleles. Finally, the law of independent assortment states that genes of different traits can segregate independently during the formation of gametes, i.e., the genes are unlinked. An exception to this rule would include traits that are sex-linked. Test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype, and a Punnett square can be used to predict the results of a test cross. The chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by Thomas Morgan's experiments with fruit flies, which established the sex linkage of eye color in these insects.
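A Punnett square is simple enough to enumerate directly. The short sketch below works through a hypothetical monohybrid test cross (Aa × aa), showing the expected 1:1 ratio of dominant to recessive phenotypes; the genotypes are illustrative, not taken from any particular experiment.

    from collections import Counter
    from itertools import product

    def punnett(parent1, parent2):
        # Each parent is a 2-character genotype, e.g. "Aa"; offspring combine one
        # allele from each parent, as described by the law of segregation.
        return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

    # Test cross of a heterozygote against a homozygous recessive individual.
    print(punnett("Aa", "aa"))   # Counter({'Aa': 2, 'aa': 2}) -> a 1:1 ratio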
Genes and DNA
A gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid (DNA) that carries genetic information that controls form or function of an organism. DNA is composed of two polynucleotide chains that coil around each other to form a double helix. It is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell is collectively known as its genome. In eukaryotes, DNA is mainly in the cell nucleus. In prokaryotes, the DNA is held within the nucleoid. The genetic information is held within genes, and the complete assemblage in an organism is called its genotype.
DNA replication is a semiconservative process whereby each strand serves as a template for a new strand of DNA. Mutations are heritable changes in DNA. They can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical (e.g., nitrous acid, benzopyrene) or radiation (e.g., x-ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes). Mutations can lead to phenotypic effects such as loss-of-function, gain-of-function, and conditional mutations.
Some mutations are beneficial, as they are a source of genetic variation for evolution. Others are harmful if they result in a loss of function of genes needed for survival.
Gene expression
Gene expression is the molecular process by which a genotype encoded in DNA gives rise to an observable phenotype in the proteins of an organism's body. This process is summarized by the central dogma of molecular biology, which was formulated by Francis Crick in 1958. According to the Central Dogma, genetic information flows from DNA to RNA to protein. There are two gene expression processes: transcription (DNA to RNA) and translation (RNA to protein).
Gene regulation
The regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process such as transcription, RNA splicing, translation, and post-translational modification of a protein. Gene expression can be influenced by positive or negative regulation, depending on which of the two types of regulatory proteins called transcription factors bind to the DNA sequence close to or at a promoter. A cluster of genes that share the same promoter is called an operon, found mainly in prokaryotes and some lower eukaryotes (e.g., Caenorhabditis elegans). In positive regulation of gene expression, the activator is the transcription factor that stimulates transcription when it binds to the sequence near or at the promoter. Negative regulation occurs when another transcription factor called a repressor binds to a DNA sequence called an operator, which is part of an operon, to prevent transcription. Repressors can be inhibited by compounds called inducers (e.g., allolactose), thereby allowing transcription to occur. Specific genes that can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. In contrast to both, structural genes encode proteins that are not involved in gene regulation. In addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of DNA and protein found in eukaryotic cells.
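The logic of negative regulation with an inducer can be sketched as a small boolean model. This is only an illustration of the operon idea described above (roughly the lac-operon pattern); the function and its arguments are hypothetical.

    def operon_transcribed(repressor_present, inducer_present, activator_bound=True):
        # Negative regulation: an active repressor bound to the operator blocks
        # transcription; an inducer (e.g. allolactose) inactivates the repressor.
        repressor_blocking = repressor_present and not inducer_present
        # Positive regulation: an activator bound near the promoter is also needed.
        return activator_bound and not repressor_blocking

    print(operon_transcribed(repressor_present=True, inducer_present=False))  # False
    print(operon_transcribed(repressor_present=True, inducer_present=True))   # True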
Genes, development, and evolution
Development is the process by which a multicellular organism (plant or animal) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. There are four key processes that underlie development: Determination, differentiation, morphogenesis, and growth. Determination sets the developmental fate of a cell, which becomes more restrictive during development. Differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. Stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. Cellular differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. Thus, different cells can have very different physical characteristics despite having the same genome. Morphogenesis, or the development of body form, is the result of spatial differences in gene expression. A small fraction of the genes in an organism's genome called the developmental-genetic toolkit control the development of that organism. These toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Among the most important toolkit genes are the Hox genes. Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva.
Evolution
Evolutionary processes
Evolution is a central organizing concept in biology. It is the change in heritable characteristics of populations over successive generations. In artificial selection, animals were selectively bred for specific traits.
Given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population, Darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. Darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. He further inferred that this would lead to the accumulation of favorable traits over successive generations, thereby increasing the match between the organisms and their environment.
Speciation
A species is a group of organisms that mate with one another, and speciation is the process by which one lineage splits into two lineages as a result of having evolved independently of each other. For speciation to occur, there has to be reproductive isolation. Reproductive isolation can result from incompatibilities between genes, as described by the Bateson–Dobzhansky–Muller model. Reproductive isolation also tends to increase with genetic divergence. Speciation can occur when there are physical barriers that divide an ancestral species, a process known as allopatric speciation.
Phylogeny
A phylogeny is an evolutionary history of a specific group of organisms or their genes. It can be represented using a phylogenetic tree, a diagram showing lines of descent among organisms or their genes. Each line drawn on the time axis of a tree represents a lineage of descendants of a particular species or population. When a lineage divides into two, it is represented as a fork or split on the phylogenetic tree. Phylogenetic trees are the basis for comparing and grouping different species. Different species that share a feature inherited from a common ancestor are described as having homologous features (or synapomorphy). Phylogeny provides the basis of biological classification. This classification system is rank-based, with the highest rank being the domain followed by kingdom, phylum, class, order, family, genus, and species. All organisms can be classified as belonging to one of three domains: Archaea (originally Archaebacteria), bacteria (originally eubacteria), or eukarya (includes the fungi, plant, and animal kingdoms).
History of life
The history of life on Earth traces how organisms have evolved from the earliest emergence of life to present day. Earth formed about 4.5 billion years ago and all life on Earth, both living and extinct, descended from a last universal common ancestor that lived about 3.5 billion years ago. Geologists have developed a geologic time scale that divides the history of the Earth into major divisions, starting with four eons (Hadean, Archean, Proterozoic, and Phanerozoic), the first three of which are collectively known as the Precambrian, which lasted approximately 4 billion years. Each eon can be divided into eras, with the Phanerozoic eon that began 539 million years ago being subdivided into Paleozoic, Mesozoic, and Cenozoic eras. These three eras together comprise eleven periods (Cambrian, Ordovician, Silurian, Devonian, Carboniferous, Permian, Triassic, Jurassic, Cretaceous, Tertiary, and Quaternary).
The similarities among all known present-day species indicate that they have diverged through the process of evolution from their common ancestor. Biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon and many of the major steps in early evolution are thought to have taken place in this environment. The earliest evidence of eukaryotes dates from 1.85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. Later, around 1.7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions.
Algae-like multicellular land plants date back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2.7 billion years ago. Microorganisms are thought to have paved the way for the inception of land plants in the Ordovician period. Land plants were so successful that they are thought to have contributed to the Late Devonian extinction event.
Ediacara biota appear during the Ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the Cambrian explosion. During the Permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the Permian–Triassic extinction event 252 million years ago. During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates; one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods. After the Cretaceous–Paleogene extinction event 66 million years ago killed off the non-avian dinosaurs, mammals increased rapidly in size and diversity. Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.
Diversity
Bacteria and Archaea
Bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. Typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the Earth's crust. Bacteria also live in symbiotic and parasitic relationships with plants and animals. Most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory.
Archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), a term that has fallen out of use. Archaeal cells have unique properties separating them from the other two domains, Bacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of Haloquadratum walsbyi. Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. Archaea use more energy sources than eukaryotes: these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. Salt-tolerant archaea (the Haloarchaea) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. Archaea reproduce asexually by binary fission, fragmentation, or budding; unlike bacteria, no known species of Archaea form endospores.
The first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet.
Archaea are a major part of Earth's life. They are part of the microbiota of all organisms. In the human microbiome, they are important in the gut, mouth, and on the skin. Their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles: carbon fixation; nitrogen cycling; organic compound turnover; and maintaining microbial symbiotic and syntrophic communities, for example.
Eukaryotes
Eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria (or symbiogenesis) that gave rise to mitochondria and chloroplasts, both of which are now part of modern-day eukaryotic cells. The major lineages of eukaryotes diversified in the Precambrian about 1.5 billion years ago and can be classified into eight major clades: alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. Five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. While it is likely that protists share a common ancestor (the last eukaryotic common ancestor), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. Like groupings such as algae, invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. Most protists are unicellular; these are called microbial eukaryotes.
Plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom Plantae, which excludes fungi and some algae. Plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. The first several clades that emerged following primary endosymbiosis were aquatic, and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, a term of convenience as not all algae are closely related. Algae comprise several distinct clades such as the glaucophytes, microscopic freshwater algae that may have resembled the early unicellular ancestor of Plantae in form. Unlike glaucophytes, the other algal clades such as red and green algae are multicellular. Green algae comprise three major clades: chlorophytes, coleochaetophytes, and stoneworts.
Fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. Many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems.
Animals are multicellular eukaryotes. With few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million animal species in total. They have complex interactions with each other and their environments, forming intricate food webs.
Viruses
Viruses are submicroscopic infectious agents that replicate inside the cells of organisms. Viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. More than 6,000 virus species have been described in detail. Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity.
The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. Because viruses possess some but not all characteristics of life, they have been described as "organisms at the edge of life", and as self-replicators.
Ecology
Ecology is the study of the distribution and abundance of life, and of the interactions between organisms and their environment.
Ecosystems
The community of living (biotic) organisms in conjunction with the nonliving (abiotic) components (e.g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil) of their environment is called an ecosystem. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy from the sun enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals move matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and other microbes.
Populations
A population is a group of organisms of the same species that occupies an area and reproduces from generation to generation. Population size can be estimated by multiplying population density by the area or volume occupied. The carrying capacity of an environment is the maximum population size of a species that can be sustained by that specific environment, given the food, habitat, water, and other resources that are available. The carrying capacity of a population can be affected by changing environmental conditions such as changes in the availability of resources and the cost of maintaining them. In human populations, new technologies such as the Green Revolution have helped increase the Earth's carrying capacity for humans over time, confounding predictions of impending population decline, the most famous of which was made by Thomas Malthus in the 18th century.
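The density-based estimate mentioned above amounts to a single multiplication; the numbers in the sketch below are made up purely for illustration.

    def estimate_population_size(density_per_km2, area_km2):
        # Population size ≈ population density × area occupied.
        return density_per_km2 * area_km2

    # Hypothetical example: 12 voles per square kilometre over 350 km^2.
    print(estimate_population_size(12, 350))   # 4200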
Communities
A community is a group of populations of species occupying the same geographical area at the same time. A biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, like pollination and predation, or long-term; both often strongly influence the evolution of the species involved. A long-term interaction is called a symbiosis. Symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. Every species participates as a consumer, resource, or both in consumer–resource interactions, which form the core of food chains or food webs. There are different trophic levels within any food web, with the lowest level being the primary producers (or autotrophs) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. At the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. Heterotrophs that consume plants are primary consumers (or herbivores) whereas heterotrophs that consume herbivores are secondary consumers (or carnivores). And those that eat secondary consumers are tertiary consumers and so on. Omnivorous heterotrophs are able to consume at multiple levels. Finally, there are decomposers that feed on the waste products or dead bodies of organisms.
On average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one-tenth of the energy of the trophic level that it consumes. Waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level.
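The roughly ten-percent rule can be followed through a short food chain; the starting figure of 10,000 energy units for the primary producers is an arbitrary illustrative value.

    # Energy passed up a food chain, assuming ~10% transfer between trophic levels.
    TRANSFER_EFFICIENCY = 0.10
    energy = 10_000   # arbitrary units fixed by the primary producers

    for level in ["primary producers", "primary consumers",
                  "secondary consumers", "tertiary consumers"]:
        print(level, round(energy))
        energy *= TRANSFER_EFFICIENCY   # ~90% lost as heat, waste, and dead material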
Biosphere
In the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. For example, matter from terrestrial autotrophs are both biotic and accessible to other organisms whereas the matter in rocks and minerals are abiotic and inaccessible. A biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic (biosphere) and the abiotic (lithosphere, atmosphere, and hydrosphere) compartments of Earth. There are biogeochemical cycles for nitrogen, carbon, and water.
Conservation
Conservation biology is the study of the conservation of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. It is concerned with factors that influence the maintenance, loss, and restoration of biodiversity, and with the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years, a loss that would contribute to poverty and starvation and would reset the course of evolution on this planet. Biodiversity affects the functioning of ecosystems, which provide a variety of services upon which people depend. Conservation biologists research and educate on the trends of biodiversity loss, species extinctions, and the negative effects these are having on our capability to sustain the well-being of human society. Organizations and citizens are responding to the current biodiversity crisis through conservation action plans that direct research, monitoring, and education programs that engage concerns at local through global scales.
| Biology and health sciences | Science and medicine | null |
5819718 | https://en.wikipedia.org/wiki/Portia%20labiata | Portia labiata | Portia labiata is a jumping spider (family Salticidae) found in Sri Lanka, India, southern China, Burma (Myanmar), Malaysia, Singapore, Java, Sumatra and the Philippines. In this medium-sized jumping spider, the front part is orange-brown and the back part is brownish. The conspicuous main eyes provide vision more acute than a cat's during the day and 10 times more acute than a dragonfly's, and this is essential in P. labiata′s navigation, hunting and mating.
The genus Portia has been called "eight-legged cats", as their hunting tactics are as versatile and adaptable as a lion's. All members of Portia have instinctive hunting tactics for their most common prey, but often can improvise by trial and error against unfamiliar prey or in unfamiliar situations, and then remember the new approach. While most jumping spiders prey mainly on insects and by active hunting, females of Portia also build webs to catch prey directly and sometimes join their own webs on to those of web-based spiders. Both females and males prefer web spiders as prey, followed by other jumping spiders, and finally insects. In all cases females are more effective predators than males.
Populations from Los Baños and from Sagada, both in the Philippines, have slightly different hunting tactics. In laboratory tests, Los Baños P. labiata relies more on trial and error than Sagada P. labiata in finding ways to vibrate the prey's web and thus lure or distract the prey. Around Los Baños the web-building Scytodes pallida, which preys on jumping spiders, is very abundant, and spits a sticky gum on prey and potential threats. A P. labiata from Los Baños instinctively detours round the back of S. pallida while plucking the web in a way that makes the prey believe the threat is in front of it. In areas where S. pallida is absent, the local members of P. labiata do not use this combination of deception and detouring for a stab in the back. In a test to explore P. labiata′s ability to solve a novel problem, a miniature lagoon was set up, and the spiders had to find the best way to cross it. Specimens from Sagada, in the mountains, almost always repeated the first option they tried, even when that was unsuccessful. When specimens from Los Baños, beside a lake, were unsuccessful the first time, about three quarters switched to another option.
Adults of P. labiata sometimes use "propulsive displays", in which an individual threatens a rival of the same sex, and unreceptive females also threaten males in this way. P. labiata females are extremely aggressive to other females, trying to invade and take over each other's webs, which often results in cannibalism. A test showed that they minimise the risk of confrontations by using silk draglines as territory marks. Another test showed that females can recognise the draglines of the most powerful fighters and prefer to move near the draglines of less powerful ones. Females try to kill and eat their mates during or after copulation, while males use tactics to survive copulation, but sometimes females outwit them. Before being mature enough to mate, juvenile females mimic adult females to attract males as prey. When hunting, P. labiata mature females emit olfactory signals that reduce the risk that any other females, males or juveniles of the same species may contend for the same prey.
Body structure and appearance
As in most species of the genus, the bodies of Portia labiata females are 7 to 10 millimetres long and their carapaces are 2.8 to 3.8 millimetres long. Males' bodies are 5 to 7.5 millimetres long, with carapaces 2.4 to 3.3 millimetres long. The carapaces of females are orange-brown, slightly lighter around the eyes, where there are sooty streaks and sometimes a violet to green sheen in certain lights. There is a broad white moustache along the bottom of the carapace, and running back from each main eye is a ridge that looks like a horn. Females' chelicerae are dark orange-brown and decorated with sparse white hairs, which form bands near the carapaces. The abdomens of females are mottled brown and black, and bear hairs of gold, white and black, and there are tufts consisting of brown hairs tipped with white. The carapaces of males are orange-brown, slightly lighter around the eyes, and have brown-black hairs lying on the surface but with a white wedge-shape stripe from the highest point down to the back, and white bands just above the legs. Males' chelicerae are also orange-brown with brown-black markings. The abdomens of males are brown with lighter markings and with brown-black hairs lying on the surface, and a short band of white hairs. The legs of both sexes are dark brown, with light markings in the femora (the sections of the legs nearest the body). All species of the genus Portia have elastic abdomens, so that those of both sexes can become almost spherical when well fed, and females' can stretch as much when producing eggs.
Senses
Although other spiders can also jump, salticids including Portia fimbriata have significantly better vision than other spiders, and their main eyes are more acute in daylight than a cat's and 10 times more acute than a dragonfly's. Jumping spiders have eight eyes, the two large ones in the center-and-front position (the anterior-median eyes, also called "principal eyes") housed in tubes in the head and providing acute vision. The other six are secondary eyes, positioned along the sides of the carapace and acting mainly as movement detectors. In most jumping spiders, the middle pair of secondary eyes are very small and have no known function, but those of Portia species are relatively large, and function as well as those of the other secondary eyes. The main eyes focus accurately on an object at distances from approximately 2 centimetres to infinity, and in practice can see up to about 75 centimetres. Like all jumping spiders, P. labiata can take in only a small visual field at one time, as the most acute part of a main eye can see all of a circle up to 12 millimetres wide at 20 centimetres away, or up to 18 millimetres wide at 30 centimetres away. Jumping spiders' main eyes can see from red to ultraviolet.
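The two distance/width figures quoted above correspond to roughly the same visual angle, which suggests that the acute zone covers a fixed angular cone rather than a fixed width. A minimal sketch of that arithmetic (the function name is illustrative; the figures are the ones quoted above):

```python
import math

def visual_angle_deg(width_mm: float, distance_mm: float) -> float:
    """Full angle (in degrees) subtended by a circle of the given diameter
    viewed from the given distance."""
    return math.degrees(2 * math.atan((width_mm / 2) / distance_mm))

# Figures quoted above: about 12 mm at 20 cm and about 18 mm at 30 cm.
print(visual_angle_deg(12, 200))  # ~3.4 degrees
print(visual_angle_deg(18, 300))  # ~3.4 degrees
```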
Generally the jumping spider subfamily Spartaeinae, which includes the genus Portia, cannot discriminate objects at such long distances as the members of subfamilies Salticinae or Lyssomaninae can. However, members of Portia have vision about as acute as the best of the jumping spiders, for example: the salticine Mogrus neglectus can distinguish prey and conspecifics up to 320 millimetres away (42 times its own body length), while P. fimbriata can distinguish these up to 280 millimetres (47 times its own body length). The main eyes of a Portia can also identify features of the scenery up to 85 times its own body length, which helps the spider to find detours.
However, a Portia takes a relatively long time to see objects, possibly because getting a good image out of such tiny eyes is a complex process and needs a lot of scanning. This makes a Portia vulnerable to much larger predators such as birds, frogs and mantises, which a Portia often cannot identify because of the other predator's size.
Spiders, like other arthropods, have sensors, often modified setae (bristles), for smell, taste, touch and vibration protruding through their cuticle ("skin"). Unlike insects, spiders and other chelicerates do not have antennae. A Portia can sense vibrations from surfaces, and use these for mating and for hunting other spiders in total darkness. It can use air-borne and surface-borne "smells" to detect prey which it often meets, to identify members of the same species, to recognise familiar members, and to determine the sex of other members of the same species.
Hunting tactics
Tactics used by most jumping spiders and by most of genus Portia
Almost all jumping spiders are predators, mostly preying on insects, on other spiders, and on other arthropods. The most common procedure is sighting the prey, stalking, fastening a silk safety line to the surface, using the two pairs of back legs to jump on the victim, and finally biting the prey. Most jumping spiders walk throughout the day, so that they maximize their chances of a catch.
Members of the genus Portia have hunting tactics as versatile and adaptable as a lion's. All members of Portia have instinctive tactics for their most common prey, but can improvise by trial and error against unfamiliar prey or in unfamiliar situations, and then remember the new approach. They can also make detours to find the best attack angle against dangerous prey, even when the best detour takes a Portia out of visual contact with the prey, and sometimes the planned route leads to abseiling down a silk thread and biting the prey from behind. Such detours may take up to an hour, and a Portia usually picks the best route even if it needs to walk past an incorrect route. If a Portia makes a mistake while hunting another spider, it may itself be killed.
While most jumping spiders prey mainly on insects and by active hunting, females of Portia also build webs to catch prey directly. These capture webs are funnel-shaped and widest at the top and are about 4,000 cubic centimetres in volume. The web is initially built in about 2 hours, and then gradually made stronger. A Portia often joins her own web on to one of a web-based non-salticid spider. When not joined to another spider's, a P. labiata female's capture web may be suspended from rigid foundations such as boughs and rocks, or from pliant bases such as stems of shrubs.
A web spider's web is an extension of the web spider's senses, informing the spider of vibrations that signal the arrival of prey and predators. If the intruder is another web spider, these vibrations vary widely depending on the new web spider's species, sex and experience. A Portia can pluck another spider's web with a virtually unlimited range of signals, either to lure the prey out into the open or to calm the prey by monotonously repeating the same signal while the Portia walks slowly close enough to bite it. Such tactics enable Portia species to take web spiders, such as Holocnemus pluchei, from 10% to 200% of their size, and they hunt in all types of webs. In contrast, other cursorial spiders generally have difficulty moving on webs, and web-building spiders find it difficult to move in webs unlike those they build: sticky webs adhere to cursorial spiders and to web-builders of non-sticky webs; builders of cribellate webs have difficulty with non-cribellate webs, and vice versa. Where the web is sparse, a Portia will use "rotary probing", in which it moves a free leg around until it meets a thread. When hunting in another spider's web, a Portia′s slow, choppy movement and the flaps on its legs make it resemble leaf detritus caught in the web and blown in a breeze. P. labiata and some other Portia species use breezes and other disturbances as "smokescreens" in which these predators can approach web spiders more quickly, and revert to a more cautious approach when the disturbance disappears. A few web spiders run far away when they sense the un-rhythmical gait of a Portia entering the web – a reaction Wilcox and Jackson call "Portia panic".
If a large insect is struggling in a web, Portia does not usually take the insect, but waits for up to a day until the insect stops struggling, even if the prey is thoroughly stuck. When an insect is stuck in a web owned by P. labiata, P. schultzi or any regional variant of P. fimbriata, and that web lies next to a web spider's web, the web spider sometimes enters the Portia′s web, and the Portia pursues and catches the web spider.
When catching an insect outside a web, a Portia sometimes lunges and sometimes uses a "pick up", in which it moves its fangs slowly into contact with the prey. In some pick ups, Portia first slowly uses its forelegs to manipulate the prey before biting. P. labiata and P. schultzi also occasionally jump on an insect. However, Portia species are not very good at catching moving insects and often ignore them, while some other salticid genera, especially the quick, agile Brettus and Cyrba, perform well against small insects.
When a Portia stalks another jumping spider, the prey generally faces the Portia and then either runs away or displays as it does to another member of its own species.
The webs of spiders on which Portia species prey sometimes contain dead insects and other arthropods which are uneaten or partly eaten. P. labiata and some other Portia species such as P. fimbriata (in Queensland) and P. schultzi sometimes scavenge these corpses if the corpses are not obviously decayed.
A Portia typically takes 3 to 5 minutes to pursue prey, but some pursuits can take much longer, and in extreme cases close to 10 hours when pursuing a web-based spider.
All Portia species eat eggs of other spiders, including eggs of their own species and of other cursorial spiders, and can extract eggs from cases ranging from the flimsy ones of Pholcus to the tough papery ones of Philoponella. While only P. fimbriata (in Queensland) captures cursorial spiders in their nests, all Portia species steal eggs from empty nests of cursorial spiders.
The venom of Portia species is unusually powerful against spiders. When a Portia stabs a small to medium spider (up to the Portia′s weight), including another Portia, the prey usually runs away for about 100 to 200 millimetres, enters convulsions, becomes paralysed after 10 to 30 seconds, and continues convulsing for 10 seconds to 4 minutes. Portia slowly approaches the prey and takes it. Portia usually needs to inflict up to 15 stabbings to completely immobilise a larger spider (1.5 to 2 times the Portia′s weight), and then Portia may wait about 20 to 200 millimetres away for 15 to 30 minutes before seizing the prey. Insects are usually not immobilised so quickly but continue to struggle, sometimes for several minutes. If Portia cannot make further contact, all types of prey usually recover, making sluggish movements several minutes after the stabbing but often starting normal movement only after an hour.
Spiders have a narrow gut that can only cope with liquid food, and have two sets of filters to keep solids out. Some spiders pump digestive enzymes from the midgut into the prey and then suck the liquified tissues of the prey into the gut, eventually leaving behind the empty husk of the prey. Others grind the prey to pulp using the fangs and the bases of the pedipalps, while flooding it with enzymes; in these species the fangs and the bases of the pedipalps form a preoral cavity that holds the food they are processing.
Occasionally a Portia is killed or injured while pursuing prey up to twice Portia′s size. P. labiata is killed in 2.1% of pursuits and injured but not killed in 3.9%, P. schultzi is killed in 1.7% and injured but not killed in 5.3%, and P. fimbriata in Queensland is killed in 0.06% of its pursuits and injured but not killed in another 0.06%. A Portia′s especially tough skin often prevents injury, even when its body is caught in the other spider's fangs. When injured, Portia bleeds and may lose one or more legs. Spiders' palps and legs break off easily when attacked, and those of Portia species break off exceptionally easily, which may be a defence mechanism; Portia are often seen with missing legs or palps, while other salticids in the same habitat are not.
Tactics used by Portia labiata
All performance statistics summarise the results of tests in a laboratory, using captive specimens. The following table shows the hunting performance of adult females. In addition to P. labiata, the table shows for comparison the hunting performances of P. africana, P. schultzi and three regional variants of P. fimbriata.
| Biology and health sciences | Spiders | Animals |
7592567 | https://en.wikipedia.org/wiki/Introduction%20to%20entropy | Introduction to entropy | In thermodynamics, entropy is a numerical quantity that shows that many physical processes can go in only one direction in time. For example, cream and coffee can be mixed together, but cannot be "unmixed"; a piece of wood can be burned, but cannot be "unburned". The word 'entropy' has entered popular usage to refer to a lack of order or predictability, or of a gradual decline into disorder. A more physical interpretation of thermodynamic entropy refers to spread of energy or matter, or to extent and diversity of microscopic motion.
If a movie that shows coffee being mixed or wood being burned is played in reverse, it would depict processes highly improbable in reality. Mixing coffee and burning wood are "irreversible". Irreversibility is described by a law of nature known as the second law of thermodynamics, which states that in an isolated system (a system not connected to any other system) which is undergoing change, entropy increases over time.
Entropy does not increase indefinitely. A body of matter and radiation eventually will reach an unchanging state, with no detectable flows, and is then said to be in a state of thermodynamic equilibrium. Thermodynamic entropy has a definite value for such a body and is at its maximum value. When bodies of matter or radiation, initially in their own states of internal thermodynamic equilibrium, are brought together so as to intimately interact and reach a new joint equilibrium, then their total entropy increases. For example, a glass of warm water with an ice cube in it will have a lower entropy than that same system some time later when the ice has melted leaving a glass of cool water. Such processes are irreversible: A glass of cool water will not spontaneously turn into a glass of warm water with an ice cube in it. Some processes in nature are almost reversible. For example, the orbiting of the planets around the Sun may be thought of as practically reversible: A movie of the planets orbiting the Sun which is run in reverse would not appear to be impossible.
While the second law, and thermodynamics in general, accurately predicts the intimate interactions of complex physical systems, scientists are not content with simply knowing how a system behaves; they also want to know why it behaves the way it does. The question of why entropy increases until equilibrium is reached was answered in 1877 by physicist Ludwig Boltzmann. The theory developed by Boltzmann and others is known as statistical mechanics. Statistical mechanics explains thermodynamics in terms of the statistical behavior of the atoms and molecules which make up the system. The theory not only explains thermodynamics, but also a host of other phenomena which are outside the scope of thermodynamics.
Explanation
Thermodynamic entropy
The concept of thermodynamic entropy arises from the second law of thermodynamics. This law of entropy increase quantifies the reduction in the capacity of an isolated compound thermodynamic system to do thermodynamic work on its surroundings, or indicates whether a thermodynamic process may occur. For example, whenever there is a suitable pathway, heat spontaneously flows from a hotter body to a colder one.
Thermodynamic entropy is measured as a change in entropy (ΔS) to a system containing a sub-system which undergoes heat transfer to its surroundings (inside the system of interest). It is based on the macroscopic relationship between heat flow into the sub-system and the temperature at which it occurs summed over the boundary of that sub-system.
Following the formalism of Clausius, the basic calculation can be mathematically stated as:
ΔS = q/T,
where ΔS is the increase or decrease in entropy, q is the heat added to the system or subtracted from it, and T is temperature. The 'equals' sign and the symbol Δ imply that the heat transfer should be so small and slow that it scarcely changes the temperature T.
If the temperature is allowed to vary, the equation must be integrated over the temperature path. This calculation of entropy change does not allow the determination of absolute value, only differences. In this context, the Second Law of Thermodynamics may be stated as: for heat transferred over any valid process for any system, whether isolated or not, ΔS ≥ q/T.
According to the first law of thermodynamics, which deals with the conservation of energy, the loss of heat will result in a decrease in the internal energy of the thermodynamic system. Thermodynamic entropy provides a comparative measure of the amount of decrease in internal energy and the corresponding increase in internal energy of the surroundings at a given temperature. In many cases, a visualization of the second law is that energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. When applicable, entropy increase is the quantitative measure of that kind of a spontaneous process: how much energy has been effectively lost or become unavailable, by dispersing itself, or spreading itself out, as assessed at a specific temperature. For this assessment, when the temperature is higher, the amount of energy dispersed is assessed as 'costing' proportionately less. This is because a hotter body is generally more able to do thermodynamic work, other factors, such as internal energy, being equal. This is why a steam engine has a hot firebox.
The second law of thermodynamics deals only with changes of entropy (). The absolute entropy (S) of a system may be determined using the third law of thermodynamics, which specifies that the entropy of all perfectly crystalline substances is zero at the absolute zero of temperature. The entropy at another temperature is then equal to the increase in entropy on heating the system reversibly from absolute zero to the temperature of interest.
Statistical mechanics and information entropy
Thermodynamic entropy bears a close relationship to the concept of information entropy (H). Information entropy is a measure of the "spread" of a probability density or probability mass function. Thermodynamics makes no assumptions about the atomistic nature of matter, but when matter is viewed in this way, as a collection of particles constantly moving and exchanging energy with each other, and which may be described in a probabilistic manner, information theory may be successfully applied to explain the results of thermodynamics. The resulting theory is known as statistical mechanics.
An important concept in statistical mechanics is the idea of the microstate and the macrostate of a system. If we have a container of gas, for example, and we know the position and velocity of every molecule in that system, then we know the microstate of that system. If we only know the thermodynamic description of that system, the pressure, volume, temperature, and/or the entropy, then we know the macrostate of that system. Boltzmann realized that there are many different microstates that can yield the same macrostate, and, because the particles are colliding with each other and changing their velocities and positions, the microstate of the gas is always changing. But if the gas is in equilibrium, there seems to be no change in its macroscopic behavior: No changes in pressure, temperature, etc. Statistical mechanics relates the thermodynamic entropy of a macrostate to the number of microstates that could yield that macrostate. In statistical mechanics, the entropy of the system is given by Ludwig Boltzmann's equation:
S = k_B ln W, where S is the thermodynamic entropy, W is the number of microstates that may yield the macrostate, and k_B is the Boltzmann constant. The natural logarithm of the number of microstates (ln W) is known as the information entropy of the system. This can be illustrated by a simple example:
If you flip two coins, you can have four different results. If H is heads and T is tails, we can have (H,H), (H,T), (T,H), and (T,T). We can call each of these a "microstate" for which we know exactly the results of the process. But what if we have less information? Suppose we know only the total number of heads. This can be either 0, 1, or 2. We can call these "macrostates". Only microstate (T,T) will give macrostate zero, (H,T) and (T,H) will give macrostate 1, and only (H,H) will give macrostate 2. So we can say that the information entropy of macrostates 0 and 2 is ln(1), which is zero, but the information entropy of macrostate 1 is ln(2), which is about 0.69. Of all the microstates, macrostate 1 accounts for half of them.
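A minimal sketch of the same counting, using the natural-logarithm convention used in this article:

```python
import math
from collections import Counter
from itertools import product

# Enumerate all microstates of two coin flips and group them by macrostate
# (the total number of heads).
microstates = list(product("HT", repeat=2))
by_macrostate = Counter(state.count("H") for state in microstates)

for heads, count in sorted(by_macrostate.items()):
    # Information entropy of a macrostate = ln(number of microstates in it).
    print(heads, count, math.log(count))
# 0 heads -> 1 microstate, entropy ln(1) = 0
# 1 head  -> 2 microstates, entropy ln(2) ~ 0.69
# 2 heads -> 1 microstate, entropy ln(1) = 0
```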
It turns out that if you flip a large number of coins, the macrostates at or near half heads and half tails account for almost all of the microstates. In other words, for a million coins, you can be fairly sure that about half will be heads and half tails. The macrostates around a 50–50 ratio of heads to tails will be the "equilibrium" macrostate. A real physical system in equilibrium has a huge number of possible microstates and almost all of them belong to the equilibrium macrostate, and that is the macrostate you will almost certainly see if you wait long enough. In the coin example, if you start out with a very unlikely macrostate (like all heads, for example with zero entropy) and begin flipping one coin at a time, the entropy of the macrostate will start increasing, just as thermodynamic entropy does, and after a while, the coins will most likely be at or near that 50–50 macrostate, which has the greatest information entropy – the equilibrium entropy.
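The dominance of near-50–50 macrostates for a large number of coins can be estimated with the normal approximation to the binomial distribution; the sketch below takes one million flips and a window of ±0.1% around exactly half heads (both figures chosen only for illustration):

```python
import math

n = 1_000_000              # number of coin flips
sigma = math.sqrt(n) / 2   # standard deviation of the head count (binomial, p = 1/2)
window = n // 1000         # +/- 0.1% of n around exactly half heads

# Normal approximation to the binomial: fraction of all microstates whose
# head count lies within the window around n/2.
fraction = math.erf(window / (sigma * math.sqrt(2)))
print(fraction)            # ~0.954: nearly all microstates sit close to 50-50
```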
The macrostate of a system is what we know about the system, for example the temperature, pressure, and volume of a gas in a box. For each set of values of temperature, pressure, and volume there are many arrangements of molecules which result in those values. The number of arrangements of molecules which could result in the same values for temperature, pressure and volume is the number of microstates.
The concept of information entropy has been developed to describe any of several phenomena, depending on the field and the context in which it is being used. When it is applied to the problem of a large number of interacting particles, along with some other constraints, like the conservation of energy, and the assumption that all microstates are equally likely, the resultant theory of statistical mechanics is extremely successful in explaining the laws of thermodynamics.
Example of increasing entropy
Ice melting provides an example in which entropy increases in a small system, a thermodynamic system consisting of the surroundings (the warm room) and the entity of glass container, ice and water which has been allowed to reach thermodynamic equilibrium at the melting temperature of ice. In this system, some heat (δQ) from the warmer surroundings at 298 K (25 °C; 77 °F) transfers to the cooler system of ice and water at its constant temperature (T) of 273 K (0 °C; 32 °F), the melting temperature of ice. The entropy of the system, which is δQ/T, increases by δQ/273 K. The heat δQ for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. ΔH for ice fusion.
The entropy of the surrounding room decreases less than the entropy of the ice and water increases: the room temperature of 298 K is larger than 273 K and therefore the ratio (entropy change) of δQ/298 K for the surroundings is smaller than the ratio (entropy change) of δQ/273 K for the ice and water system. This is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than was the initial entropy.
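As a rough worked version of this claim, taking the molar enthalpy of fusion of ice as about 6008 J (the value quoted later in this article) and the temperatures as 273 K and 298 K:

```python
q = 6008.0       # J per mole, heat absorbed by the melting ice (enthalpy of fusion)
T_ice = 273.0    # K, temperature of the ice-and-water system
T_room = 298.0   # K, temperature of the surrounding room

dS_system = +q / T_ice         # ~ +22.0 J/K: the ice and water gain entropy
dS_surroundings = -q / T_room  # ~ -20.2 J/K: the room loses less entropy
print(dS_system + dS_surroundings)  # ~ +1.9 J/K net increase, as the second law requires
```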
As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of δQ/T over the continuous range, "at many increments", in the initially cool to finally warm water can be found by calculus. The entire miniature 'universe', i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that 'universe' than when the glass of ice and water was introduced and became a 'system' within it.
Origins and uses
Originally, entropy was named to describe the "waste heat", or more accurately, energy loss, from heat engines and other mechanical devices which could never run with 100% efficiency in converting energy into work. Later, the term came to acquire several additional descriptions, as more was understood about the behavior of molecules on the microscopic level. In the late 19th century, the word "disorder" was used by Ludwig Boltzmann in developing statistical views of entropy using probability theory to describe the increased molecular movement on the microscopic level. That was before quantum behavior came to be better understood by Werner Heisenberg and those who followed. Descriptions of thermodynamic (heat) entropy on the microscopic level are found in statistical thermodynamics and statistical mechanics.
For most of the 20th century, textbooks tended to describe entropy as "disorder", following Boltzmann's early conceptualisation of the "motional" (i.e. kinetic) energy of molecules. More recently, there has been a trend in chemistry and physics textbooks to describe entropy as energy dispersal. Entropy can also involve the dispersal of particles, which are themselves energetic. Thus there are instances where both particles and energy disperse at different rates when substances are mixed together.
The mathematics developed in statistical thermodynamics were found to be applicable in other disciplines. In particular, information sciences developed the concept of information entropy, which lacks the Boltzmann constant inherent in thermodynamic entropy.
Classical calculation of entropy
When the word 'entropy' was first defined and used in 1865, the very existence of atoms was still controversial, though it had long been speculated that temperature was due to the motion of microscopic constituents and that "heat" was the transferring of that motion from one place to another. Entropy change, ΔS, was described in macroscopic terms that could be directly measured, such as volume, temperature, or pressure. However, today the classical equation of entropy, ΔS = q_rev/T, can be explained, part by part, in modern terms describing how molecules are responsible for what is happening:
ΔS is the change in entropy of a system (some physical substance of interest) after some motional energy ("heat") has been transferred to it by fast-moving molecules. So, ΔS = S_final − S_initial.
Then, ΔS = q_rev/T, the quotient of the motional energy ("heat") q that is transferred "reversibly" (rev) to the system from the surroundings (or from another system in contact with the first system) divided by T, the absolute temperature at which the transfer occurs.
"Reversible" or "reversibly" (rev) simply means that T, the temperature of the system, has to stay (almost) exactly the same while any energy is being transferred to or from it. That is easy in the case of phase changes, where the system absolutely must stay in the solid or liquid form until enough energy is given to it to break bonds between the molecules before it can change to a liquid or a gas. For example, in the melting of ice at 273.15 K, no matter what temperature the surroundings are – from 273.20 K to 500 K or even higher, the temperature of the ice will stay at 273.15 K until the last molecules in the ice are changed to liquid water, i.e., until all the hydrogen bonds between the water molecules in ice are broken and new, less-exactly fixed hydrogen bonds between liquid water molecules are formed. This amount of energy necessary for ice melting per mole has been found to be 6008 joules at 273 K. Therefore, the entropy change per mole is , or 22 J/K.
When the temperature is not at the melting or boiling point of a substance no intermolecular bond-breaking is possible, and so any motional molecular energy ("heat") from the surroundings transferred to a system raises its temperature, making its molecules move faster and faster. As the temperature is constantly rising, there is no longer a particular value of "T" at which energy is transferred. However, a "reversible" energy transfer can be measured at a very small temperature increase, and a cumulative total can be found by adding each of many small temperature intervals or increments. For example, to find the entropy change from 300 K to 310 K, measure the amount of energy transferred at dozens or hundreds of temperature increments, say from 300.00 K to 300.01 K and then 300.01 to 300.02 and so on, dividing the q by each T, and finally adding them all.
Calculus can be used to make this calculation easier if the effect of energy input to the system is linearly dependent on the temperature change, as in simple heating of a system at moderate to relatively high temperatures. Thus, the energy being transferred "per incremental change in temperature" (the heat capacity, C_p), multiplied by the integral of dT/T from T_initial to T_final, is directly given by ΔS = C_p ln(T_final/T_initial).
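A minimal numerical sketch of the incremental summation described above, compared with the closed-form result; the constant heat capacity of about 75.3 J/(mol·K) for liquid water is an assumed illustrative value, not taken from this article:

```python
import math

C = 75.3                 # J/(mol K), assumed constant heat capacity of liquid water
T_i, T_f = 300.0, 310.0  # K, the temperature range quoted above
steps = 1000

# Sum q/T over many small temperature increments (q = C*dT for each step).
dT = (T_f - T_i) / steps
dS_sum = sum(C * dT / (T_i + (k + 0.5) * dT) for k in range(steps))

# Closed-form result from the integral of C*dT/T.
dS_exact = C * math.log(T_f / T_i)
print(dS_sum, dS_exact)  # both ~2.47 J/(mol K)
```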
Alternate explanations of entropy
Thermodynamic entropy
A measure of energy unavailable for work: This is an often-repeated phrase which, although it is true, requires considerable clarification to be understood. It is only true for cyclic reversible processes, and is in this sense misleading. By "work" is meant moving an object, for example, lifting a weight, or bringing a flywheel up to speed, or carrying a load up a hill. To convert heat into work, using a coal-burning steam engine, for example, one must have two systems at different temperatures, and the amount of work you can extract depends on how large the temperature difference is, and how large the systems are. If one of the systems is at room temperature, and the other system is much larger, and near absolute zero temperature, then almost ALL of the energy of the room temperature system can be converted to work. If they are both at the same room temperature, then NONE of the energy of the room temperature system can be converted to work. Entropy is then a measure of how much energy cannot be converted to work, given these conditions. More precisely, for an isolated system comprising two closed systems at different temperatures, in the process of reaching equilibrium the amount of entropy lost by the hot system, multiplied by the temperature of the colder system, is the amount of energy that cannot be converted to work.
An indicator of irreversibility: fitting closely with the 'unavailability of energy' interpretation is the 'irreversibility' interpretation. Spontaneous thermodynamic processes are irreversible, in the sense that they do not spontaneously undo themselves. Thermodynamic processes artificially imposed by agents in the surroundings of a body also have irreversible effects on the body. For example, when James Prescott Joule used a device that delivered a measured amount of mechanical work from the surroundings through a paddle that stirred a body of water, the energy transferred was received by the water as heat. There was scarcely any expansion of the water, so it did almost no thermodynamic work back on the surroundings. The body of water showed no sign of returning the energy by stirring the paddle in reverse. The energy transferred as work appeared in the water as heat, and was not recoverable without a suitably cold reservoir in the surroundings. Entropy gives a precise account of such irreversibility.
Dispersal: Edward A. Guggenheim proposed an ordinary language interpretation of entropy that may be rendered as "dispersal of modes of microscopic motion throughout their accessible range". Later, along with a criticism of the idea of entropy as 'disorder', the dispersal interpretation was advocated by Frank L. Lambert, and is used in some student textbooks.
The interpretation properly refers to dispersal in abstract microstate spaces, but it may be loosely visualised in some simple examples of spatial spread of matter or energy. If a partition is removed from between two different gases, the molecules of each gas spontaneously disperse as widely as possible into their respectively newly accessible volumes; this may be thought of as mixing. If a partition, that blocks heat transfer between two bodies of different temperatures, is removed so that heat can pass between the bodies, then energy spontaneously disperses or spreads as heat from the hotter to the colder.
Beyond such loose visualizations, in a general thermodynamic process, considered microscopically, spontaneous dispersal occurs in abstract microscopic phase space. According to Newton's and other laws of motion, phase space provides a systematic scheme for the description of the diversity of microscopic motion that occurs in bodies of matter and radiation. The second law of thermodynamics may be regarded as quantitatively accounting for the intimate interactions, dispersal, or mingling of such microscopic motions. In other words, entropy may be regarded as measuring the extent of diversity of motions of microscopic constituents of bodies of matter and radiation in their own states of internal thermodynamic equilibrium.
Information entropy and statistical mechanics
As a measure of disorder: Traditionally, 20th century textbooks have introduced entropy as order and disorder so that it provides "a measurement of the disorder or randomness of a system". It has been argued that ambiguities in, and arbitrary interpretations of, the terms used (such as "disorder" and "chaos") contribute to widespread confusion and can hinder comprehension of entropy for most students. On the other hand, in a convenient though arbitrary interpretation, "disorder" may be sharply defined as the Shannon entropy of the probability distribution of microstates given a particular macrostate, in which case the connection of "disorder" to thermodynamic entropy is straightforward, but arbitrary and not immediately obvious to anyone unfamiliar with information theory.
Missing information: The idea that information entropy is a measure of how much one does not know about a system is quite useful.
If, instead of using the natural logarithm to define information entropy, we use the base 2 logarithm, then the information entropy is roughly equal to the average number of (carefully chosen) yes/no questions that would have to be asked to get complete information about the system under study. In the introductory example of two flipped coins, for the macrostate which contains one head and one tail, one would need only one question to determine its exact state (e.g. "is the first one heads?"), and instead of expressing the entropy as ln(2) one could say, equivalently, that it is log2(2), which equals the number of binary questions we would need to ask: one. When measuring entropy using the natural logarithm (ln), the unit of information entropy is called a "nat", but when it is measured using the base-2 logarithm, the unit of information entropy is called a "shannon" (alternatively, "bit"). This is just a difference in units, much like the difference between inches and centimeters (1 nat = log2e ≈ 1.44 shannons). Thermodynamic entropy is equal to the Boltzmann constant times the information entropy expressed in nats. The information entropy expressed with the unit shannon (Sh) is equal to the number of yes–no questions that need to be answered in order to determine the microstate from the macrostate.
The concepts of "disorder" and "spreading" can be analyzed with this information entropy concept in mind. For example, if we take a new deck of cards out of the box, it is arranged in "perfect order" (spades, hearts, diamonds, clubs, each suit beginning with the ace and ending with the king), we may say that we then have an "ordered" deck with an information entropy of zero. If we thoroughly shuffle the deck, the information entropy will be about 225.6 shannons: We will need to ask about 225.6 questions, on average, to determine the exact order of the shuffled deck. We can also say that the shuffled deck has become completely "disordered" or that the ordered cards have been "spread" throughout the deck. But information entropy does not say that the deck needs to be ordered in any particular way. If we take our shuffled deck and write down the names of the cards, in order, then the information entropy becomes zero. If we again shuffle the deck, the information entropy would again be about 225.6 shannons, even if by some miracle it reshuffled to the same order as when it came out of the box, because even if it did, we would not know that. So the concept of "disorder" is useful if, by order, we mean maximal knowledge and by disorder we mean maximal lack of knowledge. The "spreading" concept is useful because it gives a feeling to what happens to the cards when they are shuffled. The probability of a card being in a particular place in an ordered deck is either 0 or 1, in a shuffled deck it is 1/52. The probability has "spread out" over the entire deck. Analogously, in a physical system, entropy is generally associated with a "spreading out" of mass or energy.
The connection between thermodynamic entropy and information entropy is given by Boltzmann's equation, which says that S = k_B ln W. If we take the base-2 logarithm of W, it will yield the average number of yes–no questions we must ask in order to determine the microstate of the physical system, given its macrostate.
| Physical sciences | Thermodynamics | Physics |
1101085 | https://en.wikipedia.org/wiki/Orangery | Orangery | An orangery or orangerie is a room or dedicated building, historically where orange and other fruit trees are protected during the winter, as a large form of greenhouse or conservatory. (Gervase Markham, in The Whole Art of Husbandry (London 1631), also recommends protecting other delicate fruiting trees— "Orange, Lemon, Pomegranate, Cynamon, Olive, Almond"— in "some low vaulted gallerie adjoining upon the Garden".) In the modern day an orangery could refer to either a conservatory or greenhouse built to house fruit trees, or a conservatory or greenhouse meant for another purpose.
The orangery provided a luxurious extension of the normal range and season of woody plants, extending the protection which had long been afforded by the warmth offered from a masonry fruit wall. During the 17th century, fruits like orange, pomegranate, and bananas arrived in huge quantities to European ports. Since these plants were not adapted to the harsh European winters, orangeries were invented to protect and sustain them. The high cost of glass made orangeries a status symbol showing wealth and luxury. Gradually, due to technological advancements, orangeries became more of a classic architectural structure that enhanced the beauty of an estate garden, rather than a room used for wintering plants.
The orangery originated from the Renaissance gardens of Italy, when glass-making technology enabled sufficient expanses of clear glass to be produced. In the north, the Dutch led the way in developing expanses of window glass in orangeries, although the engravings illustrating Dutch manuals showed solid roofs, whether beamed or vaulted, and in providing stove heat rather than open fires. This soon created a situation where orangeries became symbols of status among the wealthy. The glazed roof, which afforded sunlight to plants that were not dormant, was a development of the early 19th century. The orangery at Dyrham Park, Gloucestershire, which had been provided with a slate roof as originally built about 1702, was given a glazed one about a hundred years later, after Humphry Repton remarked that it was dark; although it was built to shelter oranges, it has always simply been called the "greenhouse" in modern times.
The 1617 Orangerie (now Musée de l'Orangerie) at the Palace of the Louvre inspired imitations that culminated in Europe's largest orangery, the Versailles Orangerie. Designed by Jules Hardouin-Mansart for Louis XIV's 3,000 orange trees at Versailles, its dimensions were not eclipsed until the development of the modern greenhouse in the 1840s, and were quickly overshadowed by the glass architecture of Joseph Paxton, the designer of the 1851 Crystal Palace. His "great conservatory" at Chatsworth House was an orangery and glass house of monumental proportions.
The orangery, however, was not just a greenhouse but a symbol of prestige and wealth and a garden feature, in the same way as a summerhouse, folly, or "Grecian temple". Owners would conduct their guests there on tours of the garden to admire not only the fruits within but also the architecture outside. Often the orangery would contain fountains, grottos, and an area in which to entertain in inclement weather.
Earliest examples
As early as 1545, an orangery was built in Padua, Italy. The first orangeries were practical and not as ornamental as they later became. Most had no heating other than open fires.
In England, John Parkinson introduced the orangery to the readers of his Paradisus in Sole (1628), under the heading "Oranges". The trees might be planted against a brick wall and enclosed in winter with a plank shed covered with "cerecloth", a waxed precursor of tarpaulin, which must have been thought handsomer than the alternative: "For that purpose, some keep them in great square boxes, and lift them to and fro by iron hooks on the sides, or cause them to be rowled by trundels, or small wheeles under them, to place them in a house or close gallery."
The building of orangeries became most widely fashionable after the end of the Eighty Years' War in 1648. The trend began in France, Germany, and the Netherlands, where merchants had started importing large numbers of orange trees, banana plants, and pomegranates to cultivate for their beauty and scent.
Construction materials
Orangeries were generally built facing south to take advantage of the maximum possible light, and were constructed using brick or stone bases, brick or stone pillars, and a corbel gutter. They also featured large, tall windows to maximise available sunlight in the afternoons, with the north-facing walls built without windows in very heavy solid brick, or occasionally with much smaller windows, to keep the rooms warm. Insulation was one of the biggest concerns in building these orangeries: straw became the main insulating material, and many had wooden shutters fitted to keep in the warmth. An early example of the type of construction can be seen at Kensington Palace, which also featured underfloor heating.
Contemporary domestic orangeries are also typically built using stone, brick, and hardwood, but developments in glass, other materials, and insulation technologies have produced viable alternatives to traditional construction. The main difference with a conservatory is in the construction of its roof – a conservatory will have more than 75 per cent of its roof glazed, while an orangery will have less than 75 per cent glazed. Domestic orangeries also typically feature a roof lantern. Improved design and insulation has also led to an increasing number of orangeries that are not built facing south, instead using light maximising techniques to make the most of available natural sunlight.
Early orangeries
The first examples were basic constructions and could be removed during summer. Notably not only noblemen but also wealthy merchants, e.g., those of Nuremberg, used to cultivate citrus plants in orangeries. Some orangeries were built using the garden wall as the main wall of the new orangery, but as orangeries grew in popularity they came increasingly under the influence of garden designers and architects, which linked the design of the house to that of the orangery. This was further encouraged by the increased demand for beautiful exotic plants in the garden, which could be grown and looked after in the orangeries.
This increased the demand among the wealthy for exotic private gardens of their own, further cementing the orangery's status as a symbol of the elite. It in turn called for better construction techniques, such as underfloor heating and opening windows in the roofs for ventilation, creating microclimates for the propagation of ever more exotic plants for the private gardens that were becoming creations of beauty all around Europe.
Continental Europe
Austria
Belvedere, Vienna
Schönbrunn, Vienna
France
Versailles Orangerie, in the gardens of the Palace of Versailles
Strasbourg, park of the Orangerie
Tuileries: Orangerie in the Tuileries Gardens, Paris
Belgium
Freÿr, Orangerie of the Château de Freÿr, the collection includes some of the oldest citrus trees kept in containers, dating back to around 1700.
Laeken, Orangerie of the Royal Castle of Laeken (ca. 1820), an exceptional collection of very tall and old citrus trees.
Mariemont, Orangerie of the Domaine de Mariemont (ca. 1850 in its present form)
Seneffe, Orangerie of the Château de Seneffe (ca. 1765)
Germany
Darmstadt, Orangerie
Düsseldorf-Benrath, Orangerie
Fulda, Orangerie
Gera, Orangery and "Küchengarten"
Hanover, a part of the Herrenhausen Gardens
Ingolstadt, Orangerie in Harderstraße 10
Kassel, Orangerie
Oldenburg, Cactus House
Philippsthal, Orangerie
Potsdam, Orangery Palace
Schwerin, Schwerin Castle, Orangerie
Weimar, Belvedere Orangerie
Wertheim am Main, Bronnbach abbey
Italy
Palace of Venaria, Citroneria (en: Orangery, built by Filippo Juvarra)
Poland
Warsaw, Stara Pomarańczarnia (en: Old Orangery; built 1786–1788) and Nowa Pomarańczarnia (en: New Orangery; built 1860) at the Royal Łazienki Park
Russia
Peterhof, Bolshaya Kamennya Oranzhereya
Tsarskoe Selo, Bolshaya Oranzhereya (1762, 1820)
Kuskovo, Moscow, Oranzhereya (illustration, right)
Sweden
Linneanum, Botaniska trädgården (Uppsala) – The Orangery, Botanical Garden, Uppsala University 1787
Linnéträdgården, Uppsala 1655
Finspång Castle Orangerie 1832
Nynäs Slott, Manorial Estate (Castle) and Orangery, Nynäs
Bergianska trädgården, Stockholm, gamla orangeriet, now used as a restaurant
Great Britain and Ireland
The orangery built adjacent to Kensington Palace, believed to be designed by Nicholas Hawksmoor, was constructed between 1704 and 1705.
The orangery at the Royal Botanic Gardens, Kew, was designed in 1761 by Sir William Chambers and at one time was the largest glasshouse in England.
The orangery at Margam Park, Wales, was built between 1787 and 1793 to house a large collection of orange, lemon, and citron trees inherited by Thomas Mansel Talbot. The original house has been razed, but the surviving orangery is the longest one in Wales.
An orangery dating from about 1700 is at Kenwood House in London, and a slightly earlier one at Montacute. Other orangeries in the hands of the National Trust include:
Ham House, Richmond, Surrey, in brick, a somewhat less fancy building than others, placed at the end of the walled kitchen garden.
Hanbury Hall, Worcestershire
Croome Court, called the "Temple Greenhouse"; an elaborate Roman temple facade designed by Robert Adam in 1761.
Ickworth House, Suffolk, where it forms part of the garden front of the dwelling wings
Powis Castle, Montgomeryshire, a central feature on the late-18th-century terraces
Saltram House, Devon, probably to a Robert Adam design
Seaton Delaval Hall, Northumberland
Blickling, Norfolk
Gibside, near Newcastle-on-Tyne, now a ruined shell
In 1970, Victor Montagu constructed an orangery in his formal Italianate gardens at Mapperton, Dorset.
A mid-19th-century orangery at Norton Hall in Sheffield, England, has been converted to apartments.
In Ireland, orangeries were built at Killruddery House and Loughcrew House.
United States
18th century
In the United States, the earliest partially intact surviving orangery is at the Tayloe Family Seat, Mount Airy, but today is an overgrown ruin, consisting only of one major wall and portions of the others' foundations. A ruined orangery can also be seen in the gardens of Eyre Hall in Northampton County, Virginia.
The oldest-known extant orangery in America can be seen at the Wye House, near Tunis Mills (Easton), Maryland (Historic American Buildings Survey: Wye House, Orangery, Bruffs Island Road, Tunis Mills, Talbot County, MD, 1936). The builder, Edward Lloyd IV, had married Elizabeth Tayloe, the daughter of John Tayloe II, builder of the aforementioned Mount Airy. This orangery sits behind the main house and consists of a large open room with two smaller wings added at some point after the initial construction. The south-facing wall consists of large triple-hung windows. A second story was traditionally part of the style of orangeries at the time of its construction in the middle to late 18th century as a way of further insulating the main section where the plants were kept. According to the current resident, Ms. Tilghman (a descendant of the Lloyd family), it served as a billiards room for the family. This plantation is also notable as having been the home of Frederick Douglass as a young slave boy.
George Washington designed and constructed an orangery for his home at Mount Vernon, Virginia. It was designed in the Georgian Style of architecture and stands just north of the mansion facing the upper garden. Completed in 1787, it is one of the largest buildings on the Mount Vernon estate. Washington grew lemon and orange trees and sago palms there. Considered an ambitious structure by his contemporaries, the main room featured a vaulted ceiling for air circulation, and incorporated radiant heating from a series of flues under the floor. The original greenhouse burned in 1835, but was rebuilt on the same site in 1951 using original plans.
19th century
The Dumbarton Oaks estate in Washington, D.C., includes an orangery built in 1810 that is now used to house gardenias, oleander, and citrus plants during the winter.
Another orangery stands at Hampton National Historic Site near Towson, Maryland. Originally built in 1820, it was part of one of the most extensive collections of citrus trees in the U.S. by the mid-19th century. The current structure is a reconstruction built in the 1970s to replace the original, which burned in 1926.
The orangery at the Battersea Historic Site in Petersburg, Virginia, is currently under restoration. Originally built between 1823 and 1841, it was converted into a garage in a later period.
In the late 19th century, Florence Vanderbilt and husband Hamilton Twombly built an orangerie on their estate, Florham, designed by architects McKim, Mead & White. It is now on the Florham Campus of Fairleigh Dickinson University.
20th century
An 18th-century style orangery was built in the 1980s at the Tower Hill Botanic Garden in Boylston, Massachusetts.
| Technology | Buildings and infrastructure | null |
1101364 | https://en.wikipedia.org/wiki/Gaussian%20units | Gaussian units | Gaussian units constitute a metric system of units of measurement. This system is the most common of the several electromagnetic unit systems based on the centimetre–gram–second system of units (CGS). It is also called the Gaussian unit system, Gaussian-cgs units, or often just cgs units. The term "cgs units" is ambiguous and therefore to be avoided if possible: there are several variants of CGS, which have conflicting definitions of electromagnetic quantities and units.
SI units predominate in most fields, and continue to increase in popularity at the expense of Gaussian units. Alternative unit systems also exist. Conversions between quantities in the Gaussian and SI systems are not direct unit conversions, because the quantities themselves are defined differently in each system. This means that the equations that express physical laws of electromagnetism—such as Maxwell's equations—will change depending on the system of quantities that is employed. As an example, quantities that are dimensionless in one system may have dimension in the other.
Alternative unit systems
The Gaussian unit system is just one of several electromagnetic unit systems within CGS. Others include "electrostatic units", "electromagnetic units", and Heaviside–Lorentz units.
Some other unit systems are called "natural units", a category that includes atomic units, Planck units, and others.
The International System of Units (SI), with the associated International System of Quantities (ISQ), is by far the most common system of units today. In engineering and practical areas, SI is nearly universal and has been for decades. In technical, scientific literature (such as theoretical physics and astronomy), Gaussian units were predominant until recent decades, but their use is now progressively declining. The 8th SI Brochure mentions the CGS-Gaussian unit system, but the 9th SI Brochure makes no mention of CGS systems.
Natural units may be used in more theoretical and abstract fields of physics, particularly particle physics and string theory.
Major differences between Gaussian and SI systems
"Rationalized" unit systems
One difference between the Gaussian and SI systems is in the factor of 4π in various formulas that relate the quantities that they define. With SI electromagnetic units, called rationalized, Maxwell's equations have no explicit factors of 4π in the formulae, whereas the inverse-square force laws – Coulomb's law and the Biot–Savart law – have a factor of 4π attached to the r². With Gaussian units, called unrationalized (and unlike Heaviside–Lorentz units), the situation is reversed: two of Maxwell's equations have factors of 4π in the formulas, while both of the inverse-square force laws, Coulomb's law and the Biot–Savart law, have no factor of 4π attached to r² in the denominator.
(The quantity 4π appears because 4πr² is the surface area of the sphere of radius r, which reflects the geometry of the configuration. For details, see the articles Relation between Gauss's law and Coulomb's law and Inverse-square law.)
Unit of charge
A major difference between the Gaussian system and the ISQ is in the respective definitions of the quantity charge. In the ISQ, a separate base dimension, electric current, with the associated SI unit, the ampere, is associated with electromagnetic phenomena, with the consequence that a unit of electrical charge (1 coulomb = 1 ampere × 1 second) is a physical quantity that cannot be expressed purely in terms of the mechanical units (kilogram, metre, second). On the other hand, in the Gaussian system, the unit of electric charge (the statcoulomb, statC) can be written entirely as a dimensional combination of the non-electrical base units (gram, centimetre, second), as:
1 statC = 1 g^(1/2)·cm^(3/2)·s^(−1)
For example, Coulomb's law in Gaussian units has no constant:
F = q₁q₂/r²,
where F is the repulsive force between two electrical charges, q₁ and q₂ are the two charges in question, and r is the distance separating them. If q₁ and q₂ are expressed in statC and r in centimetres, then the unit of F that is coherent with these units is the dyne.
The same law in the ISQ is:
F = q₁q₂/(4πε₀r²),
where ε₀ is the vacuum permittivity, a quantity that is not dimensionless: it has dimension (charge)² (time)² (mass)⁻¹ (length)⁻³. Without ε₀, the equation would be dimensionally inconsistent with the quantities as defined in the ISQ, whereas the quantity ε₀ does not appear in Gaussian equations. This is an example of how some dimensional physical constants can be eliminated from the expressions of physical law by the choice of definition of quantities. In the ISQ, ε₀ converts or scales electric flux density, D, to the corresponding electric field, E (the latter has dimension of force per charge), while in the Gaussian system, electric flux density is the same quantity as electric field strength in free space aside from a dimensionless constant factor.
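As a numerical cross-check of the two forms of Coulomb's law above: two charges of 1 statC each, 1 cm apart, repel with a force of 1 dyne in the Gaussian form, and the ISQ form gives the same force expressed in newtons. The conversion factor for the statcoulomb and the value of ε₀ used below are standard reference values, not taken from this article:

```python
import math

# Gaussian form: F = q1*q2 / r**2, with q in statC, r in cm, F in dyne.
q_statC, r_cm = 1.0, 1.0
F_dyne = q_statC * q_statC / r_cm**2           # 1 dyne

# ISQ form: F = q1*q2 / (4*pi*eps0*r**2), with q in C, r in m, F in N.
eps0 = 8.8541878128e-12        # F/m, vacuum permittivity (standard value)
statC_to_C = 3.335641e-10      # 1 statC in coulombs (approximate standard value)
q_C, r_m = q_statC * statC_to_C, r_cm / 100.0
F_newton = q_C * q_C / (4 * math.pi * eps0 * r_m**2)

print(F_dyne, F_newton)        # 1 dyne vs ~1.0e-5 N, and 1 dyne = 1e-5 N exactly
```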
In the Gaussian system, the speed of light c appears directly in electromagnetic formulas like Maxwell's equations (see below), whereas in the ISQ it appears via the product ε0μ0.
Units for magnetism
In the Gaussian system, unlike the ISQ, the electric field E and the magnetic field B have the same dimension. This amounts to a factor of c between how B is defined in the two unit systems, on top of the other differences. (The same factor applies to other magnetic quantities such as the magnetic field, H, and magnetization, M.) For example, in a planar light wave in vacuum, |E| = |B| in Gaussian units, while |E| = c|B| in the ISQ.
Polarization, magnetization
There are further differences between the Gaussian system and the ISQ in how quantities related to polarization and magnetization are defined. For one thing, in the Gaussian system, all of the following quantities have the same dimension: E, D, P, B, H, and M. A further point is that the electric and magnetic susceptibility of a material is dimensionless in both the Gaussian system and the ISQ, but a given material will have a different numerical susceptibility in the two systems. (The relevant equations are given below.)
List of equations
This section has a list of the basic formulae of electromagnetism, given in both the Gaussian system and the International System of Quantities (ISQ). Most symbol names are not given; for complete explanations and definitions, please refer to the appropriate dedicated article for each equation. A simple conversion scheme for use when tables are not available may be found in Garg (2012).
All formulas, except where otherwise noted, are from the reference cited.
Maxwell's equations
Here are Maxwell's equations, both in macroscopic and microscopic forms. Only the "differential form" of the equations is given, not the "integral form"; to get the integral forms apply the divergence theorem or Kelvin–Stokes theorem.
Other basic laws
Dielectric and magnetic materials
Below are the expressions for the various fields in a dielectric medium. It is assumed here for simplicity that the medium is homogeneous, linear, isotropic, and nondispersive, so that the permittivity is a simple constant.
where
E and D are the electric field and displacement field, respectively;
P is the polarization density;
ε is the permittivity;
ε0 is the permittivity of vacuum (used in the SI system, but meaningless in Gaussian units); and
χe is the electric susceptibility.
The quantities ε (Gaussian) and ε/ε0 (ISQ), the relative permittivity, are both dimensionless, and they have the same numeric value. By contrast, the electric susceptibilities χe (Gaussian) and χe (ISQ) are both unitless, but have different values for the same material:
4π χe(Gaussian) = χe(ISQ)
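As a numerical illustration of this relation (the value of roughly 80 for the relative permittivity of water at room temperature is a typical textbook figure assumed here for the sketch, not a value taken from this article):

```latex
% Same material, two unit systems:
\varepsilon_r \approx 80
\;\Rightarrow\;
\chi_e^{\mathrm{ISQ}} = \varepsilon_r - 1 \approx 79,
\qquad
\chi_e^{\mathrm{Gaussian}} = \frac{\chi_e^{\mathrm{ISQ}}}{4\pi} \approx 6.3 .
```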
Next, here are the expressions for the various fields in a magnetic medium. Again, it is assumed that the medium is homogeneous, linear, isotropic, and nondispersive, so that the permeability is a simple constant.
where
B and H are the magnetic fields;
M is the magnetization;
μ is the magnetic permeability;
μ0 is the permeability of vacuum (used in the SI system, but meaningless in Gaussian units); and
χm is the magnetic susceptibility.
The quantities μ (Gaussian) and μ/μ0 (ISQ), the relative permeability, are both dimensionless, and they have the same numeric value. By contrast, the magnetic susceptibilities χm (Gaussian) and χm (ISQ) are both unitless, but have different values in the two systems for the same material:
4π χm(Gaussian) = χm(ISQ)
Vector and scalar potentials
The electric and magnetic fields can be written in terms of a vector potential A and a scalar potential φ. In the Gaussian system, B = ∇ × A and E = −∇φ − (1/c) ∂A/∂t, whereas in the ISQ, B = ∇ × A and E = −∇φ − ∂A/∂t.
Electrical circuit
where
q is the electric charge
I is the electric current
V is the electric potential
Φ is the magnetic flux
R is the electrical resistance
C is the capacitance
L is the inductance
Fundamental constants
Electromagnetic unit names
Note: The SI quantities ε0 and μ0 satisfy ε0μ0 = 1/c^2.
The conversion factors are written both symbolically and numerically. The numerical conversion factors can be derived from the symbolic conversion factors by dimensional analysis. For example, the top row says that 1 coulomb corresponds to approximately 3.00×10^9 statcoulombs, a relation which can be verified with dimensional analysis, by expanding ε0 and coulombs (C) in SI base units, and expanding statcoulombs (or franklins, Fr) in Gaussian base units.
It is surprising to think of measuring capacitance in centimetres. One useful example is that a centimetre of capacitance is the capacitance between a sphere of radius 1 cm in vacuum and infinity.
Another surprising unit is measuring resistivity in units of seconds. A physical example is: Take a parallel-plate capacitor, which has a "leaky" dielectric with permittivity 1 but a finite resistivity. After charging it up, the capacitor will discharge itself over time, due to current leaking through the dielectric. If the resistivity of the dielectric is X seconds, the half-life of the discharge is roughly 0.05X seconds. This result is independent of the size, shape, and charge of the capacitor, and therefore this example illuminates the fundamental connection between resistivity and time units.
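A short derivation sketch, in Gaussian units, makes the time scale explicit; the symbols σ (surface charge density on the plates), E (field in the dielectric) and ρ (resistivity, in seconds) are introduced here only for this sketch:

```latex
% Leaky parallel-plate capacitor in Gaussian units:
% Ohm's law in the dielectric gives J = E/\rho, and Gauss's law gives E = 4\pi\sigma.
\frac{d\sigma}{dt} = -J = -\frac{4\pi\sigma}{\rho}
\quad\Longrightarrow\quad
\sigma(t) = \sigma(0)\, e^{-4\pi t/\rho},
\qquad
t_{1/2} = \frac{\rho \ln 2}{4\pi} \approx 0.055\,\rho .
```

Since ρ carries units of seconds, the half-life comes out directly in seconds, consistent with the figure quoted above.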
Dimensionally equivalent units
A number of the units defined by the table have different names but are in fact dimensionally equivalent – i.e., they have the same expression in terms of the base units cm, g, s. (This is analogous to the distinction in SI between newton-metre and joule.) The different names help avoid ambiguities and misunderstandings as to what physical quantity is being measured. In particular, all of the following quantities are dimensionally equivalent in Gaussian units, but they are nevertheless given different unit names as follows:
General rules to translate a formula
Any formula can be converted between Gaussian and SI units by using the symbolic conversion factors from Table 1 above.
For example, the electric field of a stationary point charge has the ISQ formula
E(SI) = q(SI) / (4πε0 r^2),
where r is distance, and the "(SI)" superscript indicates that the electric field and charge are defined as in the ISQ. If we want the formula to instead use the Gaussian definitions of electric field and charge, we look up how these are related using Table 1, which says:
E(SI) = E(G) / √(4πε0)   and   q(SI) = √(4πε0) · q(G).
Therefore, after substituting and simplifying, we get the Gaussian-system formula:
E(G) = q(G) / r^2,
which is the correct Gaussian-system formula, as mentioned in a previous section.
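The same substitution can be carried out symbolically by machine. The following minimal sketch uses the SymPy library; the variable names encode the Table 1 relations used in the worked example above, and the expected printout is an assumption of this sketch rather than output quoted from any source:

```python
import sympy as sp

# Gaussian-system quantities (symbol names chosen for this sketch)
E_G, q_G, r, eps0 = sp.symbols('E_G q_G r varepsilon_0', positive=True)

# Table 1 relations: each SI quantity written in terms of its Gaussian counterpart
E_SI = E_G / sp.sqrt(4 * sp.pi * eps0)        # E(SI) = E(G) / sqrt(4*pi*eps0)
q_SI = sp.sqrt(4 * sp.pi * eps0) * q_G        # q(SI) = sqrt(4*pi*eps0) * q(G)

# Start from the ISQ point-charge formula and solve for the Gaussian-system field
isq_formula = sp.Eq(E_SI, q_SI / (4 * sp.pi * eps0 * r**2))
gaussian_field = sp.simplify(sp.solve(isq_formula, E_G)[0])
print(gaussian_field)   # expected: q_G/r**2
```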
For convenience, the table below has a compilation of the symbolic conversion factors from Table 1. To convert any formula from the Gaussian system to the ISQ using this table, replace each symbol in the Gaussian column by the corresponding expression in the SI column (vice versa to convert the other way). Replace c by 1/√(ε0μ0) (or vice versa). This will reproduce any of the specific formulas given in the list above, such as Maxwell's equations, as well as any other formula not listed.
After the rules of the table have been applied and the resulting formula has been simplified, replace all combinations ε0μ0 by 1/c^2.
| Physical sciences | Measurement systems | Basics and measurement |
1101849 | https://en.wikipedia.org/wiki/Cyclic%20voltammetry | Cyclic voltammetry | In electrochemistry, cyclic voltammetry (CV) is a type of voltammetric measurement where the potential of the working electrode is ramped linearly versus time. Unlike in linear sweep voltammetry, after the set potential is reached in a CV experiment, the working electrode's potential is ramped in the opposite direction to return to the initial potential. These cycles in potential are repeated until the voltammetric trace reaches a cyclic steady state. The current at the working electrode is plotted versus the voltage at the working electrode to yield the cyclic voltammogram (see Figure 1). Cyclic voltammetry is generally used to study the electrochemical properties of an analyte in solution or of a molecule that is adsorbed onto the electrode.
Experimental method
In cyclic voltammetry (CV), the electrode potential is ramped linearly versus time in cyclical phases (blue trace in Figure 2). The rate of voltage change over time during each of these phases is known as the scan rate (V/s). In a standard three-electrode cell, the potential is measured between the working electrode and the reference electrode, while the current is measured between the working electrode and the counter electrode. These data are plotted as current density (j, mA/cm2) versus potential (typically corrected for Ohmic/iR drop) (E, V). In Figure 2, during the initial forward scan from t0 to t1, an increasingly oxidative (positive) potential is applied, and the anodic (positive) current increases over this time period due to the charging of the electric double layer. The spike in anodic (positive) current observed between t0 and t1 is due to the oxidation of the analyte in the solution when the correct potential is reached. The current decreases after the initial spike as the concentration of oxidizable analyte is depleted near the surface of the working electrode due to mass transport limitations. If the redox couple is reversible, then during the reverse scan (from t1 to t2), the oxidized analyte will start to be re-reduced, giving rise to a cathodic current of opposite polarity. The more reversible the redox couple is, the more similar the oxidation peak will be in shape to the reduction peak. Hence, CV data can provide information about redox potentials and electrochemical reaction rates.
For instance, if the electron transfer at the working electrode surface is fast and the current is limited by the diffusion of analyte species to the electrode surface, then the peak current will be proportional to the square root of the scan rate. This relationship is described by the Randles–Sevcik equation. In this situation, the CV experiment only samples a small portion of the solution, i.e., the diffusion layer at the electrode surface.
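As a rough numerical illustration of this scan-rate dependence, the sketch below evaluates the Randles–Sevcik expression for a reversible, diffusion-limited couple at 25 °C; every input value (electrode area, diffusion coefficient, concentration, scan rate) is a hypothetical figure chosen for the example rather than data from this article:

```python
import math

# Hypothetical inputs for illustration only
n = 1            # electrons transferred per molecule
F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K
A = 0.071        # electrode area, cm^2 (roughly a 3 mm diameter disk)
D = 1.0e-5       # diffusion coefficient, cm^2/s
C = 1.0e-6       # bulk concentration, mol/cm^3 (i.e. 1 mM)
v = 0.1          # scan rate, V/s

# Randles-Sevcik equation: peak current for a reversible, diffusion-controlled couple
i_p = 0.4463 * n * F * A * C * math.sqrt(n * F * v * D / (R * T))
print(f"peak current ~ {i_p * 1e6:.0f} microamps")  # ~19 microamps for these inputs

# Quadrupling the scan rate should double the peak current (square-root scaling).
```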
Characterization
The utility of cyclic voltammetry is highly dependent on the analyte being studied. The analyte has to be redox active within the potential window to be scanned.
The analyte is in solution
Reversible couples
Often the analyte displays a reversible CV wave (such as that depicted in Figure 1), which is observed when all of the initial analyte can be recovered after a forward and reverse scan cycle. Although such reversible couples are simpler to analyze, they contain less information than more complex waveforms.
The waveform of even reversible couples is complex owing to the combined effects of polarization and diffusion. The difference between the two peak potentials (Ep), ΔEp, is of particular interest.
ΔEp = Epa - Epc > 0
This difference mainly results from the effects of analyte diffusion rates. In the ideal case of a reversible 1e- couple (i.e., Nernstian), ΔEp is 57 mV and the full-width half-max of the forward scan peak is 59 mV. Typical values observed experimentally are greater, often approaching 70 or 80 mV. The waveform is also affected by the rate of electron transfer, usually discussed as the activation barrier for electron transfer. A theoretical description of polarization overpotential is in part described by the Butler–Volmer equation and Cottrell equation. In an ideal system the relationship reduces to ΔEp ≈ 57 mV/n for an n-electron process.
Focusing on current, reversible couples are characterized by ipa/ipc = 1.
When a reversible peak is observed, thermodynamic information in the form of a half cell potential E01/2 can be determined. When waves are semi-reversible (ipa/ipc is close but not equal to 1), it may be possible to determine even more specific information (see electrochemical reaction mechanism).
The current maxima for oxidation and reduction themselves depend on the scan rate, as shown in the figure.
To study the nature of the electrochemical reaction mechanism it is useful to perform a power fit of the peak current against the scan rate, of the form ip = a·v^b.
A fit with exponent b = 0.5, as in the figure, shows the proportionality of the peak currents to the square root of the scan rate.
This leads to the so-called Randles–Sevcik equation, and the rate-determining step of this electrochemical redox reaction can be assigned to diffusion.
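A minimal sketch of such a power fit is given below; the scan rates and peak currents are illustrative numbers chosen to follow a square-root law approximately, not measurements from this article:

```python
import numpy as np

# Illustrative scan rates (V/s) and peak currents (microamps), roughly following i_p ~ sqrt(v)
v = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.50])
i_p = np.array([6.1, 8.6, 13.6, 19.2, 27.2, 43.0])

# Fit i_p = a * v**b as a straight line in log-log coordinates
b, log_a = np.polyfit(np.log(v), np.log(i_p), 1)
print(f"exponent b = {b:.2f}")        # b close to 0.5 points to diffusion control
print(f"prefactor a = {np.exp(log_a):.1f}")
```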
Nonreversible couples
Many redox processes observed by CV are quasi-reversible or non-reversible. In such cases the thermodynamic potential E01/2 is often deduced by simulation. The irreversibility is indicated by ipa/ipc ≠ 1. Deviations from unity are attributable to a subsequent chemical reaction that is triggered by the electron transfer. Such EC processes can be complex, involving isomerization, dissociation, association, etc.
The analyte is adsorbed onto the electrode surface
Adsorbed species give simple voltammetric responses: ideally, at slow scan rates, there is no peak separation, the peak width is 90 mV for a one-electron redox couple, and the peak current and peak area are proportional to scan rate (observing that the peak current is proportional to scan rate proves that the redox species that gives the peak is actually immobilised). The effect of increasing the scan rate can be used to measure the rate of interfacial electron transfer and/or the rates of reactions that are coupled to the electron transfer. This technique has been useful to study redox proteins, some of which readily adsorb on various electrode materials, but the theory for biological and non-biological redox molecules is the same (see the page about protein film voltammetry).
Experimental setup
CV experiments are conducted on a solution in a cell fitted with electrodes. The solution consists of the solvent, in which is dissolved electrolyte and the species to be studied.
The cell
A standard CV experiment employs a cell fitted with three electrodes: reference electrode, working electrode, and counter electrode. This combination is sometimes referred to as a three-electrode setup. Electrolyte is usually added to the sample solution to ensure sufficient conductivity. The solvent, electrolyte, and material composition of the working electrode will determine the potential range that can be accessed during the experiment.
The electrodes are immobile and sit in unstirred solutions during cyclic voltammetry. This "still" solution method gives rise to cyclic voltammetry's characteristic diffusion-controlled peaks. This method also allows a portion of the analyte to remain after reduction or oxidation so that it may display further redox activity. Stirring the solution between cyclic voltammetry traces is important in order to supply the electrode surface with fresh analyte for each new experiment. The solubility of an analyte can change drastically with its overall charge; as such it is common for reduced or oxidized analyte species to precipitate out onto the electrode. This layering of analyte can insulate the electrode surface, display its own redox activity in subsequent scans, or otherwise alter the electrode surface in a way that affects the CV measurements. For this reason it is often necessary to clean the electrodes between scans.
Common materials for the working electrode include glassy carbon, platinum, and gold. These electrodes are generally encased in a rod of inert insulator with a disk exposed at one end. A regular working electrode has a radius within an order of magnitude of 1 mm. Having a controlled surface area with a well-defined shape is necessary for being able to interpret cyclic voltammetry results.
To run cyclic voltammetry experiments at very high scan rates a regular working electrode is insufficient. High scan rates create peaks with large currents and increased resistances, which result in distortions. Ultramicroelectrodes can be used to minimize the current and resistance.
The counter electrode, also known as the auxiliary or second electrode, can be any material that conducts current easily, will not react with the bulk solution, and has a surface area much larger than the working electrode. Common choices are platinum and graphite. Reactions occurring at the counter electrode surface are unimportant as long as it continues to conduct current well. To maintain the observed current the counter electrode will often oxidize or reduce the solvent or bulk electrolyte.
Solvents
CV can be conducted using a variety of solutions. Solvent choice for cyclic voltammetry takes into account several requirements. The solvent must dissolve the analyte and high concentrations of the supporting electrolyte. It must also be stable in the potential window of the experiment with respect to the working electrode. It must not react with either the analyte or the supporting electrolyte. It must be pure to prevent interference.
Electrolyte
The electrolyte ensures good electrical conductivity and minimizes iR drop such that the recorded potentials correspond to actual potentials. For aqueous solutions, many electrolytes are available, but typical ones are alkali metal salts of perchlorate and nitrate. In nonaqueous solvents, the range of electrolytes is more limited, and a popular choice is tetrabutylammonium hexafluorophosphate.
Related potentiometric techniques
Potentiodynamic techniques also exist that add low-amplitude AC perturbations to a potential ramp and measure variable response in a single frequency (AC voltammetry) or in many frequencies simultaneously (potentiodynamic electrochemical impedance spectroscopy). The response in alternating current is two-dimensional, characterized by both amplitude and phase. These data can be analyzed to determine information about different chemical processes (charge transfer, diffusion, double layer charging, etc.). Frequency response analysis enables simultaneous monitoring of the various processes that contribute to the potentiodynamic AC response of an electrochemical system.
Although cyclic voltammetry itself is not a hydrodynamic voltammetry technique, other useful electrochemical methods are. In such cases, flow is achieved at the electrode surface by stirring the solution, pumping the solution, or rotating the electrode as is the case with rotating disk electrodes and rotating ring-disk electrodes. Such techniques target steady state conditions and produce waveforms that appear the same when scanned in either the positive or negative directions, thus limiting them to linear sweep voltammetry.
Applications
Cyclic voltammetry (CV) has become an important and widely used electroanalytical technique in many areas of chemistry. It is often used to study a variety of redox processes, to determine the stability of reaction products, the presence of intermediates in redox reactions, electron transfer kinetics, and the reversibility of a reaction. It can be used for electrochemical deposition of thin films or for determining suitable reduction potential range of the ions present in electrolyte for electrochemical deposition. CV can also be used to determine the electron stoichiometry of a system, the diffusion coefficient of an analyte, and the formal reduction potential of an analyte, which can be used as an identification tool. In addition, because concentration is proportional to current in a reversible, Nernstian system, the concentration of an unknown solution can be determined by generating a calibration curve of current vs. concentration.
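A minimal sketch of such a concentration calibration, with invented standard concentrations and peak currents used purely for illustration, could look like the following:

```python
import numpy as np

# Peak currents (microamps) for standards of known concentration (mM) -- illustrative values only
conc_std = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
i_std = np.array([2.1, 4.0, 9.8, 19.5, 39.2])

# For a reversible, Nernstian system the peak current is proportional to concentration,
# so a straight-line calibration is appropriate
slope, intercept = np.polyfit(conc_std, i_std, 1)

# Read an unknown concentration off the calibration line from its measured peak current
i_unknown = 13.4
c_unknown = (i_unknown - intercept) / slope
print(f"estimated concentration ~ {c_unknown:.2f} mM")
```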
In cellular biology it is used to measure the concentrations in living organisms. In organometallic chemistry, it is used to evaluate redox mechanisms.
Measuring antioxidant capacity
Cyclical voltammetry can be used to determine the antioxidant capacity in food and even skin. Low molecular weight antioxidants, molecules that prevent other molecules from being oxidized by acting as reducing agents, are important in living cells because they inhibit cell damage or death caused by oxidation reactions that produce radicals. Examples of antioxidants include flavonoids, whose antioxidant activity is greatly increased with more hydroxyl groups. Because traditional methods to determine antioxidant capacity involve tedious steps, techniques to increase the rate of the experiment are continually being researched. One such technique involves cyclic voltammetry because it can measure the antioxidant capacity by quickly measuring the redox behavior over a complex system without the need to measure each component's antioxidant capacity. Furthermore, antioxidants are quickly oxidized at inert electrodes, so the half-wave potential can be utilized to determine antioxidant capacity. It is important to note that whenever cyclic voltammetry is utilized, it is usually compared to spectrophotometry or high-performance liquid chromatography (HPLC). Applications of the technique extend to food chemistry, where it is used to determine the antioxidant activity of red wine, chocolate, and hops. Additionally, it even has uses in the world of medicine in that it can determine antioxidants in the skin.
Evaluation of a technique
The technique being evaluated uses voltammetric sensors combined in an electronic tongue (ET) to observe the antioxidant capacity in red wines. These electronic tongues (ETs) consist of multiple sensing units like voltammetric sensors, which will have unique responses to certain compounds. This approach is optimal to use since samples of high complexity can be analyzed with high cross-selectivity. Thus, the sensors can be sensitive to pH and antioxidants. As usual, the voltage in the cell was monitored using a working electrode and a reference electrode (silver/silver chloride electrode). Furthermore, a platinum counter electrode allows the current to continue to flow during the experiment. The Carbon Paste Electrodes sensor (CPE) and the Graphite-Epoxy Composite (GEC) electrode are tested in a saline solution before the scanning of the wine so that a reference signal can be obtained. The wines are then ready to be scanned, once with CPE and once with GEC. While cyclic voltammetry was successfully used to generate currents using the wine samples, the signals were complex and needed an additional extraction stage. It was found that the ET method could successfully analyze wine's antioxidant capacity as it agreed with traditional methods like TEAC, Folin-Ciocalteu, and I280 indexes. Additionally, the time was reduced, the sample did not have to be pretreated, and other reagents were unnecessary, all of which diminished the popularity of traditional methods. Thus, cyclic voltammetry successfully determines the antioxidant capacity and even improves previous results.
Antioxidant capacity of chocolate and hops
The phenolic antioxidants for cocoa powder, dark chocolate, and milk chocolate can also be determined via cyclic voltammetry. In order to achieve this, the anodic peaks are calculated and analyzed with the knowledge that the first and third anodic peaks can be assigned to the first and second oxidation of flavonoids, while the second anodic peak represents phenolic acids. Using the graph produced by cyclic voltammetry, the total phenolic and flavonoid content can be deduced in each of the three samples. It was observed that cocoa powder and dark chocolate had the highest antioxidant capacity since they had high total phenolic and flavonoid content. Milk chocolate had the lowest capacity as it had the lowest phenolic and flavonoid content. While the antioxidant content was given using the cyclic voltammetry anodic peaks, HPLC must then be used to determine the purity of catechins and procyanidin in cocoa powder, dark chocolate, and milk chocolate.
Hops, the flowers used in making beer, contain antioxidant properties due to the presence of flavonoids and other polyphenolic compounds. In this cyclic voltammetry experiment, the working electrode voltage was determined using a ferricinium/ferrocene reference electrode. By comparing different hop extract samples, it was observed that the sample containing polyphenols that were oxidized at less positive potentials proved to have better antioxidant capacity.
| Physical sciences | Electrical methods | Chemistry |
1101949 | https://en.wikipedia.org/wiki/Orthoceras | Orthoceras | Orthoceras is a genus of extinct nautiloid cephalopod restricted to Middle Ordovician-aged marine limestones of the Baltic States and Sweden. This genus is sometimes called Orthoceratites. Note it is sometimes misspelled as Orthocera, Orthocerus or Orthoceros.
Orthoceras was formerly thought to have had a worldwide distribution due to the genus' use as a wastebasket taxon for numerous species of conical-shelled nautiloids throughout the Paleozoic and Triassic. Since this work was carried out and re-cataloging of the genus, Orthoceras sensu stricto refers to Orthoceras regulare, of Ordovician-aged Baltic Sea limestones of Sweden and neighboring areas.
These are slender, elongate shells with the middle of the body chamber transversely constricted, and a subcentral orthochoanitic siphuncle. The surface is ornamented by a network of fine lirae. Many other very similar species are included under the genus Michelinoceras.
History of the name
Originally Orthoceras referred to all nautiloids with a straight shell, called an "orthocone". But later research on their internal structures, such as the siphuncle, cameral deposits, and others, showed that these actually belong to a number of groups, even different orders.
According to the authoritative Treatise on Invertebrate Paleontology, the name Orthoceras is now only used to refer to the type species O. regulare from the Middle Ordovician of Sweden and parts of the former Soviet Union such as Russia, Ukraine, Belarus, Estonia and Lithuania. The genus might include a few related species.
Confusion with Baculites
Orthoceras and related orthoconic nautiloid cephalopods
are often confused with the superficially similar Baculites and related Cretaceous orthoconic ammonoids. Both are long and tubular in form, and both are common items for sale in rock shops (often under each other's names). Both lineages evidently evolved the tubular form independently of one another, and at different times in earth history. Orthoceras lived much earlier (Middle Ordovician) than Baculites (Late Cretaceous). The two types of fossils can be distinguished by many features, most obvious among which is the suture line: simple in Orthoceras (see image), intricately foliated in Baculites and related forms.
| Biology and health sciences | Cephalopods | Animals |
1102339 | https://en.wikipedia.org/wiki/Volcanic%20arc | Volcanic arc | A volcanic arc (also known as a magmatic arc) is a belt of volcanoes formed above a subducting oceanic tectonic plate, with the belt arranged in an arc shape as seen from above. Volcanic arcs typically parallel an oceanic trench, with the arc located further from the subducting plate than the trench. The oceanic plate is saturated with water, mostly in the form of hydrous minerals such as micas, amphiboles, and serpentines. As the oceanic plate is subducted, it is subjected to increasing pressure and temperature with increasing depth. The heat and pressure break down the hydrous minerals in the plate, releasing water into the overlying mantle. Volatiles such as water drastically lower the melting point of the mantle, causing some of the mantle to melt and form magma at depth under the overriding plate. The magma ascends to form an arc of volcanoes parallel to the subduction zone.
Volcanic arcs are distinct from volcanic chains formed over hotspots in the middle of a tectonic plate. Volcanoes often form one after another as the plate moves over the hotspot, and so the volcanoes progress in age from one end of the chain to the other. The Hawaiian Islands form a typical hotspot chain, with the older islands to the northwest and Hawaii Island itself, which is just 400,000 years old, at the southeast end of the chain over the hotspot. Volcanic arcs do not generally exhibit such a simple age-pattern.
There are two types of volcanic arcs:
intraoceanic arcs (primitive arcs) form when oceanic crust subducts beneath other oceanic crust on an adjacent plate, creating a volcanic island arc.
continental arcs form when oceanic crust subducts beneath continental crust on an adjacent plate, creating an arc-shaped mountain belt.
In some situations, a single subduction zone may show both aspects along its length, as part of a plate subducts beneath a continent and part beneath adjacent oceanic crust. The Aleutian Islands and adjoining Alaskan Peninsula are an example of such a subduction zone.
The active front of a volcanic arc is the belt where volcanism develops at a given time. Active fronts may move over time (millions of years), changing their distance from the oceanic trench as well as their width.
Tectonic setting
A volcanic arc is part of an arc-trench complex, which is the part of a subduction zone that is visible at the Earth's surface. A subduction zone is where a tectonic plate composed of relatively thin, dense oceanic lithosphere sinks into the Earth's mantle beneath a less dense overriding plate. The overriding plate may be either another oceanic plate or a continental plate. The subducting plate, or slab, sinks into the mantle at an angle, so that there is a wedge of mantle between the slab and the overriding plate.
The boundary between the subducting plate and the overriding plate coincides with a deep and narrow oceanic trench. This trench is created by the gravitational pull of the relatively dense subducting plate pulling the leading edge of the plate downward. Multiple earthquakes occur within the subducting slab with the seismic hypocenters located at increasing depth under the island arc: these quakes define the Wadati–Benioff zones. The volcanic arc forms on the overriding plate over the point where the subducting plate reaches a depth of roughly and is a zone of volcanic activity between in width.
The shape of a volcanic arc is typically convex towards the subducting plate. This is a consequence of the spherical geometry of the Earth. The subducting plate behaves like a flexible thin spherical shell, and such a shell can be bent downwards by an angle of θ, without tearing or wrinkling, only on a circle whose angular radius is θ/2. This means that arcs where the subducting slab descends at a shallower angle will be more tightly curved. Prominent arcs whose slabs subduct at about 45 degrees, such as the Kuril Islands, the Aleutian Islands, and the Sunda Arc, have a radius of about 20 to 22 degrees.
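As a quick arithmetic check of this geometric rule, using the 45-degree dip already quoted for these arcs (the calculation is an editorial illustration, not a figure from the article):

```latex
% Inextensible spherical shell bent down by the dip angle \theta:
% the volcanic arc lies on a small circle of angular radius \theta/2.
\theta = 45^\circ \quad\Longrightarrow\quad \frac{\theta}{2} = 22.5^\circ ,
```

which is close to the observed angular radii of about 20 to 22 degrees for the Kuril, Aleutian, and Sunda arcs.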
Volcanic arcs are divided into those in which the overriding plate is continental (Andean-type arcs) and those in which the overriding plate is oceanic (intraoceanic or primitive arcs). The crust beneath the arc is up to twice as thick as average continental or oceanic crust: The crust under Andean-type arcs is up to thick, while the crust under intraoceanic arcs is thick. Both shortening of the crust and magmatic underplating contribute to thickening of the crust.
Volcanic arcs are characterized by explosive eruption of calc-alkaline magma, though young arcs sometimes erupt tholeiitic magma and a few arcs erupt alkaline magma. Calc-alkaline magma can be distinguished from tholeiitic magma, typical of mid-ocean ridges, by its higher aluminium and lower iron content and by its high content of large-ion lithophile elements, such as potassium, rubidium, caesium, strontium, or barium, relative to high-field-strength elements, such as zirconium, niobium, hafnium, rare-earth elements (REE), thorium, uranium, or tantalum. Andesite is particularly characteristic of volcanic arcs, though it sometimes also occurs in regions of crustal extension.
In the rock record, volcanic arcs can be recognized from their thick sequences of volcaniclastic rock (formed by explosive volcanism) interbedded with greywackes and mudstones and by their calc-alkaline composition. In more ancient rocks that have experienced metamorphism and alteration of their composition (metasomatism), calc-alkaline rocks can be distinguished by their content of trace elements that are little affected by alteration, such as chromium or titanium, whose content is low in volcanic arc rocks. Because volcanic rock is easily weathered and eroded, older volcanic arcs are seen as plutonic rocks, the rocks that formed underneath the arc (e.g. the Sierra Nevada batholith), or in the sedimentary record as lithic sandstones. Paired metamorphic belts, in which a belt of high-temperature, low-pressure metamorphism is located parallel to a belt of low-temperature, high-pressure metamorphism, preserve an ancient arc-trench complex in which the high-temperature, low-pressure belt corresponds to the volcanic arc.
Petrology
In a subduction zone, loss of water from the subducted slab induces partial melting of the overriding mantle and generates low-density, calc-alkaline magma that buoyantly rises to intrude and be extruded through the lithosphere of the overriding plate. Most of the water carried downwards by the slab is contained in hydrous (water-bearing) minerals, such as mica, amphibole, or serpentinite minerals. Water is lost from the subducted plate when the temperature and pressure become sufficient to break down these minerals and release their water content. The water rises into the wedge of mantle overlying the slab and lowers the melting point of mantle rock to the point where magma is generated.
While there is wide agreement on the general mechanism, research continues on the explanation for focused volcanism along a narrow arc some distance from the trench. The distance from the trench to the volcanic arc is greater for slabs subducting at a shallower angle, and this suggests that magma generation takes place when the slab reached a critical depth for the breakdown of an abundant hydrous mineral. This would produce an ascending "hydrous curtain" that accounts for focused volcanism along the volcanic arc. However, some models suggest that water is continuously released from the slab from shallow depths down to , and much of the water released at shallow depths produces serpentinization of the overlying mantle wedge. According to one model, only about 18 to 37 percent of the water content is released at sufficient depth to produce arc magmatism. The volcanic arc is then interpreted as the depth at which the degree of melting becomes great enough to allow the magma to separate from its source rock.
It is now known that the subducting slab may be located anywhere from below the volcanic arc, rather than a single characteristic depth of around , which requires more elaborate models of arc magmatism. For example, water released from the slab at moderate depths might react with amphibole minerals in the lower part of the mantle wedge to produce water-rich chlorite. This chlorite-rich mantle rock is then dragged downwards by the subducting slab, and eventually breaks down to become the source of arc magmatism. The location of the arc depends on the angle and rate of subduction, which determine where hydrous minerals break down and where the released water lowers the melting point of the overlying mantle wedge enough for melting.
The location of the volcanic arc may be determined by the presence of a cool shallow corner at the tip of the mantle wedge, where the mantle rock is cooled by both the overlying plate and the slab. Not only does the cool shallow corner suppress melting, but its high stiffness hinders the ascent of any magma that is formed. Arc volcanism takes place where the slab descends out from under the cool shallow corner, allowing magma to be generated and rise through warmer, less stiff mantle rock.
Magma may be generated over a broad area but become focused into a narrow volcanic arc by a permeability barrier at the base of the overriding plate. Numerical simulations suggest that crystallization of rising magma creates this barrier, causing the remaining magma to pool in a narrow band at the apex of the barrier. This narrow band corresponds to the overlying volcanic arc.
Examples
Two classic examples of oceanic island arcs are the Mariana Islands in the western Pacific Ocean and the Lesser Antilles in the western Atlantic Ocean. The Cascade Volcanic Arc in western North America and the Andes along the western edge of South America are examples of continental volcanic arcs. The best examples of volcanic arcs with both sets of characteristics are in the North Pacific, with the Aleutian Arc consisting of the Aleutian Islands and their extension the Aleutian Range on the Alaska Peninsula, and the Kuril–Kamchatka Arc comprising the Kuril Islands and southern Kamchatka Peninsula.
Continental arcs
Cascade Volcanic Arc
Alaska Peninsula and Aleutian Range
Kamchatka
Andes
Northern Volcanic Zone
Central Volcanic Zone
Southern Volcanic Zone
Austral Volcanic Zone
Central America Volcanic Arc
Trans-Mexican Volcanic Belt
Island arcs
Pacific Ocean
Aleutian Islands
Kuril Islands
Northeastern Japan Arc
Japanese Archipelago including the Ryukyu Islands
Izu–Bonin–Mariana Arc:
Izu Islands
Bonin Islands
Mariana Islands
Luzon Volcanic Arc
Philippines
Tonga and Kermadec Islands
Solomon Islands
Indian Ocean
Andaman and Nicobar Islands
Mentawai Islands
Sunda Arc
Lesser Sunda Islands
Tanimbar and Kai Islands
Mascarene Islands
Mediterranean
Aeolian Islands
South Aegean Volcanic Arc
Atlantic Ocean
Lesser Antilles, including the Leeward Antilles
Scotia Arc
South Sandwich Islands
Ancient island arcs
Insular Islands
Intermontane Islands
Sakhalin Island Arc
| Physical sciences | Volcanology | Earth science |
1102522 | https://en.wikipedia.org/wiki/Pack%20animal | Pack animal | A pack animal, also known as a sumpter animal or beast of burden, is a working animal used to transport goods or materials by carrying them, usually on its back.
Domestic animals of many species are used in this way, among them alpacas, Bactrian camels, donkeys, dromedaries, gayal, goats, horses, llamas, mules, reindeer, water buffaloes and yaks.
Diversity
Traditional pack animals include ungulates such as camels, the domestic yak, reindeer, goats, water buffaloes, and llama, and domesticated members of the horse family including horses, donkeys, and mules. Occasionally, dogs can be used to carry small loads.
Pack animals by region
Arctic – reindeer and sled dogs
Central Africa and Southern Africa – oxen, mules, donkeys
Eurasia – donkeys, oxen, horses, mules
Central Asia – Bactrian camels, yaks, horses, mules, donkeys
South and Southeast Asia – water buffaloes, yaks, Asian elephants
North America – horses, mules, donkeys, goats
North Africa and Middle East – dromedaries, horses, donkeys, mules, oxen
Oceania – donkeys, horses, dromedaries, mules, oxen
South America – llamas, donkeys, mules
Uses
By the sixteenth century, the hauling of goods in wagons drawn by horses and oxen had gradually displaced the use of packhorses, which had been important until the Middle Ages.
Pack animals may be fitted with pack saddles and may also carry saddlebags. Alternatively, a pair of weighted containers, typically placed symmetrically on either side of the animal and known as panniers, may be used.
While traditional usage of pack animals by nomadic tribespeople is declining, a new market is growing in the tourist expeditions industry in regions such as the High Atlas mountains of Morocco, allowing visitors the comfort of backpacking with animals. The use of pack animals "is considered a valid means of viewing and experiencing" some National Parks in America, subject to guidelines and closed areas.
In the 21st century, special forces have received guidance on the use of horses, mules, llamas, camels, dogs, and elephants as pack animals.
Load carrying capacity
The maximum load for a camel is roughly .
Yaks are loaded differently according to region. In Sichuan, is carried for in 6 hours. In Qinghai, at altitude, packs of up to are routinely carried, while up to is carried by the heaviest steers for short periods.
Llamas can carry roughly a quarter of their body weight, so an adult male of can carry some .
Loads for equids are disputed. The US Army specifies a maximum of 20 percent of body weight for mules walking up to a day in mountains, giving a load of up to about . However an 1867 text mentioned a load of up to . In India, the prevention of cruelty rules (1965) limit mules to and ponies to .
Reindeer can carry up to for a prolonged period in mountains.
| Technology | Agriculture, labor and economy | null |
1102900 | https://en.wikipedia.org/wiki/Notostraca | Notostraca | The order Notostraca, containing the single family Triopsidae, is a group of crustaceans known as tadpole shrimp or shield shrimp. The two genera, Triops and Lepidurus, are considered living fossils, with similar forms having existed since the end of the Devonian, around 360 million years ago. They have a broad, flat carapace, which conceals the head and bears a single pair of compound eyes. The abdomen is long, appears to be segmented and bears numerous pairs of flattened legs. The telson is flanked by a pair of long, thin caudal rami. Phenotypic plasticity within taxa makes species-level identification difficult, and is further compounded by variation in the mode of reproduction. Notostracans are omnivores living on the bottom of temporary pools and shallow lakes.
Description
Notostracans are long, with a broad carapace at the front end, and a long, slender abdomen. This gives them a similar overall shape to a tadpole, from which the common name tadpole shrimp derives. The carapace is dorso-ventrally flattened, smooth, and bears no rostrum; it includes the head, and the two sessile compound eyes are located together on top of the head. The two pairs of antennae are much reduced, with the second pair sometimes missing altogether. The mouthparts comprise a pair of uniramous mandibles and no maxillipeds.
The trunk consists of three regions; thorax I, thorax II and the abdomen. Thorax I is made up of 11 segments, each with a pair of well-developed limbs and the genital opening on the eleventh segment. In the female, it is modified to form a "brood pouch". The first one or two pairs of legs differ from the remainder, and probably function as sensory organs.
The somites on thorax II are fused into "rings", which vary in number between species and sexes and appear to be body segments, but do not always reflect the underlying segmentation. Each ring is made up of 2–6 completely or partially fused segments, and the number of legs on each body ring matches its number of segments. The legs become progressively smaller posteriorly, with the last segments being legless.
The limbless abdomen ends in a telson and a pair of long, thin, multi-articulate caudal rami. The form of the telson varies between the two genera: in Lepidurus, a rounded projection extends between the caudal rami, while in Triops there is no such projection.
Life cycle
Within the Notostraca, and even within species, there is variation in the mode of reproduction, with some populations reproducing sexually, some showing self-fertilisation of females, and some showing a mix of the two. The frequency of males in populations is therefore highly variable. In sexual populations, the sperm leave the male's body through simple pores, there being no penis. The eggs are released by the female and then held in the cup-like brood pouch. The eggs are retained by the female only for a short time before being laid, and the larvae develop directly, without passing through a metamorphosis.
Ecology and distribution
Notostracans are omnivorous, eating small animals such as fishes and fairy shrimp. They are found worldwide in freshwater, brackish water, or saline pools, as well as in shallow lakes, peat bogs, and moorland. The species Triops longicaudatus is considered an agricultural pest in California rice paddies, because it prevents light from reaching the rice seedlings by stirring up sediment.
Evolution and fossil record
The fossil record of Notostraca is extensive, occurring in a wide range of geological deposits. The oldest known notostracan is the species Strudops goldenbergi from the Late Devonian (Famennian ~ 365 million years ago) of Belgium. The lack of major morphological change since has led to Notostraca being described as living fossils. Kazacharthra, a group known only from Triassic and Jurassic fossils from Kazakhstan and Western China, are closely related to notostracans, and may belong within the order Notostraca, or alternatively are placed as their sister group within the clade Calmanostraca.
The "central autapomorphy" of the Notostraca is the abandonment of filter feeding in open water, and the development of a benthic lifestyle in muddy waters, taking up food from particles of sediment and preying on small animals. A number of other characteristics are correlated with this change, including the increased size of the animal compared to its relatives, and the loss of the ability to hinge the carapace; although a central keel marks the former separation into two valves, the adductor muscle is missing. Notostracans retain the plesiomorphic condition of having two separate compound eyes, which abut, but have not become united, as seen in other groups of Branchiopoda.
Taxonomy
The extant members of order Notostraca composed a single family, Triopsidae, with only two genera, Triops and Lepidurus.
The problematic Middle Ordovician fossil Douglasocaris has been erected and placed in its own family Douglasocaridae by Caster & Brooks 1956, and may be ancestral to Notostraca.
The phenotypic plasticity shown by notostracan species makes identification to the species level difficult. Many putative species have been described based on morphological variation, such that by the 1950s, as many as 70 species were recognised. Two important revisions – those of Linder in 1952 and Longhurst in 1955 – synonymised many taxa, and resulted in the recognition of only 11 species in the two genera. This taxonomy was accepted for decades, "even attaining the status of dogma". More recent studies, especially those employing molecular phylogenetics, have shown that the eleven currently recognised species conceal a greater number of reproductively isolated populations.
Genera list
Apudites, (Formerly "Notostraca" minor, often referred to as Triops cancriformis minor, or "Triops" minor in historic literature) Lower Triassic, Grès à Voltzia, Vosges Mountains, France; Hassberge Formation, Germany, Late Triassic (Carnian)
Brachygastriops Dabeigou Formation, China, Late Jurassic or Early Cretaceous
Chenops Yixian Formation, China, Early Cretaceous (Aptian)
Dikelocephala, Lower Triassic of North China
Discocephala, Lower Triassic of North China
Heidiops, Lower Permian of the Lodève Basin, France
Jeholops Yixian Formation, China, Early Cretaceous (Aptian)
Lynceites Germany, Canada, Carboniferous
Prolepidurus, Late Jurassic?-Lower Cretaceous, Transbaikal, Russia
Strudops Strud locality, Belgium, late Devonian (Fammenian)
Thuringiops, Upper Oberhof Formation, Thuringian Forest Basin, Germany
Weichangiops Dabeigou Formation, China, Late Jurassic or Early Cretaceous
Xinjiangiops Kelamayi Formation, China, Middle Triassic
Incertae sedis species
"Notostraca" oleseni Yixian Formation, China, Early Cretaceous (Aptian)
"Calmanostraca" hassbergella Hassberge Formation, Germany, Late Triassic (Carnian)
| Biology and health sciences | Crustaceans | Animals |
1104705 | https://en.wikipedia.org/wiki/Calvin%20cycle | Calvin cycle | The Calvin cycle, light-independent reactions, biosynthetic phase, dark reactions, or photosynthetic carbon reduction (PCR) cycle of photosynthesis is a series of chemical reactions that convert carbon dioxide and hydrogen-carrier compounds into glucose. The Calvin cycle is present in all photosynthetic eukaryotes and also many photosynthetic bacteria. In plants, these reactions occur in the stroma, the fluid-filled region of a chloroplast outside the thylakoid membranes. These reactions take the products (ATP and NADPH) of light-dependent reactions and perform further chemical processes on them. The Calvin cycle uses the chemical energy of ATP and the reducing power of NADPH from the light-dependent reactions to produce sugars for the plant to use. These substrates are used in a series of reduction-oxidation (redox) reactions to produce sugars in a step-wise process; there is no direct reaction that converts several molecules of CO2 to a sugar. There are three phases to the light-independent reactions, collectively called the Calvin cycle: carboxylation, reduction reactions, and ribulose 1,5-bisphosphate (RuBP) regeneration.
Though it is also called the "dark reaction", the Calvin cycle does not occur in the dark or during nighttime. This is because the process requires NADPH, which is short-lived and comes from light-dependent reactions. In the dark, plants instead release sucrose into the phloem from their starch reserves to provide energy for the plant. The Calvin cycle thus happens when light is available independent of the kind of photosynthesis (C3 carbon fixation, C4 carbon fixation, and crassulacean acid metabolism (CAM)); CAM plants store malic acid in their vacuoles every night and release it by day to make this process work.
Coupling to other metabolic pathways
The reactions of the Calvin cycle are closely coupled to the thylakoid electron transport chain, as the energy required to reduce the carbon dioxide is provided by NADPH produced during the light dependent reactions. The process of photorespiration, also known as C2 cycle, is also coupled to the Calvin cycle, as it results from an alternative reaction of the RuBisCO enzyme, and its final byproduct is another glyceraldehyde-3-P molecule.
Calvin cycle
The Calvin cycle, Calvin–Benson–Bassham (CBB) cycle, reductive pentose phosphate cycle (RPP cycle) or C3 cycle is a series of biochemical redox reactions that take place in the stroma of chloroplast in photosynthetic organisms. The cycle was discovered in 1950 by Melvin Calvin, James Bassham, and Andrew Benson at the University of California, Berkeley by using the radioactive isotope carbon-14.
Photosynthesis occurs in two stages in a cell. In the first stage, light-dependent reactions capture the energy of light and use it to make the energy-storage molecule ATP and the moderate-energy hydrogen carrier NADPH. The Calvin cycle uses these compounds to convert carbon dioxide and water into organic compounds that can be used by the organism (and by animals that feed on it). This set of reactions is also called carbon fixation. The key enzyme of the cycle is called RuBisCO. In the following biochemical equations, the chemical species (phosphates and carboxylic acids) exist in equilibria among their various ionized states as governed by the pH.
The enzymes in the Calvin cycle are functionally equivalent to most enzymes used in other metabolic pathways such as gluconeogenesis and the pentose phosphate pathway, but the enzymes in the Calvin cycle are found in the chloroplast stroma instead of the cell cytosol, separating the reactions. They are activated in the light (which is why the name "dark reaction" is misleading), and also by products of the light-dependent reaction. These regulatory functions prevent the Calvin cycle from being respired to carbon dioxide. Energy (in the form of ATP) would be wasted in carrying out these reactions when they have no net productivity.
The sum of reactions in the Calvin cycle is the following:
3 CO2 + 6 NADPH + 9 ATP + 5 H2O → glyceraldehyde-3-phosphate (G3P) + 6 NADP+ + 9 ADP + 8 Pi (Pi = inorganic phosphate)
Hexose (six-carbon) sugars are not products of the Calvin cycle. Although many texts list a product of photosynthesis as C6H12O6, this is mainly for convenience to match the equation of aerobic respiration, where six-carbon sugars are oxidized in mitochondria. The carbohydrate products of the Calvin cycle are three-carbon sugar phosphate molecules, or "triose phosphates", namely, glyceraldehyde-3-phosphate (G3P).
Steps
In the first stage of the Calvin cycle, a CO2 molecule is incorporated into one of two three-carbon molecules (glyceraldehyde 3-phosphate or G3P), where it uses up two molecules of ATP and two molecules of NADPH, which had been produced in the light-dependent stage. The three steps involved are:
The enzyme RuBisCO catalyses the carboxylation of ribulose-1,5-bisphosphate, RuBP, a 5-carbon compound, by carbon dioxide (a total of 6 carbons) in a two-step reaction. The product of the first step is an enediol-enzyme complex that can capture CO2 or O2. Thus, the enediol-enzyme complex is the real carboxylase/oxygenase. The CO2 that is captured by the enediol in the second step produces an unstable six-carbon compound called 2-carboxy 3-keto 1,5-biphosphoribotol (CKABP) (or 3-keto-2-carboxyarabinitol 1,5-bisphosphate) that immediately splits into 2 molecules of 3-phosphoglycerate (also written as 3-phosphoglyceric acid, PGA, 3PGA, or 3-PGA), a 3-carbon compound.
The enzyme phosphoglycerate kinase catalyses the phosphorylation of 3-PGA by ATP (which was produced in the light-dependent stage). 1,3-Bisphosphoglycerate (glycerate-1,3-bisphosphate) and ADP are the products. (However, note that two 3-PGAs are produced for every CO2 that enters the cycle, so this step utilizes two ATP per CO2 fixed.)
The enzyme glyceraldehyde 3-phosphate dehydrogenase catalyses the reduction of 1,3BPGA by NADPH (which is another product of the light-dependent stage). Glyceraldehyde 3-phosphate (also called G3P, GP, TP, PGAL, GAP) is produced, and the NADPH itself is oxidized and becomes NADP+. Again, two NADPH are utilized per CO2 fixed.
The next stage in the Calvin cycle is to regenerate RuBP. Five G3P molecules produce three RuBP molecules, using up three molecules of ATP. Since each CO2 molecule produces two G3P molecules, three CO2 molecules produce six G3P molecules, of which five are used to regenerate RuBP, leaving a net gain of one G3P molecule per three CO2 molecules (as would be expected from the number of carbon atoms involved).
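The carbon bookkeeping behind this statement can be written out explicitly; the sketch below simply counts carbon atoms, using the figures already given in the text:

```latex
% Three turns of the cycle, counting carbon atoms only:
\begin{aligned}
3\,\mathrm{CO_2}\ (3\,\mathrm{C}) + 3\,\mathrm{RuBP}\ (15\,\mathrm{C}) &\longrightarrow 6\,\mathrm{G3P}\ (18\,\mathrm{C})\\
5\,\mathrm{G3P}\ (15\,\mathrm{C}) &\longrightarrow 3\,\mathrm{RuBP}\ (15\,\mathrm{C})
\end{aligned}
\qquad \text{net gain: } 1\,\mathrm{G3P}\ (3\,\mathrm{C}).
```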
The regeneration stage can be broken down into a series of steps.
Triose phosphate isomerase converts one of the G3P reversibly into dihydroxyacetone phosphate (DHAP), also a 3-carbon molecule.
Aldolase and fructose-1,6-bisphosphatase convert a G3P and a DHAP into fructose 6-phosphate (6C). A phosphate ion is lost into solution.
Then fixation of another CO2 generates two more G3P.
F6P has two carbons removed by transketolase, giving erythrose-4-phosphate (E4P). The two carbons on transketolase are added to a G3P, giving the ketose xylulose-5-phosphate (Xu5P).
E4P and a DHAP (formed from one of the G3P from the second fixation) are converted into sedoheptulose-1,7-bisphosphate (7C) by aldolase enzyme.
Sedoheptulose-1,7-bisphosphatase (one of only three enzymes of the Calvin cycle that are unique to plants) cleaves sedoheptulose-1,7-bisphosphate into sedoheptulose-7-phosphate, releasing an inorganic phosphate ion into solution.
Fixation of a third CO2 generates two more G3P. The ketose S7P has two carbons removed by transketolase, giving ribose-5-phosphate (R5P), and the two carbons remaining on transketolase are transferred to one of the G3P, giving another Xu5P. This leaves one G3P as the product of fixation of 3 CO2, with generation of three pentoses that can be converted to Ru5P.
R5P is converted into ribulose-5-phosphate (Ru5P, RuP) by phosphopentose isomerase. Xu5P is converted into RuP by phosphopentose epimerase.
Finally, phosphoribulokinase (another plant-unique enzyme of the pathway) phosphorylates RuP into RuBP, ribulose-1,5-bisphosphate, completing the Calvin cycle. This requires the input of one ATP.
Thus, of six G3P produced, five are used to make three RuBP (5C) molecules (totaling 15 carbons), with only one G3P available for subsequent conversion to hexose. This requires nine ATP molecules and six NADPH molecules per three CO2 molecules. The equation of the overall Calvin cycle is shown diagrammatically below.
RuBisCO also reacts competitively with O2 instead of CO2 in photorespiration. The rate of photorespiration is higher at high temperatures. Photorespiration turns RuBP into 3-PGA and 2-phosphoglycolate, a 2-carbon molecule that can be converted via glycolate and glyoxalate to glycine. Via the glycine cleavage system and tetrahydrofolate, two glycines are converted into serine plus CO2. Serine can be converted back to 3-phosphoglycerate. Thus, only 3 of 4 carbons from two phosphoglycolates can be converted back to 3-PGA. It can be seen that photorespiration has very negative consequences for the plant, because, rather than fixing CO2, this process leads to loss of CO2. C4 carbon fixation evolved to circumvent photorespiration, but can occur only in certain plants native to very warm or tropical climates (corn, for example). Furthermore, RuBisCOs catalyzing the light-independent reactions of photosynthesis generally exhibit an improved specificity for CO2 relative to O2, in order to minimize the oxygenation reaction. This improved specificity evolved after RuBisCO incorporated a new protein subunit.
Products
The immediate products of one turn of the Calvin cycle are 2 glyceraldehyde-3-phosphate (G3P) molecules, 3 ADP, and 2 NADP+. (ADP and NADP+ are not really "products". They are regenerated and later used again in the light-dependent reactions). Each G3P molecule is composed of 3 carbons. For the Calvin cycle to continue, RuBP (ribulose 1,5-bisphosphate) must be regenerated. So, 5 out of 6 carbons from the 2 G3P molecules are used for this purpose. Therefore, there is only 1 net carbon produced to play with for each turn. To create 1 surplus G3P requires 3 carbons, and therefore 3 turns of the Calvin cycle. To make one glucose molecule (which can be created from 2 G3P molecules) would require 6 turns of the Calvin cycle. Surplus G3P can also be used to form other carbohydrates such as starch, sucrose, and cellulose, depending on what the plant needs.
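Scaling the cycle's overall equation (given earlier for three CO2) up to the six turns needed for one glucose molecule gives, as an arithmetic sketch based on the figures already stated in the text:

```latex
% Six turns fix 6 CO2 and yield the two surplus G3P needed for one hexose:
6\,\mathrm{CO_2} + 12\,\mathrm{NADPH} + 18\,\mathrm{ATP} + 10\,\mathrm{H_2O}
\longrightarrow 2\,\mathrm{G3P} + 12\,\mathrm{NADP^+} + 18\,\mathrm{ADP} + 16\,\mathrm{P_i},
```

after which the two G3P molecules can be combined to form one glucose.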
Light-dependent regulation
These reactions do not occur in the dark or at night. There is a light-dependent regulation of the cycle enzymes, as the third step requires NADPH.
There are two regulation systems at work when the cycle must be turned on or off: the thioredoxin/ferredoxin activation system, which activates some of the cycle enzymes, and the activation of the RuBisCo enzyme itself, which involves its own activase.
The thioredoxin/ferredoxin system activates the enzymes glyceraldehyde-3-P dehydrogenase, glyceraldehyde-3-P phosphatase, fructose-1,6-bisphosphatase, sedoheptulose-1,7-bisphosphatase, and ribulose-5-phosphate kinase (phosphoribulokinase), which are key points of the process. This happens when light is available, as the ferredoxin protein is reduced in the photosystem I complex of the thylakoid electron chain when electrons are circulating through it. Ferredoxin then binds to and reduces the thioredoxin protein, which activates the cycle enzymes by severing a cystine (disulfide) bond found in all these enzymes. This is a dynamic process as the same bond is formed again by other proteins that deactivate the enzymes. The implications of this process are that the enzymes remain mostly activated by day and are deactivated in the dark when there is no more reduced ferredoxin available.
The enzyme RuBisCo has its own, more complex activation process. It requires that a specific lysine amino acid be carbamylated to activate the enzyme. This lysine binds to RuBP and leads to a non-functional state if left uncarbamylated. A specific activase enzyme, called RuBisCo activase, helps this carbamylation process by removing one proton from the lysine and making the binding of the carbon dioxide molecule possible. Even then the RuBisCo enzyme is not yet functional, as it needs a magnesium ion bound to the lysine to function. This magnesium ion is released from the thylakoid lumen when the inner pH drops due to the active pumping of protons from the electron flow. RuBisCo activase itself is activated by increased concentrations of ATP in the stroma caused by its phosphorylation.
| Biology and health sciences | Metabolic processes | Biology |
1104822 | https://en.wikipedia.org/wiki/Ceratosaurus | Ceratosaurus | Ceratosaurus (from Greek keras 'horn' and sauros 'lizard') was a carnivorous theropod dinosaur that lived in the Late Jurassic period (Kimmeridgian to Tithonian ages). The genus was first described in 1884 by American paleontologist Othniel Charles Marsh based on a nearly complete skeleton discovered in Garden Park, Colorado, in rocks belonging to the Morrison Formation. The type species is Ceratosaurus nasicornis.
The Garden Park specimen remains the most complete skeleton known from the genus and only a handful of additional specimens have been described since. Two additional species, Ceratosaurus dentisulcatus and Ceratosaurus magnicornis, were described in 2000 from two fragmentary skeletons from the Cleveland-Lloyd Quarry of Utah and from the vicinity of Fruita, Colorado. The validity of these additional species has been questioned, however, and all three skeletons possibly represent different growth stages of the same species. In 1999, the discovery of the first juvenile specimen was reported. In 2000, a partial specimen was excavated and described from the Lourinhã Formation of Portugal, providing evidence for the presence of the genus outside of North America. Fragmentary remains have also been reported from Tanzania, Uruguay, and Switzerland, although their assignment to Ceratosaurus is currently not accepted by most paleontologists.
Ceratosaurus was a medium-sized theropod. The original specimen is estimated to be or long, while the specimen described as C. dentisulcatus was larger, at around long. Ceratosaurus was characterized by deep jaws that supported proportionally very long, blade-like teeth, a prominent, ridge-like horn on the midline of the snout, and a pair of horns over the eyes. The forelimbs were very short, but remained fully functional. The hand had four fingers with claws on the first three. The tail was deep from top to bottom. A row of small osteoderms (skin bones) was present down the middle of the neck, back, and tail. Additional osteoderms were present at unknown positions on the animal's body.
Ceratosaurus gives its name to Ceratosauria, a clade of theropod dinosaurs that diverged early on from the evolutionary lineage leading to modern birds. Within Ceratosauria, some paleontologists proposed it to be most closely related to Genyodectes from Argentina, which shares the strongly elongated teeth. The geologically older genus Proceratosaurus from England, although originally described as a presumed antecedent of Ceratosaurus, was later found to be an early tyrannosauroid. Ceratosaurus shared its habitat with other large theropod genera, including Torvosaurus and Allosaurus, and it has been suggested that these theropods occupied different ecological niches to reduce competition. Ceratosaurus may have preyed upon plant-eating dinosaurs, although some paleontologists suggested that it hunted aquatic prey such as fish. The nasal horn was probably not used as a weapon as was originally suggested by Marsh, but more likely was used solely for display.
History of discovery
Holotype specimen of C. nasicornis
The first specimen, holotype USNM 4735, was discovered and excavated by farmer Marshall Parker Felch in 1883 and 1884. Found in articulation, with the bones still connected to each other, it was nearly complete, including the skull. Significant missing parts include an unknown number of vertebrae, all but the last ribs of the trunk, the humeri (upper arm bones), the distal finger bones of both hands, most of the right arm, most of the left leg, and most of the feet. The specimen was found encased in hard sandstone, leading to the skull and spine being heavily distorted during fossilization. The site of discovery, located in the Garden Park area north of Cañon City, Colorado, and known as the Felch Quarry 1, is regarded as one of the richest fossil sites of the Morrison Formation. Numerous dinosaur fossils had been recovered from this quarry even before the discovery of Ceratosaurus, most notably a nearly complete specimen of Allosaurus (USNM 4734) in 1883 and 1884.
After excavation, the specimen was shipped to the Peabody Museum of Natural History in New Haven, where it was studied by Marsh, who described it as the new genus and species Ceratosaurus nasicornis in 1884. The name Ceratosaurus may be translated as "horn lizard" (from the Greek words keras, keratos—"horn" and sauros/saura—"lizard") and nasicornis as "nose horn" (from the Latin words nasus—"nose" and cornu—"horn"). Given the completeness of the specimen, the newly described genus was, at the time, the best-known theropod discovered in America. In 1898 and 1899, the specimen was transferred to the National Museum of Natural History in Washington, DC, along with many other fossils originally described by Marsh. Only part of this material was fully prepared when it arrived in Washington. Subsequent preparation lasted from 1911 to the end of 1918. Packaging and shipment from New Haven to Washington caused some damage to the Ceratosaurus specimen. In 1920, Charles Gilmore published an extensive redescription of this and the other theropod specimens received from New Haven, including the nearly complete Allosaurus specimen recovered from the same quarry.
In an 1892 paper, Marsh published the first skeletal reconstruction of Ceratosaurus, which depicts the animal at in length and in height. As noted by Gilmore in 1920, the trunk was depicted much too long in this reconstruction, incorporating at least six dorsal vertebrae too many. This error was repeated in several subsequent publications, including the first life reconstruction, which was drawn in 1899 by Frank Bond under the guidance of Charles R. Knight, but not published until 1920. A more accurate life reconstruction, published in 1901, was produced by Joseph M. Gleeson, again under Knight's supervision. The holotype was mounted by Gilmore in 1910 and 1911. Since then, it has been exhibited at the National Museum of Natural History. Most early reconstructions show Ceratosaurus in an upright posture, with the tail dragging on the ground. Gilmore's mount of the holotype, in contrast, was well ahead of its time. Inspired by the upper thigh bones, which were found angled against the lower leg, he depicted the mount as a running animal with a horizontal posture and a tail that did not make contact with the ground. Because of the strong flattening of the fossils, Gilmore mounted the specimen, not as a free-standing skeleton, but as a bas-relief within an artificial wall. With the bones being partly embedded in a plaque, scientific access was limited. In the course of the renovation of the museum's dinosaur exhibition between 2014 and 2019, the specimen was dismantled and freed from the encasing plaque. In the new exhibition, which was set to open in 2019, the mount was planned to be replaced by a free-standing cast and the original bones were to be stored in the museum collection to allow full access for scientists.
Additional finds in North America
After the discovery of the holotype of C. nasicornis, a significant Ceratosaurus find was not made until the early 1960s, when paleontologist James Madsen and his team unearthed a fragmentary, disarticulated skeleton including the skull (UMNH VP 5278) in the Cleveland-Lloyd Dinosaur Quarry of Utah. This find represents one of the largest-known Ceratosaurus specimens. A second, articulated specimen including the skull (MWC 1) was discovered in 1976 by Thor Erikson, the son of paleontologist Lance Erikson, near Fruita, Colorado. A fairly complete specimen, it lacks lower jaws, forearms, and gastralia. The skull, although reasonably complete, was found disarticulated and is strongly flattened sideways. Although it was a large individual, it had not yet reached adult size, as indicated by unfused sutures between the skull bones. Scientifically accurate three-dimensional reconstructions of the skull for use in museum exhibits were produced using a complicated process including molding and casting of the individual original bones, correction of deformities, reconstruction of missing parts, assembly of the bone casts into their proper position, and painting to match the original color of the bones.
Both the Fruita and Cleveland-Lloyd specimens were described by Madsen and Samuel Paul Welles in a 2000 monograph, with the Utah specimen being assigned to the new species C. dentisulcatus and the Colorado specimen being assigned to the new species C. magnicornis. The name dentisulcatus refers to the parallel grooves present on the inner sides of the premaxillary teeth and the first three teeth of the lower jaw in that specimen. Magnicornis points to the larger nasal horn. The validity of both species, however, was questioned in subsequent publications. Brooks Britt and colleagues, in 2000, claimed that the C. nasicornis holotype was in fact a juvenile individual, with the two larger species representing the adult state of a single species. Oliver Rauhut, in 2003, and Matthew Carrano and Scott Sampson, in 2008, considered the anatomical differences cited by Madsen and Welles to support these additional species to represent ontogenetic (age-related) or individual variation.
A further specimen (BYUVP 12893) was discovered in 1992 in the Agate Basin Quarry southeast of Moore, Utah, but still awaits description. The specimen, considered the largest known from the genus, includes the front half of a skull, seven fragmentary pelvic dorsal vertebrae, and an articulated pelvis and sacrum. In 1999, Britt reported the discovery of a Ceratosaurus skeleton belonging to a juvenile individual. Discovered in Bone Cabin Quarry in Wyoming, it is 34% smaller than the C. nasicornis holotype and consists of a complete skull as well as 30% of the remainder of the skeleton including a complete pelvis.
Besides these five skeletal finds, fragmentary Ceratosaurus remains have been reported from various localities from stratigraphic zones 2 and 4-6 of the Morrison Formation, including some of the major fossil sites of the formation. Dinosaur National Monument, Utah, yielded an isolated right premaxilla (DNM 972). A large shoulder blade (scapulocoracoid) was reported from Como Bluff in Wyoming. Another specimen stems from the Dry Mesa Quarry of Colorado and includes a left scapulocoracoid, as well as fragments of vertebrae and limb bones. In Mygatt Moore Quarry, Colorado, the genus is known from teeth.
Finds outside North America
From 1909 to 1913, German expeditions of the Berlin Museum für Naturkunde uncovered a diverse dinosaur fauna from the Tendaguru Formation in German East Africa, in what is now Tanzania. Although commonly considered the most important African dinosaur locality, large theropod dinosaurs are only known through few and very fragmentary remains. In 1920, German paleontologist Werner Janensch assigned several dorsal vertebrae from the quarry "TL" to Ceratosaurus, as Ceratosaurus sp. (of uncertain species). In 1925, Janensch named a new species of Ceratosaurus, C. roechlingi, based on fragmentary remains from the quarry "Mw" encompassing a quadrate bone, a fibula, fragmentary caudal vertebrae, and other fragments. This specimen stems from an individual substantially larger than the C. nasicornis holotype.
In their 2000 monograph, Madsen and Welles confirmed the assignment of these finds to Ceratosaurus. In addition, they ascribed several teeth to the genus, which had originally been described by Janensch as a possible species of Labrosaurus, Labrosaurus (?) stechowi. Other authors questioned the assignment of any of the Tendaguru finds to Ceratosaurus, noting that none of these specimens displays features diagnostic for that genus. In 2011, Rauhut found both C. roechlingi and Labrosaurus (?) stechowi to be possible ceratosaurids, but found them to be undiagnostic at genus level and designated them as nomina dubia (doubtful names). In 1990, Timothy Rowe and Jacques Gauthier mentioned yet another Ceratosaurus species from Tendaguru, Ceratosaurus ingens, which purportedly was erected by Janensch in 1920 and was based on 25 isolated, very large teeth up to in length. However, Janensch assigned this species to Megalosaurus, not to Ceratosaurus. Therefore, this name might be a simple copying error. Rauhut, in 2011, showed that Megalosaurus ingens was not closely related to either Megalosaurus or Ceratosaurus, but possibly represents a carcharodontosaurid instead.
In 2000 and 2006, paleontologists led by Octávio Mateus described a find from the Lourinhã Formation of central-west Portugal (ML 352) as a new specimen of Ceratosaurus, consisting of a right femur (upper thigh bone), a left tibia (shin bone), and several isolated teeth recovered from the cliffs of Valmitão beach, between the municipalities of Lourinhã and Torres Vedras. The bones were found embedded in yellow to brown, fine-grained sandstones, which were deposited by rivers as floodplain deposits and belong to the lower levels of the Porto Novo Member, which is thought to be late Kimmeridgian in age. Additional bones of this individual (SHN (JJS)-65), including a left femur, a right tibia, and a partial left fibula (calf bone), were since exposed due to progressing cliff erosion. Although initially part of a private collection, these additional elements became officially curated after the private collection was donated to the Sociedade de História Natural in Torres Vedras and were described in detail in 2015. The specimen was ascribed to the species Ceratosaurus dentisulcatus by Mateus and colleagues in 2006. A 2008 review by Carrano and Sampson confirmed the assignment to Ceratosaurus, but concluded that the assignment to any specific species is not possible at present. In 2015, Elisabete Malafaia and colleagues, who questioned the validity of C. dentisulcatus, assigned the specimen to Ceratosaurus aff. Ceratosaurus nasicornis.
Other reports include a single tooth found in Moutier, Switzerland. Originally named by Janensch in 1920 as Labrosaurus meriani, the tooth was later assigned to Ceratosaurus sp. (of unknown species) by Madsen and Welles. In 2008, Matías Soto and Daniel Perea described teeth from the Tacuarembó Formation in Uruguay, including a presumed premaxillary tooth crown. This shows vertical striations on its inner side and lacks denticles on its front edge. These features are, in this combination, only known from Ceratosaurus. The authors, however, stressed that an assignment to Ceratosaurus is infeasible because the remains are scant, and noted that the assignment of the European and African material to Ceratosaurus has to be viewed with caution. In 2020, Soto and colleagues described additional Ceratosaurus teeth from the same formation that further support their earlier interpretation.
Description
Ceratosaurus followed the body plan typical for large theropod dinosaurs. As a biped, it moved on powerful legs, while its arms were reduced in size. Specimen USNM 4735, the first discovered skeleton and holotype of Ceratosaurus nasicornis, was an individual or long according to separate sources. Whether this animal was fully grown is unclear. Othniel Charles Marsh, in 1884, suggested that this specimen weighed about half as much as the contemporary Allosaurus. In more recent accounts, this was revised to , , or . Three additional skeletons discovered in the latter half of the 20th century were substantially larger. The first of these, UMNH VP 5278, was estimated by James Madsen to have been around long, but was later estimated at long. Its weight was calculated at , , and in separate works. The second skeleton, MWC 1, was somewhat smaller than UMNH VP 5278 and might have weighed . The third, yet undescribed, specimen BYUVP 12893 was claimed to be the largest yet discovered, although estimates have not been published. Another specimen, ML 352, discovered in Portugal in 2000, was estimated at in length and .
Skull
The skull was quite large in proportion to the rest of its body. It measures in length in the C. nasicornis holotype, measured from the tip of the snout to the occipital condyle, which connects to the first cervical vertebra. The width of this skull is difficult to reconstruct, as it is heavily distorted, and Gilmore's 1920 reconstruction was later found to be too wide. The fairly complete skull of specimen MWC 1 was estimated to have been long and wide. This skull was somewhat more elongated than that of the holotype. The back of the skull was more lightly built than in some other larger theropods due to extensive skull openings, yet the jaws were deep to support the proportionally large teeth. The lacrimal bone formed not only the back margin of the antorbital fenestra, a large opening between eye and bony nostril, but also part of its upper margin, unlike in members of the related Abelisauridae. The quadrate bone, which was connected to the lower jaw at its bottom end to form the jaw joint, was inclined so that the jaw joint was displaced backwards in relation to the occipital condyle. This also led to a broadening of the base of the lateral temporal fenestra, a large opening behind the eyes.
The most distinctive feature was a prominent horn situated on the skull midline behind the bony nostrils, which was formed from fused protuberances of the left and right nasal bones. Only the bony horn core is known from fossils. In the living animal, this core would have supported a keratinous sheath. While the base of the horn core was smooth, its upper two-thirds were wrinkled and lined with grooves that would have contained blood vessels when alive. In the holotype, the horn core is long and wide at its base, but quickly narrows to only further up, and is in height. It is longer and lower in the skull of MWC 1. In the living animal, the horn would likely have been more elongated due to its keratinous sheath. Behind the nasal horn, the nasal bones formed an oval groove. Both this groove and the nasal horn serve as features to distinguish Ceratosaurus from related genera. In addition to the large nasal horn, Ceratosaurus possessed smaller, semicircular, bony ridges in front of each eye, similar to those of Allosaurus. These ridges were formed by the lacrimal bones. In juveniles, all three horns were smaller than in adults and the two halves of the nasal horn core were not yet fused.
The premaxillary bones, which formed the tip of the snout, contained merely three teeth on each side, fewer than in most other theropods. The maxillae of the upper jaw were lined with 15 blade-like teeth on each side in the holotype. The first eight of these teeth were very long and robust, but from the ninth tooth onward, they gradually decrease in size. As is typical for theropods, they featured finely serrated edges, which contained some 10 denticles per in the holotype. Specimen MWC 1 merely showed 11 to 12 and specimen UMNH VP 5278 showed 12 teeth in each maxilla. The teeth were more robust and more recurved in the latter specimen. In all specimens, the tooth crowns of the upper jaws were exceptionally long. In specimen UMNH VP 5278, they measured up to long, which is equal to the minimum height of the lower jaw. In the holotype, they are in length, which even surpasses the minimum height of the lower jaw. In other theropods, a comparable tooth length is only known from the possibly closely related Genyodectes. In contrast, several members of Abelisauridae feature very short tooth crowns. In the holotype, each half of the dentary, the tooth-bearing bone of the lower jaw, was equipped with 15 teeth, which are, however, poorly preserved. Both specimens MWC 1 and UMNH VP 5278 show only 11 teeth in each dentary, which were, as shown by the latter specimen, slightly straighter and less sturdy than those of the upper jaw.
Postcranial skeleton
The exact number of vertebrae is unknown due to several gaps in the spine of the Ceratosaurus nasicornis holotype. At least 20 vertebrae formed the neck and back in front of the sacrum. In the middle portion of the neck, the centra (bodies) of the vertebrae were as long as they were tall, while in the front and rear portions of the neck, the centra were shorter than their height. The upwards projecting neural spines were comparatively large and, in the dorsal (back) vertebrae, were as tall as the vertebral centra were long. The sacrum, consisting of six fused sacral vertebrae, was arched upwards, with its vertebral centra strongly reduced in height in its middle portion, as is the case in some other ceratosaurians. The tail comprised around 50 caudal vertebrae and was about half of the animal's total length. In the holotype, it was estimated at . The tail was deep from top to bottom due to its high neural spines and elongated chevrons, bones located below the vertebral centra. As in other dinosaurs, it counterbalanced the body and contained the massive caudofemoralis muscle, which was responsible for forward thrust during locomotion, pulling the upper thigh backwards when contracted.
The scapula (shoulder blade) was fused with the coracoid, forming a single bone without any visible demarcation between the two original elements. The C. nasicornis holotype was found with an articulated left arm including an incomplete hand. A cast of the fossil had been made before preparation to document the original relative positions of the bones. Carpal bones were not known from any specimen, leading some authors to suggest that they were lost in the genus. In a 2016 paper, Matthew Carrano and Jonah Choiniere suggested that one or more cartilaginous (not bony) carpals were probably present, as indicated by a gap present between the forearm bones and the metacarpals, as well as by the surface texture within this gap seen in the cast. In contrast to most more-derived theropods, which showed only three digits on each hand (digits I–III), Ceratosaurus retained four digits, with digit IV being reduced in size. The first and fourth metacarpals were short, while the second was slightly longer than the third. The metacarpus and especially the first phalanges were proportionally very short, unlike in most other basal theropods. Only the first phalanges of digits II, III, and IV are preserved in the holotype. The total number of phalanges and unguals (claw bones) is unknown. The anatomy of metacarpal I indicates that phalanges had originally been present on this digit as well. The pes (foot) consisted of three weight-bearing digits, numbered II–IV. Digit I, which in theropods is usually reduced to a dewclaw that does not touch the ground, is not preserved in the holotype. Marsh, in his original 1884 description, assumed that this digit was lost in Ceratosaurus, but Charles Gilmore, in his 1920 monograph, noted an attachment area on the second metatarsal demonstrating the presence of this digit.
Uniquely among theropods, Ceratosaurus possessed small, elongated, and irregularly formed osteoderms (skin bones) along the midline of its body. Such osteoderms have been found above the neural spines of cervical vertebrae 4 and 5, as well as caudal vertebrae 4 to 10, and probably formed a continuous row that might have extended from the base of the skull to most of the tail. As suggested by Gilmore in 1920, their position in the rock matrix likely reflects their exact position in the living animal. The osteoderms above the tail were found somewhat separated from the neural spines, possibly accounting for skin and muscles present in between, while those of the neck were much closer to the neural spines. Apart from the body midline, the skin contained additional osteoderms, as indicated by a large, roughly quadrangular plate found together with the holotype. The position of this plate on the body, however, is unknown. Specimen UMNH VP 5278 was also found with a number of osteoderms, which have been described as amorphous in shape. Although most of these were found at most 5 m apart from the skeleton, they were not directly associated with any vertebrae, unlike in the C. nasicornis holotype, so their original position on the body cannot be inferred from this specimen.
Classification
In his original description of the Ceratosaurus nasicornis holotype and subsequent publications, Marsh noted a number of characteristics that were unknown in all other theropods known at the time. Two of these features, the fused pelvis and fused metatarsus, were known from modern-day birds and, according to Marsh, clearly demonstrate the close relationship between the latter and dinosaurs. To set the genus apart from Allosaurus, Megalosaurus, and coelurosaurs, Marsh made Ceratosaurus the only member of both a new family, Ceratosauridae, and a new infraorder, Ceratosauria. This was questioned in 1892 by Edward Drinker Cope, Marsh's archrival in the Bone Wars, who argued that distinctive features such as the nasal horn merely showed that C. nasicornis was a distinct species, but were insufficient to justify a distinct genus. Consequently, he assigned C. nasicornis to the genus Megalosaurus, creating the new combination Megalosaurus nasicornis.
Although Ceratosaurus was retained as a distinct genus in all subsequent analyses, its relationships remained controversial during the following century. Neither Ceratosauridae nor Ceratosauria was widely accepted, and only a few, poorly known additional members were identified. Over the years, separate authors classified Ceratosaurus within Deinodontidae, Megalosauridae, Coelurosauria, Carnosauria, and Deinodontoidea. In his 1920 revision, Gilmore argued that the genus was the most basal theropod known from after the Triassic, being not that closely related to any other contemporary theropod known at that time; it thus warranted its own family, Ceratosauridae. It was not until the establishment of cladistic analysis in the 1980s, however, that Marsh's original claim of Ceratosauria as a distinct group gained ground. In 1985, the newly discovered South American genera Abelisaurus and Carnotaurus were found to be closely related to Ceratosaurus. Gauthier, in 1986, recognized Coelophysoidea to be closely related to Ceratosaurus, although this clade falls outside of Ceratosauria in most recent analyses. Many additional members of Ceratosauria have been recognized since then.
Ceratosauria split off early from the evolutionary line leading to modern birds and is considered basal within theropods. Ceratosauria itself contains a group of derived (nonbasal) members of the families Noasauridae and Abelisauridae, which are bracketed within the clade Abelisauroidea, as well as a number of basal members, such as Elaphrosaurus, Deltadromeus, and Ceratosaurus. The position of Ceratosaurus within basal ceratosaurians is under debate. Some analyses considered Ceratosaurus as the most derived of the basal members, forming the sister taxon of Abelisauroidea. Oliver Rauhut, in 2004, proposed Genyodectes as the sister taxon of Ceratosaurus, as both genera are characterized by exceptionally long teeth in the upper jaw. Rauhut grouped Ceratosaurus and Genyodectes within the family Ceratosauridae, which was followed by several later accounts.
Shuo Wang and colleagues, in 2017, concluded that Noasauridae were not nested within Abelisauroidea as was previously assumed, but instead were more basal than Ceratosaurus. Because noasaurids had been used as a fixed point to define the clades Abelisauroidea and Abelisauridae, these clades would consequently include many more taxa per definition, including Ceratosaurus. In a subsequent 2018 study, Rafael Delcourt accepted these results, but pointed out that, as a consequence, Abelisauroidea would need to be replaced by the older synonym Ceratosauroidea, which was hitherto rarely used. For Abelisauridae, Delcourt proposed a new definition that excludes Ceratosaurus, allowing the name to be used in its traditional sense. Wang and colleagues furthermore found that Ceratosaurus and Genyodectes form a clade with the Argentinian genus Eoabelisaurus. Delcourt used the name Ceratosauridae to refer to this same clade, and suggested defining Ceratosauridae as containing all taxa that are more closely related to Ceratosaurus than to the abelisaurid Carnotaurus.
Cladograms showing the relationships of Ceratosaurus have been based on the phylogenetic analysis conducted by Diego Pol and Oliver Rauhut in 2012.
A skull from the Middle Jurassic of England apparently displays a nasal horn similar to that of Ceratosaurus. In 1926, Friedrich von Huene described this skull as Proceratosaurus (meaning "before Ceratosaurus"), assuming that it was an antecedent of the Late Jurassic Ceratosaurus. Today, Proceratosaurus is considered a basal member of Tyrannosauroidea, a much more derived clade of theropod dinosaurs. The nasal horn would have evolved independently in both genera. Oliver Rauhut and colleagues, in 2010, grouped Proceratosaurus within its own family, Proceratosauridae. These authors also noted that the nasal horn is incompletely preserved, opening the possibility that it represented the foremost portion of a more extensive head crest, as seen in some other proceratosaurids such as Guanlong.
Paleobiology
Ecology and feeding
Within the Morrison and Lourinhã formations, Ceratosaurus fossils are frequently found in association with those of other large theropods, including the megalosaurid Torvosaurus and the allosaurid Allosaurus. The Garden Park locality in Colorado contained, besides Ceratosaurus, fossils attributed to Allosaurus. The Dry Mesa Quarry in Colorado, as well as the Cleveland-Lloyd Quarry and the Dinosaur National Monument in Utah, each feature the remains of at least three large theropods: Ceratosaurus, Allosaurus, and Torvosaurus. Likewise, Como Bluff and nearby localities in Wyoming contained remains of Ceratosaurus, Allosaurus, and at least one large megalosaurid. Ceratosaurus was a rare element of the theropod fauna, as it is outnumbered by Allosaurus at an average rate of 7.5 to 1 in sites where they co-occur.
Several studies attempted to explain how these sympatric species could have reduced direct competition. Donald Henderson, in 1998, argued that Ceratosaurus co-occurred with two separate potential species of Allosaurus, which he denoted as "morphs": a morph with a shortened snout, a high and wide skull, and short, backwards-projecting teeth, and a morph characterized by a longer snout, lower skull, and long, vertical teeth. Generally speaking, the greater the similarity between sympatric species regarding their morphology, physiology, and behavior, the more intense competition between these species will be. Henderson came to the conclusion that the short-snouted Allosaurus morph occupied a different ecological niche from both the long-snouted morph and Ceratosaurus. The shorter skull in this morph would have reduced bending moments occurring during biting, thus increased bite force, comparable to the condition seen in cats. Ceratosaurus and the other Allosaurus morph, though, had long-snouted skulls, which are better compared to those of dogs. The longer teeth would have been used as fangs to deliver quick, slashing bites, with the bite force concentrated at a smaller area due to the narrower skull. According to Henderson, the great similarities in skull shape between Ceratosaurus and the long-snouted Allosaurus morph indicate that these forms engaged in direct competition with each other. Therefore, Ceratosaurus might have been pushed out of habitats dominated by the long-snouted morph. Indeed, Ceratosaurus is very rare in the Cleveland-Lloyd Quarry, which contains the long-snouted Allosaurus morph, but appears to be more common in both Garden Park and the Dry Mesa Quarry, in which it co-occurs with the short-snouted morph.
Furthermore, Henderson suggested that Ceratosaurus could have avoided competition by preferring different prey items. The evolution of its extremely elongated teeth might have been a direct result of the competition with the long-snouted Allosaurus morph. Both species could also have preferred different parts of carcasses when acting as scavengers. The elongated teeth of Ceratosaurus could have served as visual signals facilitating the recognition of members of the same species or for other social functions. In addition, the large size of these theropods would have tended to decrease competition, as the number of possible prey items increases with size.
Foster and Daniel Chure, in a 2006 study, concurred with Henderson that Ceratosaurus and Allosaurus generally shared the same habitats and preyed upon the same types of prey, meaning they likely had different feeding strategies to avoid competition. According to these researchers, this is also evidenced by different proportions of the skull, teeth, and arms. The distinction between the two Allosaurus morphs, however, was questioned by some later studies. Kenneth Carpenter, in a 2010 study, found that short-snouted individuals of Allosaurus from the Cleveland-Lloyd Quarry represent cases of extreme individual variation rather than a separate taxon. Furthermore, the skull of USNM 4734 from the Garden Park locality, which formed the basis for Henderson's analysis of the short-snouted morph, was later found to have been reconstructed too short.
In a 2004 study, Robert Bakker and Gary Bir suggested that Ceratosaurus was primarily specialized in aquatic prey such as lungfish, crocodiles, and turtles. As indicated by a statistical analysis of shed teeth from 50 separate localities in and around Como Bluff, teeth of both Ceratosaurus and megalosaurids were most common in habitats in and around water sources such as wet floodplains, lake margins, and swamps. Ceratosaurus also occasionally occurred in terrestrial localities. Allosaurids, however, were equally common in terrestrial and aquatic habitats. From these results, Bakker and Bir concluded that Ceratosaurus and megalosaurids must have predominantly hunted near and within water bodies, with Ceratosaurus also feeding on carcasses of larger dinosaurs on occasion. The researchers furthermore noted the long, low, and flexible body of Ceratosaurus and megalosaurids. Compared to other Morrison theropods, Ceratosaurus showed taller neural spines on the foremost tail vertebrae, which were vertical rather than inclined towards the back. Together with the deep chevron bones on the underside of the tail, they indicate a deep, "crocodile-like" tail possibly adapted for swimming. In contrast, allosaurids feature a shorter, taller, and stiffer body with longer legs. They would have been adapted for rapid running in open terrain and for preying upon large herbivorous dinosaurs such as sauropods and stegosaurs, but as speculated by Bakker and Bir, seasonally switched to aquatic prey items when the large herbivores were absent. However, this theory was challenged by Yun in 2019, who suggested that Ceratosaurus was merely more capable of hunting aquatic prey than other theropods of the Morrison Formation as opposed to being fully semiaquatic.
In his 1986 popular book The Dinosaur Heresies, Bakker argued that the bones of the upper jaw were only loosely attached to the surrounding skull bones, allowing for some degree of movement within the skull, a condition termed cranial kinesis. Likewise, the bones of the lower jaw would have been able to move against each other and the quadrate bone could swing outwards, spreading the lower jaw at the jaw joint. Taken together, these features would have allowed the animal to widen its jaws in order to swallow larger food items. In a 2008 study, Casey Holliday and Lawrence Witmer re-evaluated similar claims made for other dinosaurs, concluding that the presence of muscle-powered cranial kinesis cannot be proven for any dinosaur species and was likely absent in most.
An Allosaurus pubic foot shows marks by the teeth of another theropod, probably Ceratosaurus or Torvosaurus. The location of the bone in the body (along the bottom margin of the torso and partially shielded by the legs) and the fact that it was among the most massive in the skeleton indicates that the Allosaurus was being scavenged. A bone assemblage in the Upper Jurassic Mygatt-Moore Quarry preserves an unusually high occurrence of theropod bite marks, most of which can be attributed to Allosaurus and Ceratosaurus, while others could have been made by a large allosaurid or Torvosaurus given the size of the striations. While the position of the bite marks on the herbivorous dinosaurs is consistent with predation or early access to remains, bite marks found on Allosaurus material suggest scavenging, either from the other theropods or from another Allosaurus. The unusually high concentration of theropod bite marks compared to other assemblages could be explained either by a more complete utilization of resources during a dry season by theropods or by a collecting bias in other localities.
Function of the nasal horn and osteoderms
In 1884, Marsh considered the nasal horn of Ceratosaurus to be a "most powerful weapon" for both offensive and defensive purposes and Gilmore, in 1920, concurred with this interpretation. The use of the horn as a weapon is now generally considered unlikely. In 1985, David Norman believed that the horn was "probably not for protection against other predators," but might instead have been used for intraspecific combat among male ceratosaurs contending for breeding rights. Gregory S. Paul, in 1988, suggested a similar function and illustrated two Ceratosaurus engaged in a nonlethal butting contest. In 1990, Rowe and Gauthier went further, suggesting that the nasal horn of Ceratosaurus was "probably used for display purposes alone" and played no role in physical confrontations. If used for display, the horn likely would have been brightly colored. A display function was also proposed for the row of osteoderms running down the body midline.
Forelimb function
The strongly shortened metacarpals and phalanges of Ceratosaurus raise the question as to whether the hand retained the grasping function assumed for other basal theropods. Within Ceratosauria, an even more extreme hand reduction can be observed in abelisaurids, where the arm lost its original function, and in Limusaurus. In a 2016 paper on the anatomy of the Ceratosaurus hand, Carrano and Jonah Choiniere stressed the great morphological similarity of the hand with those of other basal theropods, suggesting that it still fulfilled its original grasping function, despite its shortening. Although only the first phalanges are preserved, the second phalanges would have been mobile, as indicated by the well-developed articular surfaces, and the digits would likely have allowed a similar degree of motion as in other basal theropods. As in other theropods other than abelisaurids, the first digit would have been slightly turned in when flexed.
Brain and senses
A cast of the brain cavity of the holotype was made under Marsh's supervision, probably during preparation of the skull, allowing Marsh to conclude that the brain "was of medium size, but comparatively much larger than in the herbivorous dinosaurs". The skull bones, however, had been cemented together afterwards, so the accuracy of this cast could not be verified by later studies.
A second, well preserved braincase had been found with specimen MWC 1 in Fruita, Colorado, and was CT-scanned by paleontologists Kent Sanders and David Smith, allowing for reconstructions of the inner ear, gross regions of the brain, and cranial sinuses transporting blood away from the brain. In 2005, the researchers concluded that Ceratosaurus possessed a brain cavity typical for basal theropods and similar to that of Allosaurus. The impressions for the olfactory bulbs, which house the sense of smell, are well-preserved. While similar to those of Allosaurus, they were smaller than in Tyrannosaurus, which is thought to have been equipped with a very keen sense of smell. The semicircular canals, which are responsible for the sense of balance and therefore allow for inferences on habitual head orientation and locomotion, are similar to those found in other theropods. In theropods, these structures are generally conservative, suggesting that functional requirements during locomotion have been similar across species. The foremost of the semicircular canals was enlarged, a feature generally found in bipedal animals. The orientation of the lateral semicircular canal indicates that the head and neck were held horizontally in neutral position.
Fusion of metatarsals and paleopathology
The holotype of C. nasicornis was found with its left metatarsals II to IV fused together. Marsh, in 1884, dedicated a short article to this, at the time, unknown feature in dinosaurs, noting the close resemblance to the condition seen in modern birds. The presence of this feature in Ceratosaurus became controversial in 1890, when Georg Baur speculated that the fusion in the holotype was the result of a healed fracture. This claim was repeated in 1892 by Cope, while arguing that C. nasicornis should be classified as a species of Megalosaurus due to insufficient anatomical differences between these genera. However, examples of fused metatarsals in dinosaurs that are not of pathological origin have been described since, including taxa more basal than Ceratosaurus. Osborn, in 1920, explained that no abnormal bone growth is evident and that the fusion is unusual, but likely not pathological. Ronald Ratkevich, in 1976, argued that this fusion had limited the running ability of the animal, but this claim was rejected by Paul in 1988, who noted that the same feature occurs in many fast-moving animals of today, including ground birds and ungulates. A 1999 analysis by Darren Tanke and Bruce Rothschild suggested that the fusion was indeed pathological, confirming the earlier claim of Baur. Other reports of pathologies include a stress fracture in a foot bone assigned to the genus, as well as a broken tooth of an unidentified species of Ceratosaurus that shows signs of further wear received after the break.
Paleoenvironment and paleobiogeography
All North American Ceratosaurus finds come from the Morrison Formation, a sequence of shallow marine and (predominantly) alluvial sedimentary rocks in the western United States and the most fertile source for dinosaur bones of the continent. According to radiometric dating, the age of the formation ranges between 156.3 million years old at its base and 146.8 million years old at the top, which places it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic. Ceratosaurus is known from Kimmeridgian and Tithonian strata of the formation. The Morrison Formation is interpreted as a semiarid environment with distinct wet and dry seasons. The Morrison Basin stretched from New Mexico to Alberta and Saskatchewan, being formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels, and floodplains. This formation is similar in age to the Lourinhã Formation in Portugal and the Tendaguru Formation in Tanzania.
The Morrison Formation records an environment and time dominated by gigantic sauropod dinosaurs. Other dinosaurs known from the Morrison include the theropods Koparion, Stokesosaurus, Ornitholestes, Allosaurus, and Torvosaurus, the sauropods Apatosaurus, Brachiosaurus, Camarasaurus, and Diplodocus, and the ornithischians Camptosaurus, Dryosaurus, Nanosaurus, Gargoyleosaurus, and Stegosaurus. Allosaurus, which accounted for 70 to 75% of all theropod specimens, was at the top trophic level of the Morrison food web. Other vertebrates that shared this paleoenvironment included ray-finned fishes, frogs, salamanders, turtles like Uluops, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs such as Hoplosuchus, and several species of pterosaurs such as Harpactognathus and Mesadactylus. Shells of bivalves and aquatic snails are also common. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests of tree ferns and ferns (gallery forests) to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum.
A partial Ceratosaurus specimen indicates the presence of the genus in the Portuguese Porto Novo Member of the Lourinhã Formation. Many of the dinosaurs of the Lourinhã Formation are either the same genera as those seen in the Morrison Formation or have a close counterpart. Besides Ceratosaurus, researchers have also noted the presence of Allosaurus and Torvosaurus, both primarily known from the Morrison, in the Portuguese rocks, while Lourinhanosaurus has so far only been reported from Portugal. Herbivorous dinosaurs from the Porto Novo Member include, among others, the sauropods Dinheirosaurus and Zby, as well as the stegosaur Miragaia. During the Late Jurassic, Europe had just been separated from North America by the still narrow Atlantic Ocean. Portugal, as part of the Iberian Peninsula, was still separated from other parts of Europe. According to Mateus and colleagues, the similarity between the Portuguese and North American theropod faunas indicates the presence of a temporary land bridge, allowing for faunal interchange. Malafaia and colleagues, however, argued for a more complex scenario, as other groups, such as sauropods, turtles, and crocodiles, show clearly different species compositions in Portugal and North America. Thus, the incipient separation of these faunas could have led to interchange in some but allopatric speciation in other groups.
| Biology and health sciences | Theropods | Animals |
1105247 | https://en.wikipedia.org/wiki/Alarm%20clock | Alarm clock | An alarm clock or alarm is a clock that is designed to alert an individual or group of people at a specified time. The primary function of these clocks is to awaken people from their night's sleep or short naps; they can sometimes be used for other reminders as well. Most alarm clocks make sounds; some make light or vibration. Some have sensors to identify when a person is in a light stage of sleep, in order to avoid waking someone who is deeply asleep, which causes tiredness, even if the person has had adequate sleep. To turn off the sound or light, a button or handle on the clock is pressed; most clocks automatically turn off the alarm if left unattended long enough. A classic analog alarm clock has an extra hand or inset dial that is used to show the time at which the alarm will ring. Alarm clock functions are also used in mobile phones, watches, and computers.
Many alarm clocks have radio receivers that can be set to start playing at specified times, and are known as clock radios. Additionally, some alarm clocks can set multiple alarms. A progressive alarm clock can have different alarms for different times (see next-generation alarms) and play music of the user's choice. Most modern televisions, computers, mobile phones and digital watches have alarm functions that automatically turn on or sound alerts at a specific time.
Types
Traditional analogue clocks
Traditional mechanical alarm clocks have one or two bells that ring by means of a mainspring that powers a gear to quickly move a hammer back and forth between the two bells, or between the internal sides of a single bell. In some models, the metal cover at the back of the clock itself also functions as the bell. In an electronically operated bell-type alarm clock, the bell is rung by an electromagnetic circuit with an armature that turns the circuit on and off repeatedly.
Digital
Digital alarm clocks can make other noises. Simple battery-powered alarm clocks make a loud buzzing, ringing or beeping sound to wake a sleeper, while novelty alarm clocks can speak, laugh, sing, or play sounds from nature.
History
The ancient Greek philosopher Plato (428–348 BCE) was said to possess a large water clock with an unspecified alarm signal similar to the sound of a water organ; he used it at night, possibly for signaling the beginning of his lectures at dawn (Athenaeus 4.174c). The Hellenistic engineer and inventor Ctesibius (fl. 285–222 BCE) fitted his clepsydras with dial and pointer for indicating the time, and added elaborate "alarm systems, which could be made to drop pebbles on a gong, or blow trumpets (by forcing bell-jars down into water and taking the compressed air through a beating reed) at pre-set times" (Vitruv 11.11).
The late Roman statesman Cassiodorus (c. 485–585) advocated in his rulebook for monastic life the water clock as a useful alarm for the "soldiers of Christ" (Cassiod. Inst. 30.4 f.). The Christian rhetorician Procopius described in detail prior to 529 a complex public striking clock in his home town Gaza which featured an hourly gong and figures moving mechanically day and night.
In China, a striking clock was devised by the Buddhist monk and inventor Yi Xing (683–727). The Chinese engineers Zhang Sixun and Su Song integrated striking clock mechanisms in astronomical clocks in the 10th and 11th centuries, respectively. A striking clock outside of China was the water-powered clock tower near the Umayyad Mosque in Damascus, Syria, which struck once every hour. It is the subject of a book, On the Construction of Clocks and their Use (1203), by Riḍwān ibn al-Sāʿātī, the son of a clockmaker. In 1235, an early monumental water-powered alarm clock that "announced the appointed hours of prayer and the time both by day and by night" was completed in the entrance hall of the Mustansiriya Madrasah in Baghdad.
From the 14th century, some clock towers in Western Europe were also capable of chiming at a fixed time every day; the earliest of these was described by the Florentine writer Dante Alighieri in 1319. The most famous original striking clock tower still standing is possibly the one in St Mark's Clocktower in St Mark's Square, Venice. The St Mark's Clock was assembled in 1493 by the famous clockmaker Gian Carlo Rainieri from Reggio Emilia, where his father Gian Paolo Rainieri had already constructed another famous device in 1481. In 1497, Simone Campanato moulded the great bell (height 1.56 m, diameter 1.27 m), which was put on the top of the tower where it was alternately beaten by the Due Mori (Two Moors), two bronze statues (height 2.60 m) handling a hammer.
User-settable mechanical alarm clocks date back at least to 15th-century Europe. These early alarm clocks had a ring of holes in the clock dial and were set by placing a pin in the appropriate hole.
The first American alarm clock was created in 1787 by Levi Hutchins in Concord, New Hampshire. This device he made only for himself, however, and it only rang at 4 am, in order to wake him for his job. The French inventor Antoine Redier was the first to patent an adjustable mechanical alarm clock, in 1847.
Alarm clocks, like almost all other consumer goods in the United States, ceased production in the spring of 1942, as the factories which made them were converted over to war work during World War II, but they were one of the first consumer items to resume manufacture for civilian use, in November 1944. By that time, a critical shortage of alarm clocks had developed due to older clocks wearing out or breaking down. Workers were late for, or missed completely, their scheduled shifts in jobs critical to the war effort. In a pooling arrangement overseen by the Office of Price Administration, several clock companies were allowed to start producing new clocks, some of which were continuations of pre-war designs, and some of which were new designs, thus becoming among the first "postwar" consumer goods to be made, before the war had even ended. The price of these "emergency" clocks was, however, still strictly regulated by the Office of Price Administration.
The first radio alarm clock was invented by James F. Reynolds, in the 1940s and another design was also invented by Paul L. Schroth Sr.
Clock radio
A clock radio is an alarm clock and radio receiver integrated in one device. The clock may turn on the radio at a designated time to wake the user, and usually includes a buzzer alarm. Typically, clock radios are placed on the bedside stand. Some models offer dual alarm for awakening at different times and "snooze", usually a large button on the top that silences the alarm and sets it to resume sounding a few minutes later. Some clock radios also have a "sleep" timer, which turns the radio on for a set amount of time (usually around one hour). This is useful for people who like to fall asleep while listening to the radio.
Newer clock radios are available with other music sources such as iPod, iPhone, and/or audio CD. When the alarm is triggered, it can play a set radio station or the music from a selected music source to awaken the sleeper. Some models come with a dock for iPod/iPhone that also charges the device while it is docked. They can play AM/FM radio, iPod/iPhone or CD like a typical music player as well (without being triggered by the alarm function). A few popular models offer "nature sounds" like rain, forest, wind, sea, waterfall etc., in place of the buzzer.
Clock radios are powered by AC power from the wall socket. In the event of a power interruption, such as a power outage, older electronic digital models would reset the time to midnight (00:00) and lose their alarm settings, so the alarm would fail to trigger even after power was restored. Many newer clock radios feature a battery backup to maintain the time and alarm settings. Some advanced radio clocks (not to be confused with clocks with AM/FM radios) have a feature which sets the time automatically using signals from atomic clock-synced time signal radio stations such as WWV, making the clock accurate and immune to time reset due to power interruptions.
Alarms in technology
Computer alarms
Alarm clock software programs have been developed for personal computers. There are Web-based alarm clocks, some of which may allow a virtually unlimited number of alarm times (as in a personal information manager) and personalized tones. However, unlike mobile phone alarms, online alarm clocks have some limitations. They do not work when the computer is shut off or in sleep mode. Native applications, however, can wake the computer up from sleep using the built-in real-time clock alarm chip or even power it back on after it has been shut down.
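As a minimal sketch of what such a native alarm program does while the computer is awake (the 07:00 target and terminal-bell "ring" are placeholders; waking the machine from sleep would additionally require the platform's real-time clock facilities, which are not shown here):

```python
import datetime
import time

def seconds_until(target: datetime.time) -> float:
    """Seconds from now until the next occurrence of target (local time)."""
    now = datetime.datetime.now()
    alarm = datetime.datetime.combine(now.date(), target)
    if alarm <= now:                       # time already passed today -> ring tomorrow
        alarm += datetime.timedelta(days=1)
    return (alarm - now).total_seconds()

def run_alarm(target: datetime.time) -> None:
    time.sleep(seconds_until(target))      # idle until the alarm time
    print("\aWake up!")                    # "\a" sounds the terminal bell where supported

if __name__ == "__main__":
    run_alarm(datetime.time(hour=7, minute=0))   # 07:00 local time
```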
Mobile phone alarms
Many modern mobile phones feature built-in alarm clocks that do not need the phone to be switched on for the alarm to go off. Some of these mobile phones feature the ability for the user to set the alarm's ringtone, and in some cases music can be downloaded to the phone and then chosen to play for waking.
Next-generation alarms
Scientific studies on sleep have shown that the sleep stage at awakening is an important factor in amplifying sleep inertia. Alarm clocks involving sleep stage monitoring appeared on the market in 2005. These alarm clocks use sensing technologies such as EEG electrodes and accelerometers to estimate the sleep stage and time the alarm accordingly. Dawn simulators are another technology meant to mitigate these effects.
Sleepers can become accustomed to the sound of their alarm clock if it has been used for a period of time, making it less effective. Progressive alarm clocks, with their more complex waking procedure, can counter this adaptation because the body must respond to more stimuli than a single, simple sound alert.
Alarm signals for impaired hearing
The deaf and hard of hearing are often unable to perceive auditory alarms when asleep. They may use specialized alarms, including alarms with flashing lights instead of or in addition to noise. Alarms which can connect to vibrating devices (small ones inserted into pillows, or larger ones placed under bedposts to shake the bed) also exist.
Time switches
Time switches can be used to turn on anything that will awaken a sleeper, and can therefore be used as alarms. Lights, bells, and radio and TV sets can easily be used. More elaborate devices have also been used, such as machines that automatically prepare tea or coffee. A sound is produced when the drink is ready, so the sleeper awakes to find the freshly brewed drink waiting.
| Technology | Clocks | null |
1105295 | https://en.wikipedia.org/wiki/Sill%20%28geology%29 | Sill (geology) | In geology, a sill is a tabular sheet intrusion that has intruded between older layers of sedimentary rock, beds of volcanic lava or tuff, or along the direction of foliation in metamorphic rock. A sill is a concordant intrusive sheet, meaning that it does not cut across preexisting rock beds. Stacking of sills builds a sill complex and a large magma chamber at high magma flux. In contrast, a dike is a discordant intrusive sheet, which does cut across older rocks.
Formation
Sills are fed by dikes, except in unusual locations where they form in nearly vertical beds attached directly to a magma source. The rocks must be brittle and fracture to create the planes along which the magma intrudes the parent rock bodies, whether this occurs along preexisting planes between sedimentary or volcanic beds or weakened planes related to foliation in metamorphic rock. These planes or weakened areas allow the intrusion of a thin sheet-like body of magma paralleling the existing bedding planes, concordant fracture zone, or foliations. Sills run parallel to beds (layers) and foliations in the surrounding country rock. They can be originally emplaced in a horizontal orientation, although tectonic processes may cause subsequent rotation of horizontal sills up to near vertical orientations.
Sills can be confused with solidified lava flows; however, there are several differences between them. Intruded sills will show partial melting and incorporation of the surrounding country rock. On both contact surfaces of the country rock into which the sill has intruded, evidence of heating will be observed (contact metamorphism). Lava flows will show this evidence only on the lower side of the flow. In addition, lava flows will typically show evidence of vesicles (bubbles) where gases escaped into the atmosphere. Because sills form below the surface, even though generally at shallow depths (up to a few kilometers), the pressure of overlying rock means few if any vesicles can form in a sill. Lava flows will also typically show evidence of weathering on their upper surface, whereas sills, if still covered by country rock, typically do not.
Associated ore deposits
Certain layered intrusions are a variety of sill that often contain important ore deposits. Precambrian examples include the Bushveld, Insizwa and the Great Dyke complexes of southern Africa; and the Duluth intrusive complex along Lake Superior, and the Stillwater igneous complex of the United States. Phanerozoic examples are usually smaller and include the Rùm peridotite complex of Scotland and the Skaergaard igneous complex of east Greenland. These intrusions often contain concentrations of gold, platinum, chromium and other rare elements.
Transgressive sills
Despite their concordant nature, many large sills change stratigraphic level within the intruded sequence, with each concordant part of the intrusion linked by relatively short dike-like segments. Such sills are known as transgressive. The geometry of large sill complexes in sedimentary basins has become clearer with the availability of 3D seismic reflection data. Such data has shown that many sills have an overall saucer shape and that many others are at least in part transgressive.
Examples include the Whin Sill and sills within the Karoo basin.
| Physical sciences | Geologic features | Earth science |
3223397 | https://en.wikipedia.org/wiki/Cock-of-the-rock | Cock-of-the-rock | The cocks-of-the-rock, which compose the genus Rupicola, are large cotingid birds native to South America. The first alleged examples of this species were documented during a research expedition led by the explorer and biologist Sir Joshua Wilson in the mid-1700s. They are found in tropical and subtropical rainforests close to rocky areas, where they build their nests. The genus is composed of only two known extant species: the Andean cock-of-the-rock and the smaller Guianan cock-of-the-rock. The Andean cock-of-the-rock is the national bird of Peru.
Both known species exhibit sexual dimorphism: the males are magnificent birds, not only because of their bright orange or red colors, but also because of their very prominent fan-shaped crests. Like some other cotingids, they have a complex courtship behavior, performing impressive lek displays. The females are overall brownish with hints of the brilliant colors of the males. Females build nests on rocky cliffs or large boulders, and raise the young on their own. They usually lay two or three eggs.
Studies and observations have shown that male cocks-of-the-rock are very territorial. While the females care for the eggs and chicks, the males gather in clans, living together and keeping watch over a display arena. The females nest about 625 feet away from the arena.
Except during the mating season, these birds are wary animals and difficult to see in the rainforest canopy. They primarily feed on fruits and berries and may be important dispersal agents for rainforest seeds.
Taxonomy
The genus Rupicola was introduced by the French zoologist Mathurin Jacques Brisson in 1760 with the Guianan cock-of-the-rock (Rupicola rupicola) as the type species. The genus name Rupicola is Neo-Latin for "cliff-dweller" and combines the Latin rupes, "rock", and -cola, "dweller".
Species
The genus contains two species:
| Biology and health sciences | Tyranni | Animals |
3224219 | https://en.wikipedia.org/wiki/Second%20derivative | Second derivative | In calculus, the second derivative, or the second-order derivative, of a function $f$ is the derivative of the derivative of $f$. Informally, the second derivative can be phrased as "the rate of change of the rate of change"; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation: $a = \frac{dv}{dt} = \frac{d^2x}{dt^2},$
where $a$ is acceleration, $v$ is velocity, $t$ is time, $x$ is position, and $d$ is the instantaneous "delta" or change. The last expression $\frac{d^2x}{dt^2}$ is the second derivative of position ($x$) with respect to time.
On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is upwardly concave, while the graph of a function with a negative second derivative curves in the opposite way.
Second derivative power rule
The power rule for the first derivative, if applied twice, will produce the second derivative power rule as follows: $\frac{d^2}{dx^2}\left[x^n\right] = \frac{d}{dx}\left[n x^{n-1}\right] = n(n-1)x^{n-2}.$
Notation
The second derivative of a function $f$ is usually denoted $f''$. That is: $f'' = (f')'.$
When using Leibniz's notation for derivatives, the second derivative of a dependent variable $y$ with respect to an independent variable $x$ is written $\frac{d^2y}{dx^2}.$
This notation is derived from the following formula: $\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right).$
Example
Given the function $f(x) = x^3,$
the derivative of $f$ is the function $f'(x) = 3x^2.$
The second derivative of $f$ is the derivative of $f'$, namely $f''(x) = 6x.$
Relation to the graph
Concavity
The second derivative of a function can be used to determine the concavity of the graph of . A function whose second derivative is positive is said to be concave up (also referred to as convex), meaning that the tangent line near the point where it touches the function will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (sometimes simply called concave), and its tangent line will lie above the graph of the function near the point of contact.
Inflection points
If the second derivative of a function changes sign, the graph of the function will switch from concave down to concave up, or vice versa. A point where this occurs is called an inflection point. Assuming the second derivative is continuous, it must take a value of zero at any inflection point, although not every point where the second derivative is zero is necessarily a point of inflection.
Second derivative test
The relation between the second derivative and the graph can be used to test whether a stationary point for a function $f$ (i.e., a point $x$ where $f'(x) = 0$) is a local maximum or a local minimum. Specifically,
If $f''(x) < 0$, then $f$ has a local maximum at $x$.
If $f''(x) > 0$, then $f$ has a local minimum at $x$.
If $f''(x) = 0$, the second derivative test says nothing about the point $x$, a possible inflection point.
The reason the second derivative produces these results can be seen by way of a real-world analogy. Consider a vehicle that at first is moving forward at a great velocity, but with a negative acceleration. Clearly, the position of the vehicle at the point where the velocity reaches zero will be the maximum distance from the starting position – after this time, the velocity will become negative and the vehicle will reverse. The same is true for the minimum, with a vehicle that at first has a very negative velocity but positive acceleration.
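As a hedged illustration of the test, the following Python sketch uses the SymPy library to find the stationary points of an example function and classify them by the sign of the second derivative; the function x³ − 3x is an arbitrary choice for illustration, not an example taken from the article.

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                 # example function (assumed, chosen only for illustration)

f1 = sp.diff(f, x)             # f'(x)  = 3x^2 - 3
f2 = sp.diff(f, x, 2)          # f''(x) = 6x

for c in sp.solve(sp.Eq(f1, 0), x):      # stationary points: x = -1 and x = 1
    curvature = f2.subs(x, c)
    if curvature < 0:
        kind = "local maximum"
    elif curvature > 0:
        kind = "local minimum"
    else:
        kind = "test inconclusive (possible inflection point)"
    print(c, kind)             # -1: local maximum, 1: local minimum
```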
Limit
It is possible to write a single limit for the second derivative: $f''(x) = \lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.$
The limit is called the second symmetric derivative. The second symmetric derivative may exist even when the (usual) second derivative does not.
The expression on the right can be written as a difference quotient of difference quotients: $f''(x) = \lim_{h \to 0} \frac{\frac{f(x+h) - f(x)}{h} - \frac{f(x) - f(x-h)}{h}}{h}.$
This limit can be viewed as a continuous version of the second difference for sequences.
However, the existence of the above limit does not mean that the function has a second derivative. The limit above just gives a possibility for calculating the second derivative, but does not provide a definition. A counterexample is the sign function $\operatorname{sgn}(x)$, which is defined as: $\operatorname{sgn}(x) = \begin{cases} -1, & x < 0 \\ 0, & x = 0 \\ 1, & x > 0. \end{cases}$
The sign function is not continuous at zero, and therefore the second derivative for $x = 0$ does not exist. But the above limit exists for $x = 0$: $\lim_{h \to 0} \frac{\operatorname{sgn}(0+h) - 2\operatorname{sgn}(0) + \operatorname{sgn}(0-h)}{h^2} = \lim_{h \to 0} \frac{\operatorname{sgn}(h) + \operatorname{sgn}(-h)}{h^2} = \lim_{h \to 0} \frac{0}{h^2} = 0.$
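The symmetric difference quotient above is also the basis of a simple numerical estimate of the second derivative. The short Python sketch below evaluates it at a small but finite step h; the step size and the test function are assumptions made purely for illustration.

```python
import math

def second_difference(f, x, h=1e-4):
    # Symmetric (central) second difference: [f(x+h) - 2 f(x) + f(x-h)] / h^2
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# For f = sin, the exact second derivative is -sin.
x = 1.0
print(second_difference(math.sin, x))   # approximately -0.8414709...
print(-math.sin(x))                     # exact value      -0.8414709848...
```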
Quadratic approximation
Just as the first derivative is related to linear approximations, the second derivative is related to the best quadratic approximation for a function $f$. This is the quadratic function whose first and second derivatives are the same as those of $f$ at a given point. The formula for the best quadratic approximation to a function $f$ around the point $x = a$ is $f(x) \approx f(a) + f'(a)(x - a) + \tfrac{1}{2} f''(a)(x - a)^2.$
This quadratic approximation is the second-order Taylor polynomial for the function centered at $x = a$.
Eigenvalues and eigenvectors of the second derivative
For many combinations of boundary conditions explicit formulas for eigenvalues and eigenvectors of the second derivative can be obtained. For example, assuming $x \in [0, L]$ and homogeneous Dirichlet boundary conditions (i.e., $v(0) = v(L) = 0$, where $v$ is the eigenvector), the eigenvalues are $\lambda_j = -\frac{j^2 \pi^2}{L^2}$ and the corresponding eigenvectors (also called eigenfunctions) are $v_j(x) = \sqrt{\tfrac{2}{L}} \sin\!\left(\tfrac{j \pi x}{L}\right).$ Here, $v''_j(x) = \lambda_j v_j(x)$ for $j = 1, \ldots, \infty.$
For other well-known cases, see Eigenvalues and eigenvectors of the second derivative.
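These formulas can be checked numerically by discretizing the second derivative as a second-difference matrix and computing its eigenvalues. The Python sketch below does this for the Dirichlet case; the grid size and interval length are assumptions, and the leading numerical eigenvalues should approach −j²π²/L².

```python
import numpy as np

L, n = 1.0, 200                    # interval length and number of interior grid points (assumed)
h = L / (n + 1)

# Second-difference matrix approximating d^2/dx^2 with v(0) = v(L) = 0
D = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

numeric = np.sort(np.linalg.eigvalsh(D))[::-1]      # least negative eigenvalues first
for j in (1, 2, 3):
    print(numeric[j - 1], -(j * np.pi / L) ** 2)    # close to -9.87, -39.5, -88.8
```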
Generalization to higher dimensions
The Hessian
The second derivative generalizes to higher dimensions through the notion of second partial derivatives. For a function $f : \mathbb{R}^3 \to \mathbb{R}$, these include the three second-order partials $\frac{\partial^2 f}{\partial x^2}, \ \frac{\partial^2 f}{\partial y^2}, \ \frac{\partial^2 f}{\partial z^2},$
and the mixed partials $\frac{\partial^2 f}{\partial x\,\partial y}, \ \frac{\partial^2 f}{\partial x\,\partial z}, \ \frac{\partial^2 f}{\partial y\,\partial z}.$
If the function's image and domain both have a potential, then these fit together into a symmetric matrix known as the Hessian. The eigenvalues of this matrix can be used to implement a multivariable analogue of the second derivative test. | Mathematics | Differential calculus | null
3224795 | https://en.wikipedia.org/wiki/Scientific%20modelling | Scientific modelling | Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to visualize the subject.
Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. The following was said by John von Neumann.
There is also an increasing attention to scientific modelling in fields such as science education, philosophy of science, systems theory, and knowledge visualization. There is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling.
Overview
A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful. Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.
Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true. (Leo Apostel (1961), "Formal study of models", in: The Concept and the Role of the Model in Mathematics and Natural and Social, edited by Hans Freudenthal, Springer, pp. 8–9.)
For the scientist, a model is also a way in which the human thought processes can be amplified. For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture).
Basics
Modelling as a substitute for direct measurement and experimentation
Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modeled estimates of outcomes.
Within modeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints. It is task-driven because a model is captured with a certain question or task in mind. Simplification leaves out all the known and observed entities and their relations that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already a model in itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations in formal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation. This requires more choices, such as numerical approximations or the use of heuristics. Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation.
Simulation
A simulation is a way to implement the model, often employed when the model is too complex for the analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models.
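As a minimal sketch of a dynamic simulation, the following Python snippet steps a simple decay model dN/dt = −kN forward in time with Euler's method and compares the result with the analytic solution; the model, the constants, and the step size are illustrative assumptions rather than part of any particular modelling framework.

```python
import math

k = 0.5        # decay rate per unit time (assumed)
n = 1000.0     # initial state (assumed)
dt = 0.01      # time step (assumed)

for _ in range(500):          # 500 steps of 0.01 = 5 units of simulated time
    n += -k * n * dt          # advance the state variable one Euler step

print(round(n, 2))                                # ~81.6 from the simulation
print(round(1000.0 * math.exp(-0.5 * 5.0), 2))    # ~82.08 from the analytic solution
```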
Structure
Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake, to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.
Systems
A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone. The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and form relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete in which the variables change instantaneously at separate points in time and, 2) continuous where the state variables change continuously with respect to time.
Generating a model
Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components.
Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modelers and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeler's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use.
Building a model requires abstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).
Evaluating a model
A model is evaluated first and foremost by its consistency to empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include:
Ability to explain past observations
Ability to predict future observations
Cost of use, especially in combination with other models
Refutability, enabling estimation of the degree of confidence in the model
Simplicity, or even aesthetic appeal
People may attempt to quantify the evaluation of a model using a utility function.
Visualization
Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.
Space mapping
Space mapping refers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model).
Types
Analogical modelling
Assembly modelling
Catastrophe modelling
Choice modelling
Climate model
Computational model
Continuous modelling
Data modelling
Discrete modelling
Document modelling
Econometric model
Economic model
Ecosystem model
Empirical modelling
Enterprise modelling
Futures studies
Geologic modelling
Goal modeling
Homology modelling
Hydrogeology
Hydrography
Hydrologic modelling
Informative modelling
Macroscale modelling
Mathematical modelling
Metabolic network modelling
Microscale modelling
Modelling biological systems
Modelling in epidemiology
Molecular modelling
Multicomputational model
Multiscale modelling
NLP modelling
Phenomenological modelling
Predictive intake modelling
Predictive modelling
Scale modelling
Simulation
Software modelling
Solid modelling
Space mapping
Statistical model
Stochastic modelling (insurance)
Surrogate model
System architecture
System dynamics
Systems modelling
System-level modelling and simulation
Water quality modelling
Applications
Modelling and simulation
One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools.
The figure shows how modelling and simulation is used as a central part of an integrated program in a defence capability development process.
| Physical sciences | Basics | null |
3225400 | https://en.wikipedia.org/wiki/Bicycle%20gearing | Bicycle gearing | Bicycle gearing is the aspect of a bicycle drivetrain that determines the relation between the cadence, the rate at which the rider pedals, and the rate at which the drive wheel turns.
On some bicycles there is only one gear and, therefore, the gear ratio is fixed, but most modern bicycles have multiple gears and thus multiple gear ratios. A shifting mechanism allows selection of the appropriate gear ratio for efficiency or comfort under the prevailing circumstances: for example, it may be comfortable to use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. Different gear ratios and gear ranges are appropriate for different people and styles of cycling.
A cyclist's legs produce power optimally within a narrow pedalling speed range, or cadence. Gearing can be optimized to use this narrow range as efficiently as possible. As in other types of transmissions, the gear ratio is closely related to the mechanical advantage of the drivetrain of the bicycle. On single-speed bicycles and multi-speed bicycles using derailleur gears, the gear ratio depends on the ratio of the number of teeth on the crankset to the number of teeth on the rear sprocket (cogset). For bicycles equipped with hub gears, the gear ratio also depends on the internal planetary gears within the hub. For a shaft-driven bicycle the gear ratio depends on the bevel gears used at each end of the shaft.
For a bicycle to travel at the same speed, using a lower gear (larger mechanical advantage) requires the rider to pedal at a faster cadence, but with less force. Conversely, a higher gear (smaller mechanical advantage) provides a higher speed for a given cadence, but requires the rider to exert greater force or stand while pedalling. Different cyclists may have different preferences for cadence, riding position, and pedalling force. Prolonged exertion of too much force in too high a gear at too low a cadence can increase the chance of knee damage; cadence above 100 rpm becomes less effective after short bursts, as during a sprint.
Measuring gear ratios
Methods
There are at least four different methods for measuring gear ratios: gear inches, metres of development (roll-out), gain ratio, and quoting the number of teeth on the front and rear sprockets respectively. The first three methods result in each possible gear ratio being represented by a single number which allows the gearing of any bicycles to be compared regardless of drive wheel diameter; the numbers produced by different methods are not comparable, but for each method the larger the number the higher the gear. The third method, gain ratio, also takes the length of the crankarm into account, which can vary from bike to bike. The fourth method uses two numbers and is only useful in comparing bicycles with the same drive wheel diameter. In the case of road bikes, this is usually around 670 mm. A 700c "standard" wheel has a 622 mm rim diameter. The final wheel diameter depends on the specific tire but will be approximately 622 mm plus twice the tire width.
Front/rear measurement only considers the sizes of a chainring and a rear sprocket. Gear inches and metres of development also take the size of the rear wheel into account. Gain ratio goes further and also takes the length of a pedal crankarm into account.
Gear inches and metres of development are closely related: to convert from gear inches to metres of development, multiply by 0.08 (more precisely: 0.0798, or exactly: 0.0254 · π).
The methods of calculation which follow assume that any hub gear is in direct drive. Multiplication by a further factor is needed to allow for any other selected hub gear ratio (many online gear calculators have these factors built in for various popular hub gears).
Gear inches = Diameter of drive wheel in inches × (number of teeth in front chainring / number of teeth in rear sprocket). Normally rounded to nearest whole number.
Metres of development = Circumference of drive wheel in metres × (number of teeth in front chainring / number of teeth in rear sprocket).
Gain ratio = (Radius of drive wheel / length of pedal crank) × (number of teeth in front chainring / number of teeth in rear sprocket). Measure radius and length in same units.
Both metres of development and gain ratios are normally rounded to one decimal place.
Gear inches corresponds to the diameter (in inches) of the main wheel of an old-fashioned penny-farthing bicycle with equivalent gearing. Metres of development corresponds to the distance (in metres) traveled by the bicycle for one rotation of the pedals. Gain ratio is the ratio between the distance travelled by the bicycle and the distance travelled by a pedal, and is a pure number, independent of any units of measurement.
Front/rear gear measurement uses two numbers (e.g. 53/19) where the first is the number of teeth in the front chainring and the second is the number of teeth in the rear sprocket. Without doing some arithmetic, it is not immediately obvious that 53/19 and 39/14 represent effectively the same gear ratio.
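The three single-number methods can be expressed directly as small functions. The Python sketch below assumes a 672 mm drive-wheel diameter and 170 mm cranks purely for illustration; with those numbers it also shows why 53/19 and 39/14 come out as effectively the same gear.

```python
import math

def gear_inches(wheel_diameter_in, chainring_teeth, sprocket_teeth):
    # Gear inches = drive-wheel diameter (inches) x (chainring / sprocket)
    return wheel_diameter_in * chainring_teeth / sprocket_teeth

def metres_of_development(wheel_diameter_m, chainring_teeth, sprocket_teeth):
    # Distance travelled per crank revolution = wheel circumference x (chainring / sprocket)
    return math.pi * wheel_diameter_m * chainring_teeth / sprocket_teeth

def gain_ratio(wheel_radius_mm, crank_length_mm, chainring_teeth, sprocket_teeth):
    # Gain ratio = (wheel radius / crank length) x (chainring / sprocket)
    return (wheel_radius_mm / crank_length_mm) * chainring_teeth / sprocket_teeth

# 53/19 and 39/14 on an assumed 672 mm wheel with 170 mm cranks
for front, rear in [(53, 19), (39, 14)]:
    print(front, rear,
          round(gear_inches(672 / 25.4, front, rear)),            # ~74 gear inches for both
          round(metres_of_development(0.672, front, rear), 1),    # ~5.9 m for both
          round(gain_ratio(336, 170, front, rear), 1))            # ~5.5 for both
```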
Examples
The following table provides some comparison of the various methods of measuring gears (the particular numbers are for bicycles with 170 mm cranks, 700C wheels, and 25 mm tyres). Speeds for several cadences in revolutions per minute are also given. On each row the relative values for gear inches, metres of development, gain ratio, and speed are more or less correct, while the front/rear values are the nearest approximation which can be made using typical chainring and cogset sizes. Note that bicycles intended for racing may have a lowest gear of around 45 gear inches (3.6 meters), or 35 gear inches (2.8 meters) if fitted with a compact crankset).
Single speed bicycles
A single-speed bicycle is a type of bicycle with a single gear ratio and a freewheel mechanism. These bicycles are without derailleur gears, hub gearing or other methods for varying the gear ratio of the bicycle. Adult single-speed bicycles typically have a gear ratio of between 55 and 75 gear inches, depending on the rider and the anticipated usage.
There are many types of modern single speed bicycles; BMX bicycles, some bicycles designed for (younger) children, cruiser type bicycles, classic commuter bicycles, unicycles, and bicycles designed for track racing.
Fixed-gear road bicycles and fixed-gear mountain bicycles are also usually single speed in that they typically do not have any gear ratio adjustment. However, fixed gear bicycles do not have a freewheel mechanism to allow coasting.
General considerations
The gearing supplied by the manufacturer on a new bicycle is selected to be useful to the majority of people. Some cyclists choose to fine-tune the gearing to better suit their strength, level of fitness, and expected use. When buying from specialist cycle shops, it may be less expensive to get the gears altered before delivery rather than at some later date. Modern crankset chainrings can be swapped out, as can cogsets.
While long steep hills and/or heavy loads may indicate a need for lower gearing, this can result in a very low speed. Balancing a bicycle becomes more difficult at lower speeds. For example, a bottom gear around 16 gear inches gives an effective speed of perhaps 3 miles/hour (5 km/hour) or less, at which point it might be quicker to walk (bike shoes permitting).
Relative gearing
As far as a cyclist's legs are concerned, when changing gears, the relative difference between two gears is more important than the absolute difference between gears. This relative change, from a lower gear to a higher gear, is normally expressed as a percentage, and is independent of what system is used to measure the gears. Cycling tends to feel more comfortable if nearly all gear changes have more or less the same percentage difference. For example, a change from a 13-tooth sprocket to a 15-tooth sprocket (15.4%) feels very similar to a change from a 20-tooth sprocket to a 23-tooth sprocket (15%), even though the latter has a larger absolute difference.
To achieve such consistent relative differences the absolute gear ratios should be in logarithmic progression; most off-the-shelf cogsets do this with small absolute differences between the smaller sprockets and increasingly larger absolute differences as the sprockets get larger. Because sprockets must have a (relatively small) whole number of teeth it is impossible to achieve a perfect progression; for example the seven derailleur sprockets 14-16-18-21-24-28-32 have an average step size of around 15% but with actual steps varying between 12.5% and 16.7%. The epicyclic gears used within hub gears have more scope for varying the number of teeth than do derailleur sprockets, so it may be possible to get much closer to the ideal of consistent relative differences, e.g. the Rohloff Speedhub offers 14 speeds with an average relative difference of 13.6% and individual variations of around 0.1%.
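The relative steps of the seven-sprocket example above can be checked with a few lines of Python; the cassette used is the one quoted in the text.

```python
cogset = [14, 16, 18, 21, 24, 28, 32]    # the derailleur cassette quoted above

steps = [(big / small - 1.0) * 100.0 for small, big in zip(cogset, cogset[1:])]
print([round(s, 1) for s in steps])       # [14.3, 12.5, 16.7, 14.3, 16.7, 14.3] percent
print(round(sum(steps) / len(steps), 1))  # about 14.8 percent on average, i.e. roughly 15%
```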
Racing cyclists often have gears with a small relative difference of around 7% to 10%; this allows fine adjustment of gear ratios to suit the conditions and maintain a consistent pedalling speed. Mountain bikes and hybrid bikes often have gears with a moderate relative difference of around 15%; this allows for a much larger gear range while having an acceptable step between gears. 3-speed hub gears may have a relative difference of some 33% to 37%; such big steps require a very substantial change in pedalling speed and often feel excessive. A step of 7% corresponds to a 1-tooth change from a 14-tooth sprocket to a 15-tooth sprocket, while a step of 15% corresponds to a 2-tooth change from a 13-tooth sprocket to a 15-tooth sprocket.
By contrast, car engines deliver power over a much larger range of speeds than cyclists' legs do, so relative differences of 30% or more are common for car gearboxes.
Usable gears
On a bicycle with only one gear change mechanism (e.g. rear hub only or rear derailleur only), the number of possible gear ratios is the same as the number of usable gear ratios, which is also the same as the number of distinct gear ratios.
On a bicycle with more than one gear change mechanism (e.g. front and rear derailleur), these three numbers can be quite different, depending on the relative gearing steps of the various mechanisms. The number of gears for such a derailleur equipped bike is often stated simplistically, particularly in advertising, and this may be misleading.
Consider a derailleur-equipped bicycle with 3 chainrings and an 8-sprocket cogset:
the number of possible gear ratios is 24 (=3×8, this is the number usually quoted in advertisements);
the number of usable gear ratios is 22;
the number of distinct gear ratios is typically 16 to 18.
The combination of 3 chainrings and an 8-sprocket cogset does not result in 24 usable gear ratios. Instead it provides 3 overlapping ranges of 7, 8, and 7 gear ratios. The outer ranges only have 7 ratios rather than 8 because the extreme combinations (largest chainring to largest rear sprocket, smallest chainring to smallest rear sprocket) result in a very diagonal chain alignment which is inefficient and causes excessive chain wear. Due to the overlap, there will usually be some duplicates or near-duplicates, so that there might only be 16 or 18 distinct gear ratios. It may not be feasible to use these distinct ratios in strict low-high sequence anyway due to the complicated shifting patterns involved (e.g. simultaneous double or triple shift on the rear derailleur and a single shift on the front derailleur). In the worst case there could be only 10 distinct gear ratios, if the percentage step between chainrings is the same as the percentage step between sprockets. However, if the most popular ratio is duplicated then it may be feasible to extend the life of the gear set by using different versions of this popular ratio.
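The difference between possible, usable, and distinct ratios can be illustrated by enumerating the combinations. The chainrings and cassette below are assumptions chosen to resemble a typical 3x8 setup, and the 3% threshold for treating two ratios as "the same" is likewise an arbitrary illustrative choice.

```python
from itertools import product

chainrings = [28, 38, 48]                      # assumed typical triple
cogset = [12, 14, 16, 18, 21, 24, 28, 32]      # assumed 8-sprocket cassette

ratios = [(f, r, f / r) for f, r in product(chainrings, cogset)]
print(len(ratios))                             # 24 possible combinations

# Drop the two extreme cross-chained combinations (big-big and small-small)
usable = [t for t in ratios
          if not (t[0] == max(chainrings) and t[1] == max(cogset))
          and not (t[0] == min(chainrings) and t[1] == min(cogset))]
print(len(usable))                             # 22 usable combinations

# Count ratios that differ from the previously kept ratio by more than ~3%
distinct = []
for _, _, g in sorted(usable, key=lambda t: t[2]):
    if not distinct or g / distinct[-1] > 1.03:
        distinct.append(g)
print(len(distinct))                           # 16 distinct ratios with these assumed numbers
```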
Gearing range
The gearing range indicates the difference between bottom gear and top gear, and provides some measure of the range of conditions (high speed versus steep hills) with which the gears can cope; the strength, experience, and fitness level of the cyclist are also significant. A range of 300% or 3:1 means that for the same pedalling speed a cyclist could travel 3 times as fast in top gear as in bottom gear (assuming sufficient strength, etc.). Conversely, for the same pedalling effort, a cyclist could climb a much steeper hill in bottom gear than in top gear.
The overlapping ranges with derailleur gears mean that 24 or 27 speed derailleur gears may only have the same total gear range as a (much more expensive) Rohloff 14-speed hub gear. Internal hub geared bikes typically have a more restricted gear range than comparable derailleur-equipped bikes, and have fewer ratios within that range.
The approximate gear ranges which follow are merely indicative of typical gearing setups, and will vary somewhat from bicycle to bicycle.
Gear ranges of almost 700% can be achieved on derailleur setups, though this may result in some rather large steps between gears or some awkward gear changes. However, through the careful choice of chainrings and rear cogsets, e.g. 3 chainrings 48-34-20 and a 10-speed cassette 11–32, one can achieve an extremely wide range of gears that are still well spaced. This sort of setup has proven useful on a multitude of bicycles such as cargo bikes, touring bikes and tandems. Even higher gear ranges can be achieved by using a 2-speed bottom bracket hub gear in conjunction with suitable derailleurs.
Types of gear change mechanisms
There are two main types of gear change mechanisms, known as derailleurs and hub gears. Both systems have advantages and disadvantages, and which is preferable depends on the particular circumstances. There are a few other relatively uncommon types of gear change mechanism which are briefly mentioned near the end of this section. Derailleur mechanisms can only be used with chain drive transmissions, so bicycles with belt drive or shaft drive transmissions must either be single speed or use hub gears.
External (derailleur)
External gearing is so called because all the sprockets involved are readily visible. There may be up to 4 chainrings attached to the crankset and pedals, and typically between 5 and 12 sprockets making up the cogset attached to the rear wheel. Modern front and rear derailleurs typically consist of a moveable chain-guide that is operated remotely by a Bowden cable attached to a shifter mounted on the down tube, handlebar stem, or handlebar. A shifter may be a single lever, or a pair of levers, or a twist grip; some shifters may be incorporated with brake levers into a single unit. When a rider operates the shifter while pedalling, the change in cable tension moves the chain-guide from side to side, "derailing" the chain onto different sprockets. The rear derailleur also has spring-mounted jockey wheels which take up any slack in the chain.
Most hybrid, touring, mountain, and racing bicycles are equipped with both front and rear derailleurs. There are a few gear ratios which have a straight chain path, but most of the gear ratios will have the chain running at an angle. The use of two derailleurs generally results in some duplicate or near duplicate gear ratios, so that the number of distinct gear ratios is typically around two-thirds of the number of advertised gear ratios. The more common configurations have specific names which are usually related to the relative step sizes between the front chainrings and the rear cogset.
Crossover gearing
This style is commonly found on mountain, hybrid, and touring bicycles with three chainrings. The relative step on the chainrings (say 25% to 35%) is typically around twice the relative step on the cogset (say 15%), e.g. chainrings 28-38-48 and cogset 12-14-16-18-21-24-28.
Advantages of this arrangement include:
A wide range of gears may be available suitable for touring and for off-road riding.
There is seldom any need to change both front and rear derailleurs simultaneously so it is generally more suitable for casual or inexperienced cyclists.
One disadvantage is that the overlapping gear ranges result in a lot of duplication or near-duplication of gear ratios.
Multi-range gearing
This style is commonly found on racing bicycles with two chainrings. The relative step on the chainrings (say 35%) is typically around three or four times the relative step on the cogset (say 8% or 10%), e.g. chainrings 39-53 and close-range cogsets 12-13-14-15-16-17-19-21 or 12-13-15-17-19-21-23-25. This arrangement provides much more scope for adjusting the gear ratio to maintain a constant pedalling speed, but any change of chainring must be accompanied by a simultaneous change of 3 or 4 sprockets on the cogset if the goal is to switch to the next higher or lower gear ratio.
Alpine gearing
This term has no generally accepted meaning. Originally it referred to a gearing arrangement which had one especially low gear (for climbing Alpine passes); this low gear often had a larger than average jump to the next lowest gear. In the 1960s the term was used by salespeople to refer to then current 10-speed bicycles (2 chainrings, 5-sprocket cogset), without any regard to its original meaning. The nearest current equivalent to the original meaning can be found in the Shimano Megarange cogsets, where most of the sprockets have roughly a 15% relative difference, except for the largest sprocket which has roughly a 30% difference; this provides a much lower gear than normal at the cost of a large gearing jump.
Half-step gearing
There are two chainrings whose relative difference (say 10%) is about half the relative step on the cogset (say 20%). This was used in the mid-20th century when front derailleurs could only handle a small step between chainrings and when rear cogsets only had a small number of sprockets, e.g. chainrings 44-48 and cogset 14-17-20-24-28. The effect is to provide two interlaced gear ranges without any duplication. However to step sequentially through the gear ratios requires a simultaneous front and rear shift on every other gear change.
Half-step plus granny gearing
There are three chainrings with half-step differences between the larger two and multi-range differences between the smaller two, e.g. chainrings 24-42-46 and cogset 12-14-16-18-21-24-28-32-36. This general arrangement is suitable for touring with most gear changes being made using the rear derailleur and occasional fine tuning using the two large chainrings. The small chainring (granny gear) is a bailout for handling steeper hills, but it requires some anticipation in order to use it effectively.
Internal (hub)
Internal gearing is so called because all the gears involved are hidden within a wheel hub. Hub gears work using internal planetary, or epicyclic, gearing which alters the speed of the hub casing and wheel relative to the speed of the drive sprocket. They have just a single chainring and a single rear sprocket, almost always with a straight chain path between the two. Hub gears are available with between 2 and 14 speeds; weight and price tend to increase with the number of gears. All the advertised speeds are available as distinct gear ratios controlled by a single shifter (except for some early 5-speed models which used two shifters). Hub gearing is often used for bicycles intended for city-riding and commuting.
Internal (bottom bracket)
Current systems have gears incorporated in the crankset or bottom bracket. Patents for such systems appeared as early as 1890. The Schlumpf Mountain Drive and Speed Drive have been available since 2001. Some systems offer direct drive plus one of three variants (reduction 1:2.5, increase 1.65:1, and increase 2.5:1). Changing gears is accomplished by tapping, with the foot, a button protruding on each side of the bottom bracket spindle. The effect is that of having a bicycle with twin chainrings with a massive difference in sizes. Pinion GmbH introduced an 18-speed gearbox model in 2010, offering an evenly spaced 636% range. This gearbox is actuated by a traditional twist shifter and uses two cables for gear changing. The Pinion system is well suited to mountain bicycles due to its wide range and low centre of gravity, which suits full-suspension bikes, but it is still somewhat heavier than a derailleur-based drivetrain.
Internal and external combined
It is sometimes possible to combine a hub gear with derailleur gears. There are several commercially available possibilities:
One standard option for the Brompton folding bicycle is to use a 3-speed hub gear (roughly a 30% difference between gear ratios) in combination with a 2-speed derailleur gear (roughly a 15% difference) to give 6 distinct gear ratios; this is an example of half-step gearing. Some Brompton suppliers offer a 2-speed chainring 'Mountain Drive' as well, which results in 12 distinct gear ratios with a range exceeding 5:1; in this case, the change from 6th to 7th gear involves changing all three sets of gears simultaneously.
The SRAM DualDrive system uses a standard 8 or 9-speed cogset mounted on a three-speed internally geared hub, offering a similar gear range to a bicycle with a cogset and triple chainrings.
Less common is the use of a double or triple chainring in conjunction with an internally geared hub, extending the gear range without having to fit multiple sprockets to the hub. However, this does require a chain tensioner of some sort, negating some of the advantages of hub gears.
At the extreme opposite from a single-speed bicycle, hub gears can be combined with both front and rear derailleurs, giving a very wide-ranging drivetrain at the expense of weight and complexity of operation, since there are a total of three sets of gears. This approach may be suitable for recumbent trikes, where very low gears can be used without balance issues, and the aerodynamic position allows higher gears than normal.
Others
There have been, and still are, some quite different methods of selecting a different gear ratio:
Retro-direct drivetrains used on some early 20th century bicycles have been resurrected by bicycle hobbyists. These have two possible gear ratios but no gear lever; the operator simply pedals forward for one gear and backward for the other. The chain path is quite complicated, since it effectively has to do a figure of eight as well as follow the normal chain path.
Flip-flop hubs have a double-sided rear wheel with a (different sized) sprocket on each side. To change gear: stop, remove the rear wheel, flip it over, replace the wheel, adjust chain tension, resume cycling. Current double sided wheels typically have a fixed sprocket on one side and a freewheel sprocket on the other.
Prior to 1937 this was the only permitted form of gear changing on the Tour de France. Competitors could have 2 sprockets on each side of the rear wheel, but still had to stop to manually move the chain from one sprocket to the other and adjust the position of the rear wheel so as to maintain the correct chain tension.
Continuously variable transmissions are a relatively new development in bicycles (though not a new idea). Mechanisms like the NuVinci gearing system use balls connected to two disks by static friction - changing the point of contact changes the gear ratio.
Automatic transmissions have been demonstrated and marketed for both derailleur and hub gear mechanisms, often accompanied by a warning to disengage auto-shifting if standing on the pedals. These have met with limited market success.
Moving the connection point on a lever changes the mechanical advantage of a drive system in a way analogous to changing gear ratios. Examples include the American Star Bicycle and the Stringbike.
Efficiency
The numbers in this section apply to the efficiency of the drive-train, including means of transmission and any gearing system. In this context efficiency is concerned with how much power is delivered to the wheel compared with how much power is put into the pedals. For a well-maintained transmission system, efficiency is generally between 86% and 99%, as detailed below.
Factors besides gearing which affect performance include rolling resistance and air resistance:
Rolling resistance can vary by a factor of 10 or more depending on type and dimensions of tire and the tire pressure.
Air resistance increases greatly as speed increases and is the most significant factor at speeds above 10 to 12 miles (15 to 20 km) per hour (the drag force increases in proportion to the square of the speed, thus the power required to overcome it increases in proportion to the cube of the speed).
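The cube-of-speed relationship in the second point above can be made concrete with a rough calculation; the air density and the drag-area product in the Python sketch below are assumed values for an upright rider, not figures from the article.

```python
rho = 1.225     # air density in kg/m^3 (assumed, near sea level)
cda = 0.4       # drag coefficient x frontal area in m^2 (assumed upright riding position)

for v_kmh in (15, 20, 30, 40):
    v = v_kmh / 3.6                        # convert km/h to m/s
    p_drag = 0.5 * rho * cda * v ** 3      # power needed against air resistance alone, in watts
    print(v_kmh, "km/h:", round(p_drag), "W")   # roughly 18, 42, 142, 336 W
```

With these assumed numbers, doubling the speed from 20 to 40 km/h multiplies the aerodynamic power requirement by a factor of eight.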
Human factors can also be significant. Rohloff argues that overall efficiency can be improved in some cases by using a slightly less efficient gear ratio when this leads to greater human efficiency (in converting food to pedal power) because a more effective pedalling speed is being used.
Overview
An encyclopedic overview can be found in Chapter 9 of "Bicycling Science" which covers both theory and experimental results. Some details extracted from these and other experiments are provided in the next subsection, with references to the original reports.
Factors which have been shown to affect the drive-train efficiency include the type of transmission system (chain, shaft, belt), the type of gearing system (fixed, derailleur, hub, infinitely variable), the size of the sprockets used, the magnitude of the input power, the pedalling speed, and how rusty the chain is. For a particular gearing system, different gear ratios generally have different efficiencies.
Some experiments have used an electric motor to drive the shaft to which the pedals are attached, while others have used averages of a number of actual cyclists. It is not clear how the steady power delivered by a motor compares with the cyclic power provided by pedals. Rohloff argues that the constant motor power should match the peak pedal power rather than the average (which is half the peak).
There is little independent information available relating to the efficiency of belt drives and infinitely variable gear systems; even the manufacturers/suppliers appear reluctant to provide any numbers.
Details
Derailleur type mechanisms of a typical mid-range product (of the sort used by serious amateurs) achieve between 88% and 99% mechanical efficiency at 100 W. In derailleur mechanisms the highest efficiency is achieved by the larger sprockets. Efficiency generally decreases with smaller sprocket and chainring sizes.
Derailleur efficiency is also compromised with cross-chaining, or running large-ring to large-sprocket or small-ring to small-sprocket. This cross-chaining also results in increased wear because of the lateral deflection of the chain.
Chester Kyle and Frank Berto reported in "Human Power" 52 (Summer 2001) that testing on three derailleur systems (from 4 to 27 gears) and eight gear hub transmissions (from 3 to 14 gears), performed with 80 W, 150 W, 200 W inputs, gave results as follows:
Efficiency testing of bicycle gearing systems is complicated by a number of factors; in particular, all systems tend to be more efficient at higher power rates. 200 W will drive a typical bicycle at , while athletes can achieve 400 W, at which point efficiencies "approaching 98%" are claimed.
At a more typical 150 W, hub-gears tend to be around 2% less efficient than a derailleur system assuming that both systems are well maintained.
| Technology | Human-powered transport | null |
3225759 | https://en.wikipedia.org/wiki/Caryophyllene | Caryophyllene | Caryophyllene (), more formally (−)-β-caryophyllene (BCP), is a natural bicyclic sesquiterpene that occurs widely in nature. Caryophyllene is notable for having a cyclobutane ring, as well as a trans-double bond in a 9-membered ring, both rarities in nature.
Production
Caryophyllene can be produced synthetically, but it is invariably obtained from natural sources because it is widespread. It is a constituent of many essential oils, especially clove oil, the oil from the stems and flowers of Syzygium aromaticum (cloves), the essential oil of Cannabis sativa, copaiba, rosemary, and hops. It is usually found as a mixture with isocaryophyllene (the cis double bond isomer) and α-humulene (obsolete name: α-caryophyllene), a ring-opened isomer.
Caryophyllene is one of the chemical compounds that contributes to the aroma of black pepper.
Basic research
β-Caryophyllene is under basic research for its potential action as an agonist of the cannabinoid receptor type 2 (CB2 receptor). In other basic studies, β-caryophyllene has a binding affinity of Ki = 155 nM at the CB2 receptors.
β-Caryophyllene has higher cannabinoid activity than its ring-opened isomer α-caryophyllene (humulene), which may modulate CB2 activity. For comparison of binding, cannabinol binds to the CB2 receptors as a partial agonist with an affinity of Ki = 126.4 nM, while delta-9-tetrahydrocannabinol binds to the CB2 receptors as a partial agonist with an affinity of Ki = 36 nM.
Safety
Caryophyllene has been given generally recognized as safe (GRAS) designation by the FDA and is approved by the FDA for use as a food additive, typically for flavoring. Doses of up to 700 mg/kg given daily to rats for 90 days did not produce any significant toxic effects. Caryophyllene has a median lethal dose (LD50) of 5,000 mg/kg in mice.
Metabolism and derivatives
14-Hydroxycaryophyllene oxide (C15H24O2) was isolated from the urine of rabbits treated with (−)-caryophyllene (C15H24). The X-ray crystal structure of 14-hydroxycaryophyllene (as its acetate derivative) has been reported.
The metabolism of caryophyllene progresses through (−)-caryophyllene oxide (C15H24O) since the latter compound also afforded 14-hydroxycaryophyllene (C15H24O) as a metabolite.
Caryophyllene (C15H24) → caryophyllene oxide (C15H24O) → 14-hydroxycaryophyllene (C15H24O) → 14-hydroxycaryophyllene oxide (C15H24O2).
Caryophyllene oxide, in which the alkene group of caryophyllene has become an epoxide, is the component responsible for cannabis identification by drug-sniffing dogs and is also an approved food additive, often as flavoring. Caryophyllene oxide may have negligible cannabinoid activity.
Natural sources
The approximate quantity of caryophyllene in the essential oil of each source is given in square brackets ([ ]):
Cannabis (Cannabis sativa) [3.8–37.5% of cannabis flower essential oil]
Black caraway (Carum nigrum) [7.8%]
Cloves (Syzygium aromaticum) [1.7–19.5% of clove bud essential oil]
Hops (Humulus lupulus) [5.1–14.5%]
Basil (Ocimum spp.) [5.3–10.5% O. gratissimum; 4.0–19.8% O. micranthum]
Oregano (Origanum vulgare) [4.9–15.7%]
Black pepper (Piper nigrum) [7.29%]
Lavender (Lavandula angustifolia) [4.62–7.55% of lavender oil]
Rosemary (Rosmarinus officinalis) [0.1–8.3%]
True cinnamon (Cinnamomum verum) [6.9–11.1%]
Malabathrum (Cinnamomum tamala) [25.3%]
Ylang-ylang (Cananga odorata) [3.1–10.7%]
Copaiba oil (Copaifera)
Biosynthesis
Caryophyllene is a common sesquiterpene among plant species. It is biosynthesized from the common terpene precursors dimethylallyl pyrophosphate (DMAPP) and isopentenyl pyrophosphate (IPP). First, single units of DMAPP and IPP are reacted via an SN1-type reaction with the loss of pyrophosphate, catalyzed by the enzyme GPPS2, to form geranyl pyrophosphate (GPP). This further reacts with a second unit of IPP, also via an SN1-type reaction catalyzed by the enzyme IspA, to form farnesyl pyrophosphate (FPP). Finally, FPP undergoes QHS1 enzyme-catalyzed intramolecular cyclization to form caryophyllene.
Compendial status
Food Chemicals Codex
| Physical sciences | Terpenes and terpenoids | Chemistry |
3227482 | https://en.wikipedia.org/wiki/Humulene | Humulene | Humulene, also known as α-humulene or α-caryophyllene, is a naturally occurring monocyclic sesquiterpene (C15H24), containing an 11-membered ring and consisting of 3 isoprene units containing three nonconjugated C=C double bonds, two of them being triply substituted and one being doubly substituted. It was first found in the essential oils of Humulus lupulus (hops), from which it derives its name. Humulene is an isomer of β-caryophyllene, and the two are often found together as a mixture in many aromatic plants.
Occurrence
Humulene is one of the components of the essential oil from the flowering cone of the hops plant, Humulus lupulus, from which it derives its name. The concentration of humulene varies among different varieties of the plant but can be up to 40% of the essential oil. Humulene and its reaction products in the brewing process of beer give many beers their "hoppy" aroma. Noble hop varieties have been found to have higher levels of humulene, while other bitter hop varieties contain low levels. Multiple epoxides of humulene are produced in the brewing process. In a scientific study involving gas chromatography–mass spectrometry analysis of samples and a trained sensory panel, it was found that the hydrolysis products of humulene epoxide II specifically produce a "hoppy" aroma in beer.
α-Humulene has been found in many aromatic plants on all continents, often together with its isomer β-caryophyllene. Proven α-humulene emitters into the atmosphere are pine trees, orange orchards, marsh elders, tobacco, and sunflower fields. α-Humulene is contained in the essential oils of aromatic plants such as Salvia officinalis (common sage, culinary sage), Lindera strychnifolia (uyaku, or Japanese spicebush), ginseng species, Mentha spicata (up to 29.9% of its essential oil), the ginger family (Zingiberaceae), Litsea mushaensis, a Chinese laurel tree (10% of the leaf oil), and Cordia verbenacea (erva baleeira), a bush of coastal tropical South America (4% of the leaf extract, alongside 25% trans-caryophyllene). It is also one of the chemical compounds that contribute to the taste of the spice Persicaria odorata (Vietnamese coriander) and to the characteristic aroma of Cannabis.
Preparation and synthesis
Humulene is one of many sesquiterpenoids that are derived from farnesyl diphosphate (FPP). The formation of humulene from FPP is catalyzed by sesquiterpene synthase enzymes.
This biosynthesis can be mimicked in the laboratory by preparing an allylic stannane from farnesol (the Corey synthesis). There are diverse ways to synthesize humulene in the laboratory, involving differing closures of the C-C bond in the macrocycle. The McMurry synthesis uses a titanium-catalyzed carbonyl coupling reaction; the Takahashi synthesis uses intramolecular alkylation of an allyl halide by a protected cyanohydrin anion; the Suginome synthesis utilizes a geranyl fragment; and the de Groot synthesis prepares humulene from a crude distillate of eucalyptus oil. Humulene can also be synthesized using a combination of four-component assembly and palladium-mediated cyclization, outlined below. This synthesis is noteworthy for the simplicity of the C−C bond constructions and cyclization steps, which it is believed will prove advantageous in the synthesis of related polyterpenoids.
To understand humulene's regioselectivity (the fact that one of the two triply substituted C═C double bonds is significantly more reactive), its conformational space was explored computationally and four different conformations were identified.
Research
In laboratory studies, humulene is being studied for potential anti-inflammatory effects.
In 2015 researchers in Brazil identified α-humulene as an active contributor to the insect repellent properties of Commiphora leptophloeos leaf oil, specifically against “the yellow fever mosquito,” Aedes aegypti.
Atmospheric chemistry
α-Humulene is a biogenic volatile organic compound, emitted by numerous plants (see occurrence) with a relatively high potential for secondary organic aerosol formation in the atmosphere. It quickly reacts with ozone in sunlight (photooxidation) to form oxygenated products. α-Humulene has a very high reaction rate coefficient (1.17 × 10−14 cm3 molecule−1 s−1) compared to most monoterpenes. Since it contains three double bonds, first-, second- and third-generation products are possible that can each condense to form secondary organic aerosol. At typical tropospheric ozone mixing ratios of 30 ppb the lifetime of α-humulene is about 2 min, while the first- and second-generation products have average lifetimes of 1 h and 12.5 h, respectively.
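As a rough cross-check of the quoted 2-minute lifetime, the ozonolysis lifetime can be estimated as τ = 1/(k·[O3]) once the 30 ppb mixing ratio is converted to a number density. The short Python sketch below assumes a typical near-surface air number density of about 2.5 × 10^19 molecules cm−3; that value, and the script itself, are illustrative rather than taken from the sources above.
# Rough estimate of the ozonolysis lifetime of alpha-humulene (illustrative only).
k_o3 = 1.17e-14           # cm^3 molecule^-1 s^-1, rate coefficient quoted above
air_density = 2.5e19      # molecules cm^-3, assumed near-surface value
o3_mixing_ratio = 30e-9   # 30 ppb, as in the text

o3_conc = o3_mixing_ratio * air_density   # molecules cm^-3
lifetime_s = 1.0 / (k_o3 * o3_conc)       # pseudo-first-order lifetime, seconds

print(f"[O3] = {o3_conc:.2e} molecules cm^-3")
print(f"lifetime = {lifetime_s:.0f} s (about {lifetime_s / 60:.1f} min)")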
| Physical sciences | Terpenes and terpenoids | Chemistry |
1677334 | https://en.wikipedia.org/wiki/Rate%20equation | Rate equation | In chemistry, the rate equation (also known as the rate law or empirical differential rate equation) is an empirical differential mathematical expression for the reaction rate of a given reaction in terms of concentrations of chemical species and constant parameters (normally rate coefficients and partial orders of reaction) only. For many reactions, the initial rate is given by a power law such as
v0 = k[A]^x[B]^y
where [A] and [B] are the molar concentrations of the species A and B, usually in moles per liter (molarity, M). The exponents x and y are the partial orders of reaction for A and B, and the overall reaction order is the sum of the exponents. These are often positive integers, but they may also be zero, fractional, or negative. The order of reaction is a number which quantifies the degree to which the rate of a chemical reaction depends on the concentrations of the reactants. In other words, the order of reaction is the exponent to which the concentration of a particular reactant is raised. The constant k is the reaction rate constant or rate coefficient (occasionally also called the velocity constant or specific rate of reaction). Its value may depend on conditions such as temperature, ionic strength, surface area of an adsorbent, or light irradiation. If the reaction goes to completion, the rate equation for the reaction rate applies throughout the course of the reaction.
Elementary (single-step) reactions and reaction steps have reaction orders equal to the stoichiometric coefficients for each reactant. The overall reaction order, i.e. the sum of stoichiometric coefficients of reactants, is always equal to the molecularity of the elementary reaction. However, complex (multi-step) reactions may or may not have reaction orders equal to their stoichiometric coefficients. This implies that the order and the rate equation of a given reaction cannot be reliably deduced from the stoichiometry and must be determined experimentally, since an unknown reaction mechanism could be either elementary or complex. When the experimental rate equation has been determined, it is often of use for deduction of the reaction mechanism.
The rate equation of a reaction with an assumed multi-step mechanism can often be derived theoretically using quasi-steady state assumptions from the underlying elementary reactions, and compared with the experimental rate equation as a test of the assumed mechanism. The equation may involve a fractional order, and may depend on the concentration of an intermediate species.
A reaction can also have an undefined reaction order with respect to a reactant if the rate is not simply proportional to some power of the concentration of that reactant; for example, one cannot talk about reaction order in the rate equation for a bimolecular reaction between adsorbed molecules:
Definition
Consider a typical chemical reaction in which two reactants A and B combine to form a product C:
A + 2B -> 3C
This can also be written
0 = −A − 2B + 3C
The prefactors −1, −2 and 3 (with negative signs for reactants because they are consumed) are known as stoichiometric coefficients. One molecule of A combines with two of B to form 3 of C, so if we use the symbol [X] for the molar concentration of chemical X,
−d[A]/dt = −(1/2) d[B]/dt = (1/3) d[C]/dt.
If the reaction takes place in a closed system at constant temperature and volume, without a build-up of reaction intermediates, the reaction rate is defined as
v = (1/νi) d[Xi]/dt
where νi is the stoichiometric coefficient for chemical Xi, with a negative sign for a reactant.
The initial reaction rate v0 = v(t = 0) has some functional dependence on the concentrations of the reactants,
v0 = f([A], [B], ...)
and this dependence is known as the rate equation or rate law. This law generally cannot be deduced from the chemical equation and must be determined by experiment.
Power laws
A common form for the rate equation is a power law:
v = k[A]^x[B]^y
The constant k is called the rate constant. The exponents, which can be fractional, are called partial orders of reaction and their sum is the overall order of reaction.
In a dilute solution, an elementary reaction (one having a single step with a single transition state) is empirically found to obey the law of mass action. This predicts that the rate depends only on the concentrations of the reactants, raised to the powers of their stoichiometric coefficients.
The differential rate equation for an elementary reaction, using mathematical product notation, is:
−d[A]/dt = k Πi [Xi]^νi
Where:
−d[A]/dt is the rate of change of reactant concentration with respect to time.
k is the rate constant of the reaction.
Πi [Xi]^νi represents the concentrations of the reactants, raised to the powers of their stoichiometric coefficients νi and multiplied together.
Determination of reaction order
Method of initial rates
The natural logarithm of the power-law rate equation is
ln v0 = ln k + x ln[A] + y ln[B] + ...
This can be used to estimate the order of reaction of each reactant. For example, the initial rate can be measured in a series of experiments at different initial concentrations of reactant A with all other concentrations kept constant, so that
ln v0 = x ln[A] + constant.
The slope of a graph of ln v0 as a function of ln[A] then corresponds to the order x with respect to reactant A.
However, this method is not always reliable because
measurement of the initial rate requires accurate determination of small changes in concentration in short times (compared to the reaction half-life) and is sensitive to errors, and
the rate equation will not be completely determined if the rate also depends on substances not present at the beginning of the reaction, such as intermediates or products.
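As an illustration of the method of initial rates, the Python sketch below fits synthetic initial-rate data on a log–log scale and recovers the partial order from the slope; the reactant A, the rate constant, and the concentrations are all invented for the example.
import numpy as np

# Hypothetical initial-rate data: v0 = k * [A]^x with k = 0.05 and x = 2 (invented values).
conc_A = np.array([0.10, 0.20, 0.40, 0.80])   # mol/L, all other reactants in fixed excess
v0 = 0.05 * conc_A**2                         # "measured" initial rates (synthetic)

# ln v0 = x ln[A] + const, so the slope of the log-log fit is the partial order x.
slope, intercept = np.polyfit(np.log(conc_A), np.log(v0), 1)
print(f"estimated partial order x = {slope:.2f}")              # ~2.00
print(f"estimated rate constant k = {np.exp(intercept):.3f}")  # ~0.050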
Integral method
The tentative rate equation determined by the method of initial rates is therefore normally verified by comparing the concentrations measured over a longer time (several half-lives) with the integrated form of the rate equation; this assumes that the reaction goes to completion.
For example, the integrated rate law for a first-order reaction is
ln[A] = −kt + ln[A]0
where [A] is the concentration at time t and [A]0 is the initial concentration at zero time. The first-order rate law is confirmed if ln[A] is in fact a linear function of time. In this case the rate constant k is equal to the slope with sign reversed.
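A minimal sketch of the integral method for a first-order reaction: synthetic concentration data are generated, and the linearity of ln[A] against t is used to recover k as minus the slope. The rate constant and sampling times are hypothetical.
import numpy as np

# Synthetic first-order decay [A] = [A]0 exp(-k t) with assumed k and [A]0.
k_true, A0 = 0.12, 1.0                 # s^-1 and mol/L (hypothetical)
t = np.linspace(0.0, 30.0, 16)         # sampling times, s
A = A0 * np.exp(-k_true * t)           # "measured" concentrations

# First-order test: ln[A] vs t should be a straight line with slope -k.
slope, intercept = np.polyfit(t, np.log(A), 1)
print(f"fitted k = {-slope:.3f} s^-1, ln[A]0 = {intercept:.3f}")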
Method of flooding
The partial order with respect to a given reactant can be evaluated by the method of flooding (or of isolation) of Ostwald. In this method, the concentration of one reactant is measured with all other reactants in large excess so that their concentration remains essentially constant. For a reaction with rate law v = k[A]^x[B]^y, the partial order x with respect to A is determined using a large excess of B. In this case
v = k'[A]^x
with k' = k[B]^y,
and x may be determined by the integral method. The order y with respect to B under the same conditions (with A in excess) is determined by a series of similar experiments with a range of initial concentration [B]0 so that the variation of k' can be measured.
Zero order
For zero-order reactions, the reaction rate is independent of the concentration of a reactant, so that changing its concentration has no effect on the rate of the reaction. Thus, the concentration changes linearly with time. The rate law for a zero-order reaction is
v = k.
The unit of k is mol dm−3 s−1. This may occur when there is a bottleneck which limits the number of reactant molecules that can react at the same time, for example if the reaction requires contact with an enzyme or a catalytic surface.
Many enzyme-catalyzed reactions are zero order, provided that the reactant concentration is much greater than the enzyme concentration which controls the rate, so that the enzyme is saturated. For example, the biological oxidation of ethanol to acetaldehyde by the enzyme liver alcohol dehydrogenase (LADH) is zero order in ethanol.
Similarly reactions with heterogeneous catalysis can be zero order if the catalytic surface is saturated. For example, the decomposition of phosphine () on a hot tungsten surface at high pressure is zero order in phosphine, which decomposes at a constant rate.
In homogeneous catalysis zero order behavior can come about from reversible inhibition. For example, ring-opening metathesis polymerization using third-generation Grubbs catalyst exhibits zero order behavior in catalyst due to the reversible inhibition that occurs between pyridine and the ruthenium center.
First order
A first order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but their concentration has no effect on the rate. The rate law for a first order reaction is
v = −d[A]/dt = k[A].
The unit of k is s−1. Although not affecting the above math, the majority of first order reactions proceed via intermolecular collisions. Such collisions, which contribute the energy to the reactant, are necessarily second order. However according to the Lindemann mechanism the reaction consists of two steps: the bimolecular collision which is second order and the reaction of the energized molecule which is unimolecular and first order. The rate of the overall reaction depends on the slowest step, so the overall reaction will be first order when the reaction of the energized reactant is slower than the collision step.
The half-life is independent of the starting concentration and is given by t1/2 = ln(2)/k. The mean lifetime is τ = 1/k.
Examples of such reactions are:
2N2O5 -> 4NO2 + O2
[CoCl(NH3)5]^2+ + H2O -> [Co(H2O)(NH3)5]^3+ + Cl-
H2O2 -> H2O + 1/2O2
In organic chemistry, the class of SN1 (nucleophilic substitution unimolecular) reactions consists of first-order reactions. For example, in the reaction of aryldiazonium ions with nucleophiles in aqueous solution, ArN2+ + X- -> ArX + N2, the rate equation is v = k[ArN2+], where Ar indicates an aryl group.
Second order
A reaction is said to be second order when the overall order is two. The rate of a second-order reaction may be proportional to one concentration squared, v = k[A]^2, or (more commonly) to the product of two concentrations, v = k[A][B]. As an example of the first type, the reaction NO2 + CO -> NO + CO2 is second-order in the reactant NO2 and zero order in the reactant CO. The observed rate is given by v = k[NO2]^2 and is independent of the concentration of CO.
For the rate proportional to a single concentration squared, the time dependence of the concentration is given by
1/[A] = 1/[A]0 + kt.
The unit of k is mol−1 dm3 s−1.
The time dependence for a rate proportional to two unequal concentrations is
ln([A]/[B]) = ln([A]0/[B]0) + ([A]0 − [B]0)kt;
if the concentrations are equal, they satisfy the previous equation.
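A short sketch of the single-reactant second-order case: using the integrated law 1/[A] = 1/[A]0 + kt, a plot of 1/[A] against t is linear with slope k and intercept 1/[A]0. The values of k and [A]0 below are assumed for illustration.
import numpy as np

# Second-order decay in a single reactant, d[A]/dt = -k [A]^2 (hypothetical k and [A]0).
k, A0 = 0.5, 0.2                 # L mol^-1 s^-1 and mol/L
t = np.linspace(0.0, 100.0, 11)  # s

A = 1.0 / (1.0 / A0 + k * t)     # integrated rate law: 1/[A] = 1/[A]0 + k t

# Linearity check: 1/[A] against t has slope k and intercept 1/[A]0.
slope, intercept = np.polyfit(t, 1.0 / A, 1)
print(f"slope (k) = {slope:.3f}, intercept (1/[A]0) = {intercept:.3f}")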
The second type includes nucleophilic addition-elimination reactions, such as the alkaline hydrolysis of ethyl acetate:
CH3COOC2H5 + OH- -> CH3COO- + C2H5OH
This reaction is first-order in each reactant and second-order overall:
v = k[CH3COOC2H5][OH-]
If the same hydrolysis reaction is catalyzed by imidazole, the rate equation becomes
v = k[imidazole][CH3COOC2H5].
The rate is first-order in one reactant (ethyl acetate), and also first-order in imidazole, which as a catalyst does not appear in the overall chemical equation.
Another well-known class of second-order reactions are the SN2 (bimolecular nucleophilic substitution) reactions, such as the reaction of n-butyl bromide with sodium iodide in acetone:
CH3CH2CH2CH2Br + NaI -> CH3CH2CH2CH2I + NaBr(v)
This same compound can be made to undergo a bimolecular (E2) elimination reaction, another common type of second-order reaction, if the sodium iodide and acetone are replaced with sodium tert-butoxide as the salt and tert-butanol as the solvent:
CH3CH2CH2CH2Br + NaOt-Bu -> CH3CH2CH=CH2 + NaBr + HOt-Bu
Pseudo-first order
If the concentration of a reactant remains constant (because it is a catalyst, or because it is in great excess with respect to the other reactants), its concentration can be included in the rate constant, leading to a pseudo–first-order (or occasionally pseudo–second-order) rate equation. For a typical second-order reaction with rate equation v = k[A][B], if the concentration of reactant B is constant then v = k[A][B] = k'[A], where the pseudo–first-order rate constant is k' = k[B]. The second-order rate equation has been reduced to a pseudo–first-order rate equation, which makes the treatment to obtain an integrated rate equation much easier.
One way to obtain a pseudo-first order reaction is to use a large excess of one reactant (say, [B]≫[A]) so that, as the reaction progresses, only a small fraction of the reactant in excess (B) is consumed, and its concentration can be considered to stay constant. For example, the hydrolysis of esters by dilute mineral acids follows pseudo-first order kinetics, where the concentration of water is constant because it is present in large excess:
CH3COOCH3 + H2O -> CH3COOH + CH3OH
The hydrolysis of sucrose (C12H22O11) in acid solution is often cited as a first-order reaction with rate v = k[C12H22O11]. The true rate equation is third-order, v = k[C12H22O11][H+][H2O]; however, the concentrations of both the catalyst H+ and the solvent H2O are normally constant, so that the reaction is pseudo–first-order.
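A small numerical sketch of the pseudo–first-order reduction: with B present in large excess (here, water as the solvent), the effective constant is k' = k[B]0 and A decays exponentially with half-life ln(2)/k'. The rate constant and concentrations are invented.
import numpy as np

# Second-order reaction A + B -> products with B in large excess (hypothetical numbers).
k = 2.0e-3    # L mol^-1 s^-1, true second-order rate constant (assumed)
B0 = 55.0     # mol/L, e.g. water as the solvent, effectively constant
A0 = 0.01     # mol/L, limiting reactant

k_pseudo = k * B0                            # pseudo-first-order constant k' = k[B]0, s^-1
t_half = np.log(2) / k_pseudo                # effective half-life of A
A_after_10s = A0 * np.exp(-k_pseudo * 10.0)  # [A] after 10 s of pseudo-first-order decay

print(f"k' = {k_pseudo:.3f} s^-1, half-life = {t_half:.1f} s, [A](10 s) = {A_after_10s:.4f} mol/L")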
Summary for reaction orders 0, 1, 2, and n
Elementary reaction steps with order 3 (called ternary reactions) are rare and unlikely to occur. However, overall reactions composed of several elementary steps can, of course, be of any (including non-integer) order.
Here [A] stands for concentration in molarity (mol · L−1), t for time, and k for the reaction rate constant. The half-life of a first-order reaction is often expressed as t1/2 = 0.693/k (as ln(2) ≈ 0.693).
Fractional order
In fractional order reactions, the order is a non-integer, which often indicates a chemical chain reaction or other complex reaction mechanism. For example, the pyrolysis of acetaldehyde (CH3CHO) into methane and carbon monoxide proceeds with an order of 1.5 with respect to acetaldehyde: v = k[CH3CHO]^(3/2). The decomposition of phosgene (COCl2) to carbon monoxide and chlorine has order 1 with respect to phosgene itself and order 0.5 with respect to chlorine: v = k[COCl2][Cl2]^(1/2).
The order of a chain reaction can be rationalized using the steady state approximation for the concentration of reactive intermediates such as free radicals. For the pyrolysis of acetaldehyde, the Rice-Herzfeld mechanism is
Initiation CH3CHO -> .CH3 + .CHO
Propagation .CH3 + CH3CHO -> CH3CO. + CH4
CH3CO. -> .CH3 + CO
Termination 2 .CH3 -> C2H6
where • denotes a free radical. To simplify the theory, the reactions of the •CHO radical to form a second •CH3 are ignored.
In the steady state, the rates of formation and destruction of methyl radicals are equal, so that
ki[CH3CHO] = kt[•CH3]^2,
so that the concentration of methyl radical satisfies
[•CH3] ∝ [CH3CHO]^(1/2).
The reaction rate equals the rate of the propagation steps which form the main reaction products CH4 and CO:
v = kp[•CH3][CH3CHO] ∝ [CH3CHO]^(3/2),
in agreement with the experimental order of 3/2.
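The steady-state algebra above can be reproduced symbolically; the SymPy sketch below uses k_i, k_p and k_t as labels for the initiation, propagation and termination rate constants (these names are chosen for the sketch, not taken from the text) and recovers the 3/2-order rate law.
import sympy as sp

# [CH3CHO], [•CH3] and the elementary rate constants; k_i, k_p, k_t are labels for
# initiation, propagation and termination chosen for this sketch.
ach, ch3 = sp.symbols("CH3CHO CH3", positive=True)
ki, kp, kt = sp.symbols("k_i k_p k_t", positive=True)

# Steady state for the methyl radical: rate of formation equals rate of destruction.
roots = sp.solve(sp.Eq(ki * ach, kt * ch3**2), ch3)
ch3_ss = [r for r in roots if r.is_positive][0]   # keep the physically meaningful root

# The overall rate is set by the propagation step •CH3 + CH3CHO -> CH3CO• + CH4.
rate = sp.simplify(kp * ch3_ss * ach)
print(rate)   # proportional to CH3CHO**(3/2): order 3/2 in acetaldehyde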
Complex laws
Mixed order
More complex rate laws have been described as being mixed order if they approximate to the laws for more than one order at different concentrations of the chemical species involved. For example, a rate law of the form v = k1[A] + k2[A]^2 represents concurrent first order and second order reactions (or more often concurrent pseudo-first order and second order reactions), and can be described as mixed first and second order. For sufficiently large values of [A] such a reaction will approximate second order kinetics, but for smaller [A] the kinetics will approximate first order (or pseudo-first order). As the reaction progresses, the reaction can change from second order to first order as reactant is consumed.
Another type of mixed-order rate law has a denominator of two or more terms, often because the identity of the rate-determining step depends on the values of the concentrations. An example is the oxidation of an alcohol to a ketone by hexacyanoferrate (III) ion [Fe(CN)63−] with ruthenate (VI) ion (RuO42−) as catalyst. For this reaction, the rate of disappearance of hexacyanoferrate (III) is
This is zero-order with respect to hexacyanoferrate (III) at the onset of the reaction (when its concentration is high and the ruthenium catalyst is quickly regenerated), but changes to first-order when its concentration decreases and the regeneration of catalyst becomes rate-determining.
Notable mechanisms with mixed-order rate laws with two-term denominators include:
Michaelis–Menten kinetics for enzyme-catalysis: first-order in substrate (second-order overall) at low substrate concentrations, zero order in substrate (first-order overall) at higher substrate concentrations; and
the Lindemann mechanism for unimolecular reactions: second-order at low pressures, first-order at high pressures.
Negative order
A reaction rate can have a negative partial order with respect to a substance. For example, the conversion of ozone (O3) to oxygen follows the rate equation v = k[O3]^2[O2]^(−1) in an excess of oxygen. This corresponds to second order in ozone and order (−1) with respect to oxygen.
When a partial order is negative, the overall order is usually considered as undefined. In the above example, for instance, the reaction is not described as first order even though the sum of the partial orders is 2 + (−1) = 1, because the rate equation is more complex than that of a simple first-order reaction.
Opposed reactions
A pair of forward and reverse reactions may occur simultaneously with comparable speeds. For example, A and B react into products P and Q and vice versa (a, b, p, and q are the stoichiometric coefficients):
aA + bB <=> pP + qQ
The reaction rate expression for the above reactions (assuming each one is elementary) can be written as:
v = k1[A]^a[B]^b − k−1[P]^p[Q]^q
where: k1 is the rate coefficient for the reaction that consumes A and B; k−1 is the rate coefficient for the backwards reaction, which consumes P and Q and produces A and B.
The constants k1 and k−1 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set v = 0 in balance):
k1[A]^a[B]^b = k−1[P]^p[Q]^q
K = k1/k−1 = [P]^p[Q]^q / ([A]^a[B]^b)
Simple example
In a simple equilibrium between two species:
A <=> P
where the reaction starts with an initial concentration of reactant A, [A]0, and an initial concentration of 0 for product P at time t=0.
Then the equilibrium constant K is expressed as:
K = [P]e / [A]e
where [A]e and [P]e are the concentrations of A and P at equilibrium, respectively.
The concentration of A at time t, [A]t, is related to the concentration of P at time t, [P]t, by the equation of the equilibrium reaction:
[A]t = [A]0 − [P]t
The term [P]0 is not present because, in this simple example, the initial concentration of P is 0.
This applies even when time t is at infinity; i.e., equilibrium has been reached:
[A]e = [A]0 − [P]e
then it follows, by the definition of K, that
[P]e = (K/(1 + K)) [A]0
and, therefore,
[A]e = (1/(1 + K)) [A]0.
These equations allow us to uncouple the system of differential equations, and allow us to solve for the concentration of A alone.
The reaction rate expression was given previously as:
v = k1[A]^a[B]^b − k−1[P]^p[Q]^q
For A <=> P this is simply
v = −d[A]/dt = k1[A]t − k−1[P]t
The derivative is negative because this is the rate of the reaction going from A to P, and therefore the concentration of A is decreasing. To simplify notation, let x be [A]t, the concentration of A at time t. Let xe be the concentration of A at equilibrium. Then:
−dx/dt = k1x − k−1([A]0 − x)
Since:
k1xe = k−1([A]0 − xe)
the reaction rate becomes:
dx/dt = −(k1 + k−1)(x − xe)
which results in:
x − xe = ([A]0 − xe) e^(−(k1 + k−1)t), that is, −ln([A]t − [A]e) = (k1 + k−1)t − ln([A]0 − [A]e).
A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope k1 + k−1. By measurement of [A]e and [P]e the values of K and the two reaction rate constants will be known.
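A brief numerical sketch of that procedure: the equilibrium concentrations fix K = [P]e/[A]e, the relaxation slope gives k1 + k−1, and K = k1/k−1 then separates the two constants. The measured values below are invented for illustration.
# Hypothetical measurements for A <=> P (all numbers invented for illustration).
A_eq, P_eq = 0.25, 0.75    # mol/L at equilibrium
slope = 0.040              # s^-1, slope of -ln([A]t - [A]e) versus t, i.e. k1 + k-1

K = P_eq / A_eq            # equilibrium constant K = k1 / k-1
k_rev = slope / (1.0 + K)  # k-1
k_fwd = K * k_rev          # k1

print(f"K = {K:.2f}, k1 = {k_fwd:.3f} s^-1, k-1 = {k_rev:.3f} s^-1")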
Generalization of simple example
If the concentration at the time t = 0 is different from above, the simplifications above are invalid, and a system of differential equations must be solved. However, this system can also be solved exactly to yield the following generalized expressions:
When the equilibrium constant is close to unity and the reaction rates are very fast, for instance in the conformational analysis of molecules, other methods are required for the determination of rate constants, for instance by complete lineshape analysis in NMR spectroscopy.
Consecutive reactions
If the rate constants for the following reaction are k1 and k2: A -> B -> C, then the rate equations are:
For reactant A: −d[A]/dt = k1[A]
For reactant B: d[B]/dt = k1[A] − k2[B]
For product C: d[C]/dt = k2[B]
With the individual concentrations scaled by the total population of reactants to become probabilities, linear systems of differential equations such as these can be formulated as a master equation. The differential equations can be solved analytically, and for a system that starts as pure A (with [B]0 = [C]0 = 0 and k1 ≠ k2) the integrated rate equations are
[A] = [A]0 e^(−k1 t)
[B] = [A]0 (k1/(k2 − k1)) (e^(−k1 t) − e^(−k2 t))
[C] = [A]0 (1 + (k1 e^(−k2 t) − k2 e^(−k1 t))/(k2 − k1))
The steady state approximation leads to very similar results in an easier way.
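The integrated expressions for A -> B -> C starting from pure A (and with k1 ≠ k2) can be evaluated directly; the Python sketch below does so for assumed rate constants and checks that the total amount of material is conserved.
import numpy as np

# A -> B -> C with assumed rate constants, starting from pure A ([B]0 = [C]0 = 0).
k1, k2, A0 = 0.3, 0.1, 1.0     # s^-1, s^-1, mol/L; the formulas below need k1 != k2
t = np.linspace(0.0, 40.0, 9)

A = A0 * np.exp(-k1 * t)
B = A0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
C = A0 * (1.0 + (k1 * np.exp(-k2 * t) - k2 * np.exp(-k1 * t)) / (k2 - k1))

assert np.allclose(A + B + C, A0)   # mass balance: total material is conserved
print(np.round(B, 3))               # the intermediate B rises and then decays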
Parallel or competitive reactions
When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place.
Two first order reactions
A -> B and A -> C, with constants k1 and k2 and rate equations −d[A]/dt = (k1 + k2)[A]; d[B]/dt = k1[A]; and d[C]/dt = k2[A].
The integrated rate equations are then [A] = [A]0 e^(−(k1 + k2)t); [B] = (k1/(k1 + k2)) [A]0 (1 − e^(−(k1 + k2)t)); and
[C] = (k2/(k1 + k2)) [A]0 (1 − e^(−(k1 + k2)t)).
One important relationship in this case is [B]t/[C]t = k1/k2.
One first order and one second order reaction
This can be the case when studying a bimolecular reaction and a simultaneous hydrolysis (which can be treated as pseudo order one) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction. For example, A reacts with R to give our product C, but meanwhile the hydrolysis reaction takes away an amount of A to give B, a byproduct: A + H2O -> B and A + R -> C . The rate equations are: d[B]/dt = k1[A] and d[C]/dt = k2[A][R], where k1 is the pseudo first order constant.
The integrated rate equation for the main product [C] is , which is equivalent to . Concentration of B is related to that of C through
The integrated equations were obtained analytically under a simplifying assumption; therefore, the previous equation for [C] can only be used for low concentrations of [C] compared to [A]0.
Stoichiometric reaction networks
The most general description of a chemical reaction network considers a number N of distinct chemical species reacting via R reactions.
The chemical equation of the j-th reaction can then be written in the generic form
s1j X1 + s2j X2 + ... + sNj XN -> r1j X1 + r2j X2 + ... + rNj XN,
which is often written in the equivalent form
Σi sij Xi -> Σi rij Xi.
Here
j is the reaction index running from 1 to R,
Xi denotes the i-th chemical species,
kj is the rate constant of the j-th reaction and
sij and rij are the stoichiometric coefficients of reactants and products, respectively.
The rate of such a reaction can be inferred by the law of mass action
fj([X]) = kj Πi [Xi]^sij
which denotes the flux of molecules per unit time and unit volume. Here [X] = ([X1], [X2], ..., [XN]) is the vector of concentrations. This definition includes the elementary reactions:
zero order reactions
for which sij = 0 for all i,
first order reactions
for which sij = 1 for a single i,
second order reactions
for which sij = 1 for exactly two i; that is, a bimolecular reaction, or sij = 2 for a single i; that is, a dimerization reaction.
Each of these is discussed in detail below. One can define the stoichiometric matrix
Sij = rij − sij,
denoting the net extent of molecules of Xi in reaction j. The reaction rate equations can then be written in the general form
d[Xi]/dt = Σj Sij fj([X]).
This is the product of the stoichiometric matrix and the vector of reaction rate functions.
Particular simple solutions exist in equilibrium, d[Xi]/dt = 0, for systems composed of merely reversible reactions. In this case, the rates of the forward and backward reactions are equal, a principle called detailed balance. Detailed balance is a property of the stoichiometric matrix S alone and does not depend on the particular form of the rate functions fj. All other cases where detailed balance is violated are commonly studied by flux balance analysis, which has been developed to understand metabolic pathways.
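A compact sketch of the general form d[Xi]/dt = Σj Sij fj([X]) for a toy two-reaction, three-species network; the species, stoichiometry and rate constants are chosen purely for illustration.
import numpy as np

# Toy network with N = 3 species (X1, X2, X3) and R = 2 reactions (all values invented):
#   reaction 1: X1 + X2 -> X3      (rate constant 0.5)
#   reaction 2: X3      -> 2 X2    (rate constant 0.1)
s = np.array([[1, 0],    # reactant coefficients s_ij (species i, reaction j)
              [1, 0],
              [0, 1]])
r = np.array([[0, 0],    # product coefficients r_ij
              [0, 2],
              [1, 0]])
S = r - s                # stoichiometric matrix S_ij = r_ij - s_ij
k = np.array([0.5, 0.1]) # rate constants k_j

def rhs(x):
    """Mass-action right-hand side d[X]/dt = S @ f([X]) with f_j = k_j * prod_i x_i**s_ij."""
    flux = k * np.prod(x[:, None] ** s, axis=0)
    return S @ flux

print(rhs(np.array([1.0, 0.5, 0.2])))   # instantaneous rates of change of X1, X2, X3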
General dynamics of unimolecular conversion
For a general unimolecular reaction involving interconversion of N different species, whose concentrations at time t are denoted by x1(t) through xN(t), an analytic form for the time-evolution of the species can be found. Let the rate constant of conversion from species i to species j be denoted as kij, and construct a rate-constant matrix K whose entries are the kij.
Also, let x(t) = (x1(t), x2(t), ..., xN(t)) be the vector of concentrations as a function of time.
Let 1 be the vector of ones.
Let I be the N × N identity matrix.
Let diag(·) be the function that takes a vector and constructs a diagonal matrix whose on-diagonal entries are those of the vector.
Let L^(−1) be the inverse Laplace transform from s to t.
Then the time-evolved state is given by
x(t) = L^(−1)[(sI + diag(K1) − K^T)^(−1)] x(0),
thus providing the relation between the initial conditions of the system and its state at time t.
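Equivalently to the Laplace-transform expression, the same linear system can be propagated with a matrix exponential. The Python sketch below builds the generator from a hypothetical rate-constant matrix K (entry K[i, j] taken as the rate of conversion from species i to species j) and evolves an assumed initial state.
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-species unimolecular network; K[i, j] is taken as the rate constant
# for conversion of species i into species j (s^-1), values invented for illustration.
K = np.array([[0.0, 0.4, 0.1],
              [0.2, 0.0, 0.3],
              [0.0, 0.1, 0.0]])

# Generator A of dx/dt = A x: gains from the other species minus the total loss rate.
A = K.T - np.diag(K.sum(axis=1))

x0 = np.array([1.0, 0.0, 0.0])   # assumed initial concentrations
x_t = expm(A * 5.0) @ x0         # state after t = 5 s

print(np.round(x_t, 4), "total =", round(float(x_t.sum()), 6))   # total is conserved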
| Physical sciences | Kinetics | Chemistry |
1677930 | https://en.wikipedia.org/wiki/Seed%20dispersal | Seed dispersal | In spermatophyte plants, seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their seeds, including both abiotic vectors, such as the wind, and living (biotic) vectors such as birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time.
The patterns of seed dispersal are determined in large part by the dispersal mechanism and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus.
These modes are typically inferred based on adaptations, such as wings or fleshy fruit. However, this simplified view may ignore complexity in dispersal. Plants can disperse via modes without possessing the typical associated adaptations and plant traits may be multifunctional.
Benefits
Seed dispersal is likely to have several benefits for different plant species. Seed survival is often higher away from the parent plant. This higher survival may result from the actions of density-dependent seed and seedling predators and pathogens, which often target the high concentrations of seeds beneath adults. Competition with adult plants may also be lower when seeds are transported away from their parent.
Seed dispersal also allows plants to reach specific habitats that are favorable for survival, a hypothesis known as directed dispersal. For example, Ocotea endresiana (Lauraceae) is a tree species from Latin America which is dispersed by several species of birds, including the three-wattled bellbird. Male bellbirds perch on dead trees in order to attract mates, and often defecate seeds beneath these perches where the seeds have a high chance of survival because of high light conditions and escape from fungal pathogens.
In the case of fleshy-fruited plants, seed-dispersal in animal guts (endozoochory) often enhances the amount, the speed, and the asynchrony of germination, which can have important plant benefits.
Seeds dispersed by ants (myrmecochory) are not only dispersed short distances but are also buried underground by the ants. These seeds can thus avoid adverse environmental effects such as fire or drought, reach nutrient-rich microsites and survive longer than other seeds. These features are peculiar to myrmecochory, which may thus provide additional benefits not present in other dispersal modes.
Seed dispersal may also allow plants to colonize vacant habitats and even new geographic regions. Dispersal distances and deposition sites depend on the movement range of the disperser, and longer dispersal distances are sometimes accomplished through diplochory, the sequential dispersal by two or more different dispersal mechanisms. In fact, recent evidence suggests that the majority of seed dispersal events involves more than one dispersal phase.
Types
Seed dispersal is sometimes split into autochory (when dispersal is attained using the plant's own means) and allochory (when obtained through external means).
Long distance
Long-distance seed dispersal (LDD) is a type of spatial dispersal that is currently defined by two forms, proportional and actual distance. A plant's fitness and survival may heavily depend on this method of seed dispersal depending on certain environmental factors. The first form of LDD, proportional distance, measures the percentage of seeds (1% out of total number of seeds produced) that travel the farthest distance out of a 99% probability distribution. The proportional definition of LDD is in actuality a descriptor for more extreme dispersal events. An example of LDD would be that of a plant developing a specific dispersal vector or morphology in order to allow for the dispersal of its seeds over a great distance. The actual or absolute method identifies LDD as a literal distance. It classifies 1 km as the threshold distance for seed dispersal. Here, threshold means the minimum distance a plant can disperse its seeds and have it still count as LDD. There is a second, unmeasurable, form of LDD besides proportional and actual. This is known as the non-standard form. Non-standard LDD is when seed dispersal occurs in an unusual and difficult-to-predict manner. An example would be a rare or unique incident in which a normally-lemur-dependent deciduous tree of Madagascar was to have seeds transported to the coastline of South Africa via attachment to a mermaid purse (egg case) laid by a shark or skate. A driving factor for the evolutionary significance of LDD is that it increases plant fitness by decreasing neighboring plant competition for offspring. However, it is still unclear today as to how specific traits, conditions and trade-offs (particularly within short seed dispersal) affect LDD evolution.
Autochory
Autochorous plants disperse their seed without any help from an external vector. This limits considerably the distance they can disperse their seed.
Two other types of autochory not described in detail here are blastochory, where the stem of the plant crawls along the ground to deposit its seed far from the base of the plant; and herpochory, where the seed crawls by means of trichomes or hygroscopic appendages (awns) and changes in humidity.
Gravity
Barochory or the plant use of gravity for dispersal is a simple means of achieving seed dispersal. The effect of gravity on heavier fruits causes them to fall from the plant when ripe. Fruits exhibiting this type of dispersal include apples, coconuts and passionfruit and those with harder shells (which often roll away from the plant to gain more distance). Gravity dispersal also allows for later transmission by water or animal.
Ballistic dispersal
Ballochory is a type of dispersal where the seed is forcefully ejected by explosive dehiscence of the fruit. Often the force that generates the explosion results from turgor pressure within the fruit or due to internal hygroscopic tensions within the fruit. Some examples of plants which disperse their seeds autochorously include: Arceuthobium spp., Cardamine hirsuta, Ecballium elaterium, Euphorbia heterophylla, Geranium spp., Impatiens spp., Sucrea spp, Raddia spp. and others. An exceptional example of ballochory is Hura crepitans—this plant is commonly called the dynamite tree due to the sound of the fruit exploding. The explosions are powerful enough to throw the seed up to 100 meters.
Witch hazel uses ballistic dispersal without explosive mechanisms by simply squeezing the seeds out at approx. 45 km/h (28 mph).
Allochory
Allochory refers to any of many types of seed dispersal where a vector or secondary agent is used to disperse seeds. These vectors may include wind, water, animals or others.
Wind
Wind dispersal (anemochory) is one of the more primitive means of dispersal. Wind dispersal can take on one of two primary forms: seeds or fruits can float on the breeze or, alternatively, they can flutter to the ground. The classic examples of these dispersal mechanisms, in the temperate northern hemisphere, include dandelions, which have a feathery pappus attached to their fruits (achenes) and can be dispersed long distances, and maples, which have winged fruits (samaras) that flutter to the ground.
An important constraint on wind dispersal is the need for abundant seed production to maximize the likelihood of a seed landing in a site suitable for germination. Some wind-dispersed plants, such as the dandelion, can adjust their morphology in order to increase or decrease the rate of diaspore detachment. There are also strong evolutionary constraints on this dispersal mechanism. For instance, Cody and Overton (1996) found that species in the Asteraceae on islands tended to have reduced dispersal capabilities (i.e., larger seed mass and smaller pappus) relative to the same species on the mainland. Also, Helonias bullata, a species of perennial herb native to the United States, evolved to utilize wind dispersal as the primary seed dispersal mechanism; however, limited wind in its habitat prevents the seeds from successfully dispersing away from its parents, resulting in clusters of population. Reliance on wind dispersal is common among many weedy or ruderal species. Unusual mechanisms of wind dispersal include tumbleweeds, where the entire plant (except for the roots) is blown by the wind. Physalis fruits, when not fully ripe, may sometimes be dispersed by wind due to the space between the fruit and the covering calyx, which acts as an air bladder.
Water
Many aquatic (water dwelling) and some terrestrial (land dwelling) species use hydrochory, or seed dispersal through water. Seeds can travel for extremely long distances, depending on the specific mode of water dispersal; this especially applies to fruits which are waterproof and float on water.
The water lily is an example of such a plant. Water lilies' flowers make a fruit that floats in the water for a while and then drops down to the bottom to take root on the floor of the pond.
The seeds of palm trees can also be dispersed by water. If they grow near oceans, the seeds can be transported by ocean currents over long distances, allowing the seeds to be dispersed as far as other continents.
Mangrove trees grow directly out of the water; when their seeds are ripe they fall from the tree and grow roots as soon as they touch any kind of soil. During low tide, they might fall in soil instead of water and start growing right where they fell. If the water level is high, however, they can be carried far away from where they fell. Mangrove trees often make little islands as dirt and detritus collect in their roots, making little bodies of land.
Animals: epi- and endozoochory
Animals can disperse plant seeds in several ways, all named zoochory. Seeds can be transported on the outside of vertebrate animals (mostly mammals), a process known as epizoochory. Plant species transported externally by animals can have a variety of adaptations for dispersal, including adhesive mucus, and a variety of hooks, spines and barbs. A typical example of an epizoochorous plant is Trifolium angustifolium, a species of Old World clover which adheres to animal fur by means of stiff hairs covering the seed. Epizoochorous plants tend to be herbaceous plants, with many representative species in the families Apiaceae and Asteraceae. However, epizoochory is a relatively rare dispersal syndrome for plants as a whole; the percentage of plant species with seeds adapted for transport on the outside of animals is estimated to be below 5%. Nevertheless, epizoochorous transport can be highly effective if the seeds attach to animals that travel widely. This form of seed dispersal has been implicated in rapid plant migration and the spread of invasive species.
Seed dispersal via ingestion and defecation by vertebrate animals (mostly birds and mammals), or endozoochory, is the dispersal mechanism for most tree species. Endozoochory is generally a coevolved mutualistic relationship in which a plant surrounds seeds with an edible, nutritious fruit as a good food resource for animals that consume it. Such plants may advertise the presence of food resource by using colour. Birds and mammals are the most important seed dispersers, but a wide variety of other animals, including turtles, fish, and insects (e.g. tree wētā and scree wētā), can transport viable seeds. The exact percentage of tree species dispersed by endozoochory varies between habitats, but can range to over 90% in some tropical rainforests. Seed dispersal by animals in tropical rainforests has received much attention, and this interaction is considered an important force shaping the ecology and evolution of vertebrate and tree populations. In the tropics, large-animal seed dispersers (such as tapirs, chimpanzees, black-and-white colobus, toucans and hornbills) may disperse large seeds that have few other seed dispersal agents. The extinction of these large frugivores from poaching and habitat loss may have negative effects on the tree populations that depend on them for seed dispersal and reduce genetic diversity among trees. Seed dispersal through endozoochory can lead to quick spread of invasive species, such as in the case of prickly acacia in Australia. A variation of endozoochory is regurgitation of seeds rather than their passage in faeces after passing through the entire digestive tract.
Seed dispersal by ants (myrmecochory) is a dispersal mechanism of many shrubs of the southern hemisphere or understorey herbs of the northern hemisphere. Seeds of myrmecochorous plants have a lipid-rich attachment called the elaiosome, which attracts ants. Ants carry such seeds into their colonies, feed the elaiosome to their larvae and discard the otherwise intact seed in an underground chamber. Myrmecochory is thus a coevolved mutualistic relationship between plants and seed-disperser ants. Myrmecochory has independently evolved at least 100 times in flowering plants and is estimated to be present in at least 11 000 species, but likely up to 23 000 (which is 9% of all species of flowering plants). Myrmecochorous plants are most frequent in the fynbos vegetation of the Cape Floristic Region of South Africa, the kwongan vegetation and other dry habitat types of Australia, dry forests and grasslands of the Mediterranean region and northern temperate forests of western Eurasia and eastern North America, where up to 30–40% of understorey herbs are myrmecochorous. Seed dispersal by ants is a mutualistic relationship and benefits both the ant and the plant.
Seed dispersal by bees (melittochory) is an unusual dispersal mechanism for a small number of tropical plants. As of 2023 it has only been documented in five plant species including Corymbia torelliana, Coussapoa asperifolia subsp. magnifolia, Zygia racemosa, Vanilla odorata, and Vanilla planifolia. The first three are tropical trees and the last two are tropical vines.
Seed predators, which include many rodents (such as squirrels) and some birds (such as jays) may also disperse seeds by hoarding the seeds in hidden caches. The seeds in caches are usually well-protected from other seed predators and if left uneaten will grow into new plants. Rodents may also disperse seeds when the presence of secondary metabolites in ripe fruits causes them to spit out certain seeds rather than consuming them. Finally, seeds may be secondarily dispersed from seeds deposited by primary animal dispersers, a process known as diplochory. For example, dung beetles are known to disperse seeds from clumps of feces in the process of collecting dung to feed their larvae.
Other types of zoochory are chiropterochory (by bats), malacochory (by molluscs, mainly terrestrial snails), ornithochory (by birds) and saurochory (by non-bird sauropsids). Zoochory can occur in more than one phase, for example through diploendozoochory, where a primary disperser (an animal that ate a seed) along with the seeds it is carrying is eaten by a predator that then carries the seed further before depositing it.
Humans
Dispersal by humans (anthropochory) used to be seen as a form of dispersal by animals. Its most widespread and intense cases account for the planting of much of the land area on the planet, through agriculture. In this case, human societies form a long-term relationship with plant species, and create conditions for their growth.
Recent research points out that human dispersers differ from animal dispersers by having a much higher mobility, based on the technical means of human transport. On the one hand, dispersal by humans also acts on smaller, regional scales and drives the dynamics of existing biological populations. On the other hand, dispersal by humans may act on large geographical scales and lead to the spread of invasive species.
Humans may disperse seeds by various means, and some surprisingly long distances have been repeatedly measured. Examples are: dispersal on human clothes (up to 250 m), on shoes (up to 5 km), or by cars (regularly ~ 250 m, single cases > 100 km). Humans can unintentionally transport seeds by car, which can carry the seeds much greater distances than other conventional methods of dispersal. Soil on cars can contain viable seeds. A study by Dunmail J. Hodkinson and Ken Thompson found that the most common seeds carried by vehicle were broadleaf plantain (Plantago major), annual meadow grass (Poa annua), rough meadow grass (Poa trivialis), stinging nettle (Urtica dioica) and wild chamomile (Matricaria discoidea).
Deliberate seed dispersal also occurs as seed bombing. This has risks, as it may introduce genetically unsuitable plants to new environments.
Consequences
Seed dispersal has many consequences for the ecology and evolution of plants. Dispersal is necessary for species migrations, and in recent times dispersal ability is an important factor in whether or not a species transported to a new habitat by humans will become an invasive species. Dispersal is also predicted to play a major role in the origin and maintenance of species diversity. For example, myrmecochory increased the rate of diversification more than twofold in plant groups in which it has evolved, because myrmecochorous lineages contain more than twice as many species as their non-myrmecochorous sister groups. Dispersal of seeds away from the parent organism has a central role in two major theories for how biodiversity is maintained in natural ecosystems, the Janzen-Connell hypothesis and recruitment limitation. Seed dispersal is essential in allowing forest migration of flowering plants. It can be influenced by the production of different fruit morphs in plants, a phenomenon known as heterocarpy. These fruit morphs are different in size and shape and have different dispersal ranges, which allows seeds to be dispersed over varying distances and adapt to different environments. The distances of the dispersal also affect the kernel of the seed. The lowest distances of seed dispersal were found in wetlands, whereas the longest were in dry landscapes.
In addition, the speed and direction of wind are highly influential in the dispersal process and in turn the deposition patterns of floating seeds in stagnant water bodies. The transportation of seeds is led by the wind direction. This affects colonization when it is situated on the banks of a river, or to wetlands adjacent to streams relative to the given wind directions. The wind dispersal process can also affect connections between water bodies. Essentially, wind plays a larger role in the dispersal of waterborne seeds in a short period of time, days and seasons, but the ecological process allows the phenomenon to become balanced throughout a time period of several years. The time period over which the dispersal occurs is essential when considering the consequences of wind on the ecological process.
| Biology and health sciences | Ecology | Biology |
1678326 | https://en.wikipedia.org/wiki/Snout | Snout | A snout is the protruding portion of an animal's face, consisting of its nose, mouth, and jaw. In many animals, the structure is called a muzzle, rostrum, beak or proboscis. The wet furless surface around the nostrils of the nose of many mammals is called the rhinarium (colloquially this is the "cold wet snout" of some mammals). The rhinarium is often associated with a stronger sense of olfaction.
Variation
Snouts are found on many mammals in a variety of shapes. Some animals, including ursines and great cats, have box-like snouts, while others, like shrews, have pointed snouts. Pig snouts are flat and cylindrical.
Primates
Strepsirrhine primates have muzzles, as do baboons. Great apes have reduced muzzles, the exception being human beings, whose face has neither protruding jaws nor a snout but merely a nose.
Dogs
The muzzle begins at the stop, just below the eyes, and includes the dog's nose and mouth. In the domestic dog, most of the upper muzzle contains organs for detecting scents. The loose flaps of skin on the sides of the upper muzzle that hang to different lengths over the mouth are called 'flews'.
The muzzle is innervated by one of the twelve pairs of cranial nerves, which start in the brain and emerge through the skull to their target organs. Other destinations of these nerves are the eyeballs, teeth and tongue.
The muzzle shape of a domestic dog ranges in shape depending upon the breed, from extremely long and thin (dolichocephalic), as in the Rough Collie, to nearly nonexistent because it is so flat (extreme brachycephalic), as in the pug. Some breeds, such as many sled dogs and spitz types, have muzzles that somewhat resemble the original wolf's in size and shape, and others in the less extreme range have shortened it somewhat (mesocephalic) as in many hounds.
| Biology and health sciences | External anatomy and regions of the body | Biology |
1678438 | https://en.wikipedia.org/wiki/Temperate%20forest | Temperate forest | A temperate forest is a forest found between the tropical and boreal regions, located in the temperate zone. It is the second largest terrestrial biome, covering 25% of the world's forest area, only behind the boreal forest, which covers about 33%. These forests cover both hemispheres at latitudes ranging from 25 to 50 degrees, wrapping the planet in a belt similar to that of the boreal forest. Due to its large size spanning several continents, there are several main types: deciduous, coniferous, mixed forest, and rainforest.
Climate
The climate of a temperate forest is highly variable depending on the location of the forest. For example, Los Angeles and Vancouver, Canada are both considered to be located in a temperate zone; however, Vancouver is located in a temperate rainforest, while Los Angeles has a relatively dry Mediterranean climate.
Types of temperate forest
Deciduous
They are found in Europe, East Asia, North America, and in some parts of South America.
Deciduous forests are composed mainly of broadleaf trees, such as maple and oak, that shed all their leaves during one season. They are typically found in three middle-latitude regions with temperate climates characterized by a winter season and year-round precipitation: eastern North America, western Eurasia and northeastern Asia.
Coniferous
Coniferous forests are composed of needle-leaved evergreen trees, such as pine or fir. Evergreen forests are typically found in regions with moderate climates. Boreal forests, however, are an exception as they are found in subarctic regions. Coniferous trees often have an advantage over broadleaf trees in harsher environments. Their leaves are typically hardier and longer lived but require more energy to grow.
Mixed
As the name implies, conifers and broadleaf trees grow in the same area. The main trees found in these forests in North America and Eurasia include fir, oak, ash, maple, birch, beech, poplar, elm and pine. Other plant species may include magnolia, prunus, holly, and rhododendron. In South America, conifer and oak species predominate. In Australia, eucalypts are the predominant trees. Hardwood evergreen trees, which are widely spaced and found in the Mediterranean region, include olive, cork oak and stone pine.
Temperate rainforest
Temperate rainforests are the wettest of all the types, and are found only in very wet coastal areas. Adding to its rarity is that most of the temperate rainforests outside protected areas have been cut down and no longer exist. Temperate rainforests can, however, still be found in some areas, including the Pacific Northwest, southern Chile, northern Turkey (along with some regions of Bulgaria and Georgia), most of Japan, and others.
Effect of human activity
Temperate forests are located in the middle latitudes where much of the planet's population is. Not only were these forests cut down to build cities (i.e. New York City, Seattle, London, Tokyo, Paris), they have also been "cut down long ago to make way for cultivation." This biome has been subject to mining, logging, hunting, pollution, deforestation and habitat loss.
| Physical sciences | Forests | Earth science |
1678816 | https://en.wikipedia.org/wiki/Zoysia | Zoysia | Zoysia is a genus of creeping grasses widespread across much of Asia and Australia, as well as various islands in the Pacific. These species, commonly called zoysia or zoysiagrass, are found in coastal areas or grasslands. It is a popular choice for fairways and teeing areas at golf courses. The genus is named after the Slovenian botanist Karl von Zois (1756–1799).
Species
Source
Zoysia × forbesiana Traub - Japan - (Z. japonica × Z. matrella)
Zoysia × hondana Ohwi - Japan - (Z. japonica × Z. macrostachya)
Zoysia japonica Steud. - zenith zoysia, Korean lawngrass - Japan (incl Bonin Is), Korea, China, Primorye; naturalized in India, North America, etc.
Zoysia macrantha Desv. - Australia
Zoysia macrostachya - Japan, Ryukyu Is
Zoysia matrella (L.) Merr. - Southeast Asia, Japan, China, Indian Subcontinent, New Guinea, Queensland, Micronesia; naturalized in parts of Africa, North and South America, and assorted oceanic islands
Zoysia minima (Colenso) Zotov - New Zealand
Zoysia pauciflora Mez - North Island of New Zealand
Zoysia seslerioides (Balansa) Clayton & F.R.Richardson - Vietnam
Zoysia sinica Hance Japan, Korea, eastern China, Ryukyu Is
Zoysia tenuifolia Thiele - mascarene grass, Korean velvet grass
Cultivation and uses
Because they can tolerate wide variations in temperature, sunlight, and water, zoysia are widely used for lawns in temperate climates. They are used on golf courses to create fairways and teeing grounds. Zoysia grasses stop erosion on slopes, and are excellent at repelling weeds throughout the year. They resist disease and hold up well under traffic.
The cultivar Zoysia 'Emerald' (Emerald Zoysia), a hybrid between Z. japonica and Z. tenuifolia, is particularly popular.
Some types of zoysia are available commercially as sod in some areas. In typical savanna climates with warm wet and dry seasons, such as southern Florida, zoysia grasses grow during the warm-wet summer and are dormant in the drier, cooler winter months. They are popular because of their fine texture, soft feel, and low growth habit. They can form dense mats and even mounds that grow over low features. In contrast to St. Augustine grass, they generally require less fertilization and are less vulnerable to insect and fungus damage, depending on environmental conditions. Zoysia is a native of Japan and Korea, which makes a cushion-like surface or turf. Its water requirement is high. It grows slowly and frequent mowing is not required. For best appearance, turf experts recommend reel blade mowers for zoysia.
| Biology and health sciences | Poales | Plants |
1679935 | https://en.wikipedia.org/wiki/Electric%20power%20quality | Electric power quality | Electric power quality is the degree to which the voltage, frequency, and waveform of a power supply system conform to established specifications. Good power quality can be defined as a steady supply voltage that stays within the prescribed range, steady AC frequency close to the rated value, and smooth voltage curve waveform (which resembles a sine wave). In general, it is useful to consider power quality as the compatibility between what comes out of an electric outlet and the load that is plugged into it. The term is used to describe electric power that drives an electrical load and the load's ability to function properly. Without the proper power, an electrical device (or load) may malfunction, fail prematurely or not operate at all. There are many ways in which electric power can be of poor quality, and many more causes of such poor quality power.
The electric power industry comprises electricity generation (AC power), electric power transmission and ultimately electric power distribution to an electricity meter located at the premises of the end user of the electric power. The electricity then moves through the wiring system of the end user until it reaches the load. The complexity of the system to move electric energy from the point of production to the point of consumption combined with variations in weather, generation, demand and other factors provide many opportunities for the quality of supply to be compromised.
While "power quality" is a convenient term for many, it is the quality of the voltage—rather than power or electric current—that is actually described by the term. Power is simply the flow of energy, and the current demanded by a load is largely uncontrollable.
Introduction
The quality of electrical power may be described as a set of values of parameters, such as:
Continuity of service (whether the electrical power is subject to voltage drops or overages below or above a threshold level thereby causing blackouts or brownouts)
Variation in voltage magnitude (see below)
Transient voltages and currents
Harmonic content in the waveforms for AC power
It is often useful to think of power quality as a compatibility problem: is the equipment connected to the grid compatible with the events on the grid, and is the power delivered by the grid, including the events, compatible with the equipment that is connected? Compatibility problems always have at least two solutions: in this case, either clean up the power, or make the equipment more resilient.
The tolerance of data-processing equipment to voltage variations is often characterized by the CBEMA curve, which gives the duration and magnitude of voltage variations that can be tolerated.
Ideally, AC voltage is supplied by a utility as a sinusoid with an amplitude and frequency given by national standards (in the case of mains) or system specifications (in the case of a power feed not directly attached to the mains), and with an impedance of zero ohms at all frequencies.
Deviations
No real-life power source is ideal and generally can deviate in at least the following ways:
Voltage
Variations in the peak or root mean square (RMS) voltage are both important to different types of equipment.
When the RMS voltage exceeds the nominal voltage by 10 to 80% for 0.5 cycle to 1 minute, the event is called a "swell".
A "dip" (in British English) or a "sag" (in American English the two terms are equivalent) is the opposite situation: the RMS voltage is below the nominal voltage by 10 to 90% for 0.5 cycle to 1 minute.
Random or repetitive variations in the RMS voltage between 90 and 110% of nominal can produce a phenomenon known as "flicker" in lighting equipment. Flicker is a rapid, visible change in light level. Defining the characteristics of voltage fluctuations that produce objectionable light flicker has been the subject of ongoing research.
Abrupt, very brief increases in voltage, called "spikes", "impulses", or "surges", are generally caused by large inductive loads being switched off, or more severely by lightning.
"Undervoltage" occurs when the nominal voltage drops below 90% for more than 1 minute. The term "brownout" is an apt description for voltage drops somewhere between full power (bright lights) and a blackout (no power – no light). It comes from the noticeable to significant dimming of regular incandescent lights, during system faults or overloading etc., when insufficient power is available to achieve full brightness in (usually) domestic lighting. This term is in common usage has no formal definition but is commonly used to describe a reduction in system voltage by the utility or system operator to decrease demand or to increase system operating margins.
"Overvoltage" occurs when the nominal voltage rises above 110% for more than 1 minute.
Frequency
Variations in the frequency.
Nonzero low-frequency impedance (when a load draws more power, the voltage drops).
Nonzero high-frequency impedance (when a load demands a large amount of current, then suddenly stops demanding it, there will be a dip or spike in the voltage due to the inductances in the power supply line).
Variations in the wave shape – usually described as harmonics at lower frequencies (usually less than 3 kHz) and described as Common Mode Distortion or Interharmonics at higher frequencies.
Waveform
The oscillation of voltage and current ideally follows the form of a sine or cosine function; however, it can deviate from this form owing to imperfections in the generators or loads.
Typically, generators cause voltage distortions and loads cause current distortions. These distortions occur as oscillations more rapid than the nominal frequency, and are referred to as harmonics.
The relative contribution of harmonics to the distortion of the ideal waveform is called total harmonic distortion (THD).
Low harmonic content in a waveform is ideal because harmonics can cause vibrations, buzzing, equipment distortions, and losses and overheating in transformers.
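As a rough illustration of the THD idea, the sketch below (assuming NumPy is available; all names and the test waveform are illustrative) estimates THD as the ratio of the combined RMS of the harmonics to that of the fundamental, using FFT magnitudes at integer multiples of the fundamental frequency.

```python
import numpy as np

def thd(samples, f_fundamental, f_sample, n_harmonics=40):
    """Estimate total harmonic distortion of a periodic waveform.

    THD = sqrt(V2^2 + V3^2 + ...) / V1, using FFT magnitudes at integer
    multiples of the fundamental. Assumes the record spans a whole number
    of cycles so the harmonics fall on exact FFT bins.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / f_sample)
    bin_width = freqs[1] - freqs[0]

    def magnitude_at(f):
        return spectrum[int(round(f / bin_width))]

    v1 = magnitude_at(f_fundamental)
    harmonics = [magnitude_at(n * f_fundamental) for n in range(2, n_harmonics + 1)]
    return np.sqrt(sum(v ** 2 for v in harmonics)) / v1

# Example: a 60 Hz sine with a 5% third harmonic -> THD close to 0.05
t = np.arange(0, 1, 1 / 7680)            # one second at 7,680 samples/s
wave = np.sin(2 * np.pi * 60 * t) + 0.05 * np.sin(2 * np.pi * 180 * t)
print(round(thd(wave, 60, 7680), 3))
```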
Each of these power quality problems has a different cause. Some problems are a result of the shared infrastructure. For example, a fault on the network may cause a dip that will affect some customers; the higher the level of the fault, the greater the number affected. A problem on one customer’s site may cause a transient that affects all other customers on the same subsystem. Problems, such as harmonics, arise within the customer’s own installation and may propagate onto the network and affect other customers. Harmonic problems can be dealt with by a combination of good design practice and well proven reduction equipment.
Power conditioning
Power conditioning is modifying the power to improve its quality.
An uninterruptible power supply (UPS) can be used to switch a load off mains power when there is a transient (temporary) condition on the line. However, cheaper UPS units themselves create poor-quality power, akin to imposing a higher-frequency, lower-amplitude square wave on top of the sine wave. High-quality UPS units use a double-conversion topology, which breaks incoming AC power down into DC, charges the batteries, then remanufactures an AC sine wave. This remanufactured sine wave is of higher quality than the original AC power feed.
A dynamic voltage restorer (DVR) and a static synchronous series compensator (SSSC) are used for series voltage-sag compensation.
A surge protector or simple capacitor or varistor can protect against most overvoltage conditions, while a lightning arrester protects against severe spikes.
Electronic filters can remove harmonics.
Smart grids and power quality
Modern systems use sensors called phasor measurement units (PMUs) distributed throughout their network to monitor power quality and, in some cases, respond automatically to disturbances. Such smart grid features of rapid sensing and automated self-healing of anomalies in the network promise higher-quality power and less downtime, while simultaneously supporting power from intermittent sources and distributed generation, which would, if unchecked, degrade power quality.
Compression algorithm
A power quality compression algorithm is an algorithm used in the analysis of power quality. To provide high-quality electric power service, it is essential to monitor the quality of the electric signals, also termed power quality (PQ), at different locations along an electrical power network. Electrical utilities constantly monitor waveforms and currents at various network locations to understand what led up to any unforeseen events such as power outages or blackouts. This is particularly critical at sites where the environment and public safety are at risk (institutions such as hospitals, sewage treatment plants, mines, etc.).
Challenges
Engineers use many kinds of meters that read and display electrical power waveforms and calculate parameters of the waveforms. They measure, for example:
current and voltage RMS
phase relationship between waveforms of a multi-phase signal
power factor
frequency
total harmonic distortion (THD)
active power (kW)
reactive power (kVAr)
apparent power (kVA)
active energy (kWh)
reactive energy (kVArh)
apparent energy (kVAh)
and many more
In order to sufficiently monitor unforeseen events, Ribeiro et al. explain that it is not enough to display these parameters; voltage waveform data must also be captured at all times. This is impracticable due to the large amount of data involved, causing what is known as the "bottle effect". For instance, at a sampling rate of 32 samples per cycle on a 60 Hz system, 1,920 samples are collected per second. For three-phase meters that measure both voltage and current waveforms, the data volume is 6–8 times as much. More practical solutions developed in recent years store data only when an event occurs (for example, when high levels of power system harmonics are detected) or, alternatively, store only the RMS value of the electrical signals. This data, however, is not always sufficient to determine the exact nature of problems.
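The data-volume figures quoted above can be reproduced with simple arithmetic. The sketch below assumes a 60 Hz system and, purely for illustration, 16-bit samples on six channels (three voltages and three currents); none of those storage details come from the source.

```python
# Back-of-the-envelope reconstruction of the data-volume figures quoted above.
# The 16-bit sample size and six-channel layout are assumptions for illustration.
samples_per_cycle = 32
line_frequency_hz = 60                      # a 60 Hz system
channels = 6                                # 3 phase voltages + 3 phase currents
bytes_per_sample = 2                        # assume 16-bit samples

samples_per_second = samples_per_cycle * line_frequency_hz
print(samples_per_second)                   # 1920, as stated in the text

raw_bytes_per_day = samples_per_second * channels * bytes_per_sample * 86_400
print(raw_bytes_per_day / 1e9)              # roughly 2 GB per day per meter
```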
Raw data compression
Nisenblat et al. propose the idea of a power quality compression algorithm (similar to lossy compression methods) that enables meters to continuously store the waveform of one or more power signals, regardless of whether an event of interest was identified. This algorithm, referred to as PQZip, provides a processor with memory sufficient to store the waveform, under normal power conditions, over a long period of time: at least a month, two months or even a year. The compression is performed in real time, as the signals are acquired; it calculates a compression decision before all the compressed data is received. For instance, should one parameter remain constant while various others fluctuate, the compression decision retains only what is relevant from the constant data and retains all the fluctuation data. It then decomposes the waveform of the power signal into numerous components, over various periods of the waveform. It concludes the process by compressing the values of at least some of these components over different periods, separately. This real-time compression algorithm, performed independently of the sampling, prevents data gaps and has a typical 1000:1 compression ratio.
Aggregated data compression
A typical function of a power analyzer is the generation of a data archive aggregated over a given interval. Most typically, a 10-minute or 1-minute interval is used, as specified by the IEC/IEEE PQ standards. Significant archive sizes are created during the operation of such an instrument. As Kraus et al. have demonstrated, the compression ratio achievable on such archives with the Lempel–Ziv–Markov chain algorithm, bzip2 or other similar lossless compression algorithms can be significant. By using prediction and modeling on the stored time series in the actual power quality archive, the efficiency of post-processing compression is usually improved further. This combination of simple techniques yields savings in both data storage and data acquisition.
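As an illustration of this idea (not the specific method of Kraus et al.), the sketch below compresses a synthetic aggregated archive with the general-purpose LZMA and bzip2 compressors from the Python standard library, and shows how a simple delta-coding "prediction" step improves the ratio further. The synthetic data and all names are assumptions for illustration only.

```python
import bz2
import lzma
import math
import struct

# Synthetic stand-in for an aggregated RMS-voltage archive: a slowly varying
# value quantised to 0.1 V and stored as 32-bit integers.
values = [int(2300 + 20 * math.sin(i / 50)) for i in range(100_000)]
raw = struct.pack(f"<{len(values)}i", *values)

# Delta coding is a very simple "prediction" step: store differences between
# consecutive samples instead of the samples themselves.
deltas = [values[0]] + [b - a for a, b in zip(values, values[1:])]
delta_raw = struct.pack(f"<{len(deltas)}i", *deltas)

# Compression ratio relative to the uncompressed archive size.
for label, data in [("raw", raw), ("delta-coded", delta_raw)]:
    print(label,
          "lzma:", round(len(raw) / len(lzma.compress(data)), 1),
          "bz2:", round(len(raw) / len(bz2.compress(data)), 1))
```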
Standards
The quality of electricity supplied is set forth in international standards and their local derivatives, adopted by different countries:
EN50160 is the European standard for power quality, setting the acceptable limits of distortion for the different parameters defining voltage in AC power.
IEEE-519 is the North American guideline for power systems. It is defined as "recommended practice" and, unlike EN50160, this guideline refers to current distortion as well as voltage.
IEC 61000-4-30 is the standard defining methods for monitoring power quality. Edition 3 (2015) includes current measurements, unlike earlier editions which related to voltage measurement alone.
| Technology | Concepts | null |
1680145 | https://en.wikipedia.org/wiki/Strain%20%28chemistry%29 | Strain (chemistry) | In chemistry, a molecule experiences strain when its chemical structure undergoes some stress which raises its internal energy in comparison to a strain-free reference compound. The internal energy of a molecule consists of all the energy stored within it. A strained molecule has an additional amount of internal energy which an unstrained molecule does not. This extra internal energy, or strain energy, can be likened to a compressed spring. Much like a compressed spring must be held in place to prevent release of its potential energy, a molecule can be held in an energetically unfavorable conformation by the bonds within that molecule. Without the bonds holding the conformation in place, the strain energy would be released.
Summary
Thermodynamics
The equilibrium of two molecular conformations is determined by the difference in Gibbs free energy of the two conformations. From this energy difference, the equilibrium constant for the two conformations can be determined (ΔG° = −RT ln Keq).
If there is a decrease in Gibbs free energy from one state to another, this transformation is spontaneous and the lower energy state is more stable. A highly strained, higher energy molecular conformation will spontaneously convert to the lower energy molecular conformation.
Enthalpy and entropy are related to Gibbs free energy through the equation (at constant temperature): ΔG° = ΔH° − TΔS°.
Enthalpy is typically the more important thermodynamic function for determining the more stable molecular conformation. While there are different types of strain, the strain energy associated with all of them is due to the weakening of bonds within the molecule. Since enthalpy is usually more important, entropy can often be ignored. This isn't always the case; if the difference in enthalpy is small, entropy can have a larger effect on the equilibrium. For example, n-butane has two possible conformations, anti and gauche. The anti conformation is more stable by 0.9 kcal mol−1. We would expect that butane is roughly 82% anti and 18% gauche at room temperature. However, there are two possible gauche conformations and only one anti conformation. Therefore, entropy makes a contribution of about 0.4 kcal mol−1 in favor of the gauche conformation. We find that the actual conformational distribution of butane is 70% anti and 30% gauche at room temperature.
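The butane figures above can be reproduced with a short Boltzmann-population calculation. The sketch below uses only the 0.9 kcal mol−1 enthalpy difference quoted in the text, the gas constant, and an assumed temperature of 298 K.

```python
import math

R = 1.987e-3          # gas constant, kcal mol^-1 K^-1
T = 298.0             # assumed room temperature, K
dH = 0.9              # anti conformer lower in energy by 0.9 kcal mol^-1

# Relative Boltzmann weight of one gauche conformer versus the anti conformer.
w_gauche = math.exp(-dH / (R * T))

# Ignoring degeneracy (one gauche vs one anti): roughly 82% anti / 18% gauche.
print(round(100 / (1 + w_gauche)), round(100 * w_gauche / (1 + w_gauche)))

# Counting both gauche conformers adds an entropy contribution of RT*ln(2),
# about 0.4 kcal mol^-1, and gives roughly 70% anti / 30% gauche.
print(round(R * T * math.log(2), 2))
print(round(100 / (1 + 2 * w_gauche)), round(100 * 2 * w_gauche / (1 + 2 * w_gauche)))
```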
Determining molecular strain
The standard heat of formation (ΔfH°) of a compound is the enthalpy change when the compound is formed from its elements. When the heat of formation for a compound differs from either a prediction or a reference compound, this difference can often be attributed to strain. For example, ΔfH° for cyclohexane is −29.9 kcal mol−1 while ΔfH° for methylcyclopentane is −25.5 kcal mol−1. Despite having the same atoms and number of bonds, methylcyclopentane is higher in energy than cyclohexane. This difference in energy can be attributed to the ring strain of a five-membered ring, which is absent in cyclohexane. Experimentally, strain energy is often determined using heats of combustion, which is typically an easy experiment to perform.
Determining the strain energy within a molecule requires knowledge of the expected internal energy without the strain. There are two ways to do this. First, one could compare to a similar compound that lacks strain, such as the methylcyclopentane example above. Unfortunately, it can often be difficult to obtain a suitable compound. An alternative is to use Benson group increment theory. As long as suitable group increments are available for the atoms within a compound, a prediction of ΔfH° can be made. If the experimental ΔfH° differs from the predicted ΔfH°, this difference in energy can be attributed to strain energy.
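A worked version of the cyclohexane/methylcyclopentane comparison is shown below; the strain attributed to the five-membered ring is simply the difference between the two heats of formation quoted above.

```python
# Heats of formation quoted above, in kcal/mol
dHf_cyclohexane = -29.9
dHf_methylcyclopentane = -25.5

# Both compounds are C6H12 isomers, so the energy difference is attributed
# to ring strain in the five-membered ring.
ring_strain = dHf_methylcyclopentane - dHf_cyclohexane
print(ring_strain)   # 4.4 kcal/mol
```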
Kinds of strain
Van der Waals strain
Van der Waals strain, or steric strain, occurs when atoms are forced closer together than their Van der Waals radii allow. Specifically, Van der Waals strain is considered a form of strain where the interacting atoms are at least four bonds away from each other. The amount of steric strain in similar molecules depends on the size of the interacting groups; bulky tert-butyl groups take up much more space than methyl groups and often experience greater steric interactions.
The effects of steric strain in the reaction of trialkylamines and trimethylboron were studied by Nobel laureate Herbert C. Brown et al. They found that as the size of the alkyl groups on the amine was increased, the equilibrium constant decreased. The shift in equilibrium was attributed to steric strain between the alkyl groups of the amine and the methyl groups on boron.
Syn-pentane strain
There are situations where seemingly identical conformations are not equal in strain energy. Syn-pentane strain is an example of this situation. There are two different ways to put both of the central bonds in n-pentane into a gauche conformation, one of which is 3 kcal mol−1 higher in energy than the other. When the two methyl-substituted bonds are rotated from anti to gauche in opposite directions, the molecule assumes a cyclopentane-like conformation where the two terminal methyl groups are brought into proximity. If the bonds are rotated in the same direction, this does not occur. The steric strain between the two terminal methyl groups accounts for the difference in energy between the two similar, yet very different, conformations.
Allylic strain
Allylic strain, or A1,3 strain, is closely related to syn-pentane strain. An example of allylic strain can be seen in the compound 2-pentene. It is possible for the ethyl substituent of the olefin to rotate such that the terminal methyl group is brought near the vicinal methyl group of the olefin. These types of compounds usually adopt a more linear conformation to avoid the steric strain between the substituents.
1,3-diaxial strain
1,3-diaxial strain is another form of strain similar to syn-pentane strain. In this case, the strain occurs due to steric (gauche-type) interactions between a substituent on a cyclohexane ring ('α') and the methylene carbons two bonds away from it (hence, 1,3-diaxial interactions). When the substituent is axial, it is brought near an axial gamma hydrogen. The amount of strain depends largely on the size of the substituent and can be relieved by ring-flipping into the major chair conformation, which places the substituent in an equatorial position. The difference in energy between the conformations is called the A value and is well known for many different substituents. The A value is a thermodynamic parameter and was originally measured, along with other methods, using the Gibbs free energy equation and, for example, the Meerwein–Ponndorf–Verley reduction/Oppenauer oxidation equilibrium for the measurement of axial versus equatorial values of cyclohexanone/cyclohexanol (0.7 kcal mol−1).
Torsional strain
Torsional strain is the resistance to bond twisting. In cyclic molecules, it is also called Pitzer strain.
Torsional strain occurs when atoms separated by three bonds are placed in an eclipsed conformation instead of the more stable staggered conformation. The barrier of rotation between staggered conformations of ethane is approximately 2.9 kcal mol−1. It was initially believed that the barrier to rotation was due to steric interactions between vicinal hydrogens, but the Van der Waals radius of hydrogen is too small for this to be the case. Recent research has shown that the staggered conformation may be more stable due to a hyperconjugative effect. Rotation away from the staggered conformation interrupts this stabilizing force.
More complex molecules, such as butane, have more than one possible staggered conformation. The anti conformation of butane is approximately 0.9 kcal mol−1 (3.8 kJ mol−1) more stable than the gauche conformation. Both of these staggered conformations are much more stable than the eclipsed conformations. Instead of a hyperconjugative effect, such as that in ethane, the strain energy in butane is due to both steric interactions between methyl groups and angle strain caused by these interactions.
Ring strain
According to the VSEPR theory of molecular bonding, the preferred geometry of a molecule is that in which both bonding and non-bonding electrons are as far apart as possible. In molecules, it is quite common for these angles to be somewhat compressed or expanded compared to their optimal value. This strain is referred to as angle strain, or Baeyer strain. The simplest examples of angle strain are small cycloalkanes such as cyclopropane and cyclobutane, which are discussed below. Furthermore, there is often eclipsing or Pitzer strain in cyclic systems. These, together with possible transannular interactions, were summarized early on by H. C. Brown as internal strain, or I-strain. Molecular mechanics or force-field approaches allow such strain contributions to be calculated, which can then be correlated, for example, with reaction rates or equilibria. Many reactions of alicyclic compounds, including equilibria, redox and solvolysis reactions, all of which are characterized by a transition between the sp2 and sp3 states at the reaction center, correlate with the corresponding strain energy differences SI (sp2 - sp3). The data mainly reflect the unfavourable vicinal angles in medium rings, as illustrated by the severe increase of ketone reduction rates with increasing SI (Figure 1). Another example is the solvolysis of bridgehead tosylates, with steric energy differences between the corresponding bromide derivatives (sp3) and the carbenium ion as an sp2 model for the transition state (Figure 2).
In principle, angle strain can occur in acyclic compounds, but the phenomenon is rare.
Small rings
Cyclohexane is considered a benchmark in determining ring strain in cycloalkanes, and it is commonly accepted that it has little to no strain energy. In comparison, smaller cycloalkanes are much higher in energy due to increased strain. Cyclopropane is analogous to a triangle and thus has bond angles of 60°, much lower than the 109.5° preferred by an sp3-hybridized carbon. Furthermore, the hydrogens in cyclopropane are eclipsed. Cyclobutane experiences similar strain, with bond angles of approximately 88° (it is not completely planar) and eclipsed hydrogens. The strain energies of cyclopropane and cyclobutane are 27.5 and 26.3 kcal mol−1, respectively. Cyclopentane experiences much less strain, mainly torsional strain from eclipsed hydrogens; its preferred conformations interconvert by a process called pseudorotation.
Ring strain can be considerably higher in bicyclic systems. For example, bicyclobutane, C4H6, is noted for being one of the most strained compounds that is isolatable on a large scale; its strain energy is estimated at 63.9 kcal mol−1 (267 kJ mol−1).
Transannular strain
Medium-sized rings (7–13 carbons) experience more strain energy than cyclohexane, due mostly to deviation from ideal vicinal angles, or Pitzer strain. Molecular mechanics calculations indicate that transannular strain, also known as Prelog strain, does not play an essential role. Transannular reactions, however, such as 1,5-shifts in cyclooctane substitution reactions, are well known.
Bicyclic systems
The amount of strain energy in bicyclic systems is commonly the sum of the strain energy in each individual ring. This isn't always the case, as sometimes the fusion of rings induces some extra strain.
Strain in allosteric systems
In synthetic allosteric systems there are typically two or more conformers whose stability differences arise from strain contributions. Positive cooperativity, for example, results from increased binding of a substrate A to a conformer C2 that is produced by binding of an effector molecule E. If the conformer C2 has a stability similar to that of another equilibrating conformer C1, an induced fit by the substrate A will lead to binding of A to C2 even in the absence of the effector E. Only if the stability of the conformer C2 is significantly smaller, meaning that in the absence of the effector E the population of C2 is much smaller than that of C1, will the ratio K2/K1, which measures the efficiency of the allosteric signal, increase. The ratio K2/K1 can be related directly to the strain energy difference between the conformers C1 and C2; if it is small, higher concentrations of A will bind directly to C2 and make the effector E inefficient. In addition, the response time of such allosteric switches depends on the strain of the conformer-interconversion transition state.
| Physical sciences | Stereochemistry | Chemistry |
1680877 | https://en.wikipedia.org/wiki/Air%20filter | Air filter | A particulate air filter is a device composed of fibrous or porous materials which removes particulates such as smoke, dust, pollen, mold, viruses and bacteria from the air. Filters containing an adsorbent or catalyst such as charcoal (carbon) may also remove odors and gaseous pollutants such as volatile organic compounds or ozone. Air filters are used in applications where air quality is important, notably in building ventilation systems and in engines.
Some buildings, as well as aircraft and other human-made environments (e.g., satellites and the Space Shuttle), use foam, pleated paper, or spun fiberglass filter elements. Another method, the air ioniser, uses fibers or elements with a static electric charge, which attract dust particles. The air intakes of internal combustion engines and air compressors tend to use paper, foam, or cotton filters. Oil bath filters have fallen out of favour aside from niche uses. The technology of air intake filters for gas turbines has improved significantly in recent years, owing to improvements in the aerodynamics and fluid dynamics of the air-compressor section of the gas turbine.
HEPA filters
High efficiency particulate arrester (HEPA), originally called high-efficiency particulate absorber but also sometimes called high-efficiency particulate arresting or high-efficiency particulate arrestance, is a type of air filter. Filters meeting the HEPA standard have many applications, including use in clean rooms for IC fabrication, medical facilities, automobiles, aircraft and homes. The filter must satisfy certain standards of efficiency such as those set by the United States Department of Energy (DOE).
Varying standards define what qualifies as a HEPA filter. The two most common standards require that an air filter must remove (from the air that passes through) 99.95% (European Standard) or 99.97% (ASME standard) of particles that have a size greater than or equal to 0.3 μm.
Automotive cabin air filters
The cabin air filter, also known in the United Kingdom as a pollen filter, is typically a pleated-paper filter that is placed in the outside-air intake for the vehicle's passenger compartment. Some of these filters are rectangular and similar in shape to the engine air filter. Others are uniquely shaped to fit the available space of particular vehicles' outside-air intakes.
The first automaker to include a disposable filter to keep the ventilation system clean was the Nash Motors "Weather Eye", introduced in 1940.
A reusable heater core filter was available as an optional accessory on Studebaker models beginning in 1959, including Studebaker Lark automobiles (1959-1966), Studebaker Gran Turismo Hawk automobiles (1962-1964) and Studebaker Champ trucks (1960-1964). The filter was an aluminum frame containing an aluminum mesh and was located directly above the heater core. The filter was removed and installed from the engine compartment through a slot in the firewall. A long, thin rubber seal plugged the slot when the filter was installed. The filter could be vacuumed and washed prior to installation.
Clogged or dirty cabin air filters can significantly reduce airflow from the cabin vents, as well as introduce allergens into the cabin air stream. Since the cabin air temperature depends upon the flow rate of the air passing through the heater core, the evaporator, or both, clogged filters can greatly reduce the effectiveness and performance of the vehicle's air conditioning and heating systems.
Some cabin air filters perform poorly, and some cabin air filter manufacturers do not print a minimum efficiency reporting value (MERV) filter rating on their cabin air filters.
Internal combustion engine air filters
The combustion air filter prevents abrasive particulate matter from entering the engine's cylinders, where it would cause mechanical wear and oil contamination.
Most fuel injected vehicles use a pleated paper filter element in the form of a flat panel. This filter is usually placed inside a plastic box connected to the throttle body with duct work. Older vehicles that use carburetors or throttle body fuel injection typically use a cylindrical air filter, usually between and in diameter. This is positioned above or beside the carburetor or throttle body, usually in a metal or plastic container which may incorporate ducting to provide cool and/or warm inlet air, and secured with a metal or plastic lid. The overall unit (filter and housing together) is called the air cleaner.
Paper
Pleated paper filter elements are the nearly exclusive choice for automobile engine air cleaners because they are efficient, easy to service, and cost-effective. The "paper" term is somewhat misleading, as the filter media are considerably different from papers used for writing or packaging. There is a persistent belief among tuners, fomented by advertising for aftermarket non-paper replacement filters, that paper filters flow poorly and thus restrict engine performance. In fact, as long as a pleated-paper filter is sized appropriately for the airflow volumes encountered in a particular application, such filters present only trivial restriction to flow until the filter has become significantly clogged with dirt. Construction equipment engines also use pleated paper elements. Because the paper is folded in a zig-zag shape, its total area is very large, on the order of 50 times the area of the air opening.
Foam
Oil-wetted polyurethane foam elements are used in some aftermarket replacement automobile air filters. Foam was in the past widely used in air cleaners on small engines on lawnmowers and other power equipment, but automotive-type paper filter elements have largely supplanted oil-wetted foam in these applications. Foam filters are still commonly used on air compressors for air tools up to . Depending on the grade and thickness of foam employed, an oil-wetted foam filter element can offer minimal airflow restriction or very high dirt capacity, the latter property making foam filters a popular choice in off-road rallying and other motorsport applications where high levels of dust will be encountered. Due to the way dust is captured on foam filters, large amounts may be trapped without measurable change in airflow restriction.
Cotton
Oiled cotton gauze is employed in a growing number of aftermarket automotive air filters marketed as high-performance items. In the past, cotton gauze saw limited use in original-equipment automotive air filters. However, since the introduction of the Abarth SS versions, the Fiat subsidiary supplies cotton gauze air filters as OE filters.
Stainless steel
Stainless steel mesh is another example of a filter medium that allows more air to pass through.
Stainless steel mesh comes with different mesh counts, offering different filtration standards.
In extensively modified engines lacking space for a cone-based air filter, some builders opt to install a simple stainless steel mesh over the turbo inlet to ensure no particles enter the engine via the turbo.
Oil bath
An oil bath air cleaner consists of a sump containing a pool of oil, and an insert which is filled with fiber, mesh, foam, or another coarse filter medium. The cleaner removes particles by adhering them to the oil-soaked filter media rather than by conventional filtration; the openings in the filter media are much larger than the particles that are to be filtered. When the cleaner is assembled, the media-containing body of the insert sits a short distance above the surface of the oil pool. The rim of the insert overlaps the rim of the sump. This arrangement forms a labyrinthine path through which the air must travel in a series of U-turns: up through the gap between the rims of the insert and the sump, down through the gap between the outer wall of the insert and the inner wall of the sump, and up through the filter media in the body of the insert. This U-turn takes the air at high velocity across the surface of the oil pool. Larger and heavier dust and dirt particles in the air cannot make the turn because of their inertia, so they fall into the oil and settle to the bottom of the base bowl. Lighter and smaller particles stick to the filtration media in the insert, which is wetted by oil droplets aspirated into it by the normal airflow. The constant aspiration of oil onto the filter media slowly carries most of the finer trapped particles downward, and the oil drips back into the reservoir where the particles accumulate.
Oil bath air cleaners were very widely used in automotive and small engine applications until the widespread industry adoption of the paper filter in the early 1960s. Such cleaners are still used in off-road equipment where very high levels of dust are encountered, for oil bath air cleaners can sequester a great deal of dirt relative to their overall size without loss of filtration efficiency or airflow. However, the liquid oil makes cleaning and servicing such air cleaners messy and inconvenient, they must be relatively large to avoid excessive restriction at high airflow rates, and they tend to increase exhaust emissions of unburned hydrocarbons due to oil aspiration when used on spark-ignition engines.
Water bath
In the early 20th century (about 1900 to 1930), water bath air cleaners were used in some applications (cars, trucks, tractors, and portable and stationary engines). They worked on roughly the same principles as oil bath air cleaners. For example, the original Fordson tractor had a water bath air cleaner. By the 1940s, oil bath designs had displaced water bath designs because of better filtering performance.
Bulk solids handling filters
Bulk solids handling involves the transport of solids (mechanical transport, pneumatic transport) which may be in powder form. Many industries handle bulk solids (mining, chemical, food), which requires the treatment of air streams escaping the process so that fine particles are not emitted, for regulatory or economic reasons (loss of materials). As a consequence, air filters are positioned at many places in the process, especially at the receiving end of pneumatic conveying lines, where the volume of air is large and the fine-particle loading is significant. Filters can also be placed at any point of air exchange in the process to prevent pollutants from entering the process, which is particularly important in the pharmaceutical and food industries. The physical phenomena involved in catching particles with a filter are mainly inertial and diffusional.
Filter classes
Under the European normalization standard EN 779, a series of filter classes, ranging from coarse (G) to fine (F) filters, was recognized.
European standard EN 779 remained in effect from 2012 to mid-2018, when it was replaced by ISO 16890.
| Technology | Food, water and health | null |
11312166 | https://en.wikipedia.org/wiki/Unionida | Unionida | Unionida is a monophyletic order of freshwater mussels, aquatic bivalve molluscs. The order includes most of the larger freshwater mussels, including the freshwater pearl mussels. The most common families are the Unionidae and the Margaritiferidae. All have in common a larval stage that is temporarily parasitic on fish; nacreous shells, high in organic matter, that may crack upon drying out; and siphons too short to permit the animal to live deeply buried in sediment.
Morphology
The shells of these mussels are variable in shape, but usually equivalve and elongate. They have solid, nacreous valves with a pearly interior, radial sculpture, and an entire pallial line.
Evolutionary history
Although some fossil freshwater bivalves from the Carboniferous and Permian periods have been suggested to be unionids, other authors have suggested that they are likely to be unrelated, due to lacking the internal nacreous layer thought to have been present in the last common ancestor of all unionids and present in their closest marine relatives, the Trigoniida. The oldest unambiguous unionids are known from the Triassic period, with the oldest commonly cited examples being from the Late Triassic Chinle Formation and Newark Supergroup of North America, though possibly older examples are known from the Middle Triassic of Tanzania and Zambia.
Distribution
Families, genera, and species in the order Unionida are found on six continents, where they are restricted exclusively to freshwater rivers, streams, creeks and some lakes. There are approximately 900 species worldwide. Around 300 species of these freshwater mussels are endemic to North America.
Unlike other bivalve orders, Unionida has no marine species, although one species (Glebula rotundata) tolerates brackish water. This widespread trait and the group's global distribution suggest that it has inhabited freshwater throughout its geologic history.
Life habits
Unionida burrow into the substrate in clean, fast flowing freshwater rivers, streams and creeks, with their posterior margins exposed. They pump water through the incurrent aperture, obtaining oxygen and filtering food from the water column. Freshwater mussels are some of the longest-living invertebrates in existence. These clams have, like all bivalve mollusks, a shell consisting of two parts that are hinged together, which can be closed to protect the animal's soft body within. Like all mollusks, the freshwater mussels have a muscular "foot", which enables the mussel to move slowly and bury itself within the bottom substrate of its freshwater habitat.
Reproduction
Unionida have a unique and complex life cycle involving parasitic larvae. This larval form used to be described as "parasitic worms" on the fish host; however, the larvae are not worms and do not harm fish under normal circumstances. Most of these freshwater mussel species have separate sexes (although some species, such as Elliptio complanata, are known to be hermaphroditic). The sperm is ejected from the mantle cavity through the male's excurrent aperture and taken into the female's mantle cavity through the incurrent aperture. Fertilised eggs move from the gonads to the gills (marsupia), where they ripen further and metamorphose into glochidia, the first larval stage. Mature glochidia are released by the female and then attach to the gills, fins or skin of a host fish. Typically, the freshwater mussel larvae (glochidia) have hooks, which enable the individual to attach itself to the fish. Some freshwater mussels release their glochidia in mucilaginous packets called conglutinates. The conglutinate has a sticky filament that allows it to adhere to the substrate so it is not washed away. There is also an even more specialized means of dispersal known as a super-conglutinate. The super-conglutinate resembles an aquatic fly larva or a fish egg, complete with a dark area that looks like an eyespot, and it is appetizing to fish. When a fish consumes it, it breaks up, releasing the glochidia. Mussels that produce conglutinates and super-conglutinates are often gill parasites, the glochidia attaching to the fish gills to continue their development into juveniles. A cyst quickly forms around the glochidia, and they stay on the fish for several weeks or months before they fall off as juvenile freshwater mussels, which then bury themselves in the sediment. This unique life cycle allows Unionida freshwater mussels to move upstream with the host fish species.
Conservation issues and endangered species status
Many of these freshwater mussel species face conservation issues due to habitat degradation and in some cases due to over-exploitation for the freshwater pearl industry, and for the nacre of their shells, which was used in button manufacturing.
Of the North American Unionida about 70% are either extinct (21 species), endangered (77 species), threatened (43 species) or are listed as species of special concern (72 species).
Commercial significance
These bivalve mollusks were heavily exploited for freshwater pearls, and for their nacre, which was used in the button manufacturing industry in the late 19th and early 20th centuries. The effects of heavy fishing for freshwater mussels in North America for use in manufacturing buttons pushed many of these species close to extinction.
Freshwater pearl industry
The "pearl rush" in North America occurred in the mid to late 1800s as people could easily find freshwater mussels in rivers and streams by "pollywogging" for mussels, some of which had freshwater pearls which they could sell for a significant price. The art of "pollywogging" involves shuffling one's feet in the mud feeling around for freshwater mussels. Because this was relatively easy to do, and an easy way to make money from freshwater selling pearls, this period has been euphemistically called the "pearl rush", and some historians have compared it to the gold rush in California. A formal freshwater mussel fishing industry was established in the mid-1850s to take advantage of this natural resource. The "pearl rush" to find freshwater pearls became so intense in some rivers that millions of freshwater mussels were killed in a few years. In some rivers and streams entire freshwater mussel beds were completely eliminated. Although the negative impact of the "pearl rush" on freshwater mussel populations was significant, in the cold light of history it was relatively minor compared to the over fishing that took place just a few years later with the "pearl" button industry.
Freshwater pearls from North America come from freshwater mussels, primarily in the family Unionidae. About 20 different species of Unionidae are commercially harvested for pearls. The common names of the most prolific pearl-bearing species include the butterfly, ebony, elephant ear, heelsplitter, mapleleaf, three-ridge, pigtoe, pimpleback, pistol grip, and washboard. While white is the most common color, freshwater pearls are valued in part for their wide range of lustrous colors, including blue, bronze, brown, copper, cream, green, lavender, pink, purple, red, salmon, silvery white, white, and yellow. The color of a freshwater pearl is primarily a function of the species of freshwater mussel in which it formed, although other factors, including the position of the pearl nucleus in the shell and water quality, also affect the color.
With the decline in the numbers of native freshwater mussels in North America, people began to culture freshwater pearls; this became a big industry in Japan. Natural freshwater pearls are rarely perfectly round; more often than not they are baroque, slug, or wing shaped. Round pearls are sought after as more desirable for use in jewelry. The shape of the "seed", or nucleus, of the freshwater pearl, and the position of the seed in the mussel, determine the ultimate shape the cultured pearl will take; hence, with careful advance planning, cultured pearls can be made round. Cultured pearls have a similar color to natural pearls because the nacre is laid down by the mantle of the freshwater mussel, and thus the color of the pearl may be species specific. Exportation of freshwater mussels for use in the Japanese cultured pearl industry has supported the North American freshwater mussel fisheries since the late 1950s. The mother of pearl (or nacre) from exported freshwater mussels is used to make a bead nucleus, which is placed in a living animal to form a pearl. In the 1990s, the United States exported $50 million worth of freshwater mussel shells to Japan. Exports of freshwater mussel shells declined, so that by 2002 the annual revenue from freshwater mussel exportation to Japan had dropped to $35 million. By 1993, 31 different states in the United States were still reporting production of freshwater pearls and export of freshwater mussel shells, including Alabama, Arkansas, Georgia, Illinois, Indiana, Iowa, Kentucky, Louisiana, Minnesota, Missouri, Nebraska, North Dakota, Oklahoma, South Carolina, South Dakota, Tennessee, Texas, and Wisconsin. To date, the bulk of freshwater mussel shell and freshwater pearl production has come from Alabama, Arkansas, Louisiana, and Tennessee.
In 1963 the first experimental United States freshwater mussel cultured pearl farm was established in Tennessee by John Latendresse, who is widely proclaimed as "the father of the U.S. cultured freshwater pearl industry." Over the course of nearly 30 years, John Latendresse devoted his money, time and effort to research and develop the cultured freshwater pearl industry in the United States. There are currently six freshwater cultured pearl farms in Tennessee and one in California to support the increasing popularity and demand of freshwater pearl jewelry with consumers in the United States.
Button manufacturing industry
The North American button industry began with a German craftsman named John Boepple, who had made buttons from seashells, horns and antlers in his native country. John Boepple immigrated to the United States in 1887 and found that there were vast beds of thick freshwater mussel shells in Muscatine, Iowa, which he determined were perfect for making "pearl buttons". By 1891, Boepple had set up a shop and was in business as a craftsman making buttons. John Boepple's buttons became popular locally and to ward off competition he was very protective of the secrets of his trade. Since freshwater mussels were so common and the profit potential in making "pearl" buttons was so high, some of Boepple's staff who knew his techniques were "recruited" by other businessmen to start competing businesses. Within a few years there were button factories along the length of the Mississippi River.
The need to "catch" freshwater mussels for the "pearl" button industry spurred the invention of tools to make the job easier than "pollywogging" with bare feet. In 1897 inventive mussel fishermen bent steel bars into wide open hooks which they called "brail hooks" or "crow foots" and used them to try to catch or trap the freshwater mussels. The process was simple: several of these "brail hooks" were attached to a long wooden bar with lengths of rope and the entire assembly was lowered into the river and dragged behind a boat along the river bottom. When the tips of these hooks came in contact with an "open" freshwater mussel, the mussel clamped its valves shut around the hooks and could be lifted from the bottom. Within a short period of time millions of freshwater mussels were collected in this manner.
By 1899, there were sixty button factories in the river states of Illinois, Iowa, Missouri, and Wisconsin, employing 1,917 people. Millions of "pearl" buttons were made annually. This new button industry quickly placed a huge ecological demand on the freshwater mussels of the Midwestern United States. In 1899, clammers harvested over sixteen million pounds of freshwater mussel shells in Wisconsin alone, and the harvest of freshwater mussels in the late 1890s numbered in the tens of millions of pounds per year. Freshwater mussel beds which had previously been so dense as to virtually "carpet" miles of river bottom were almost completely harvested, leaving just a few living freshwater mussels per mile. In 1908, in what was deemed a drastic response to the rapidly declining freshwater mussel population, the U.S. Bureau of Fisheries established a mussel propagation program at the Fairport Biological Station. The purpose of this program was to regulate the overharvesting of freshwater mussels. Freshwater mussels are slow growing sessile organisms, and their reproduction is complex. The button industry in North America was in trouble because years of overharvesting the freshwater mussels had caused a shortage of freshwater mussels and pushed many of the species close to extinction. The invention of plastic and its use in manufacturing buttons during World War II replaced shell "pearl" buttons as the most popular product, and foreshadowed the end of the pearl button manufacturing business.
Taxonomy
The superfamilies and families in the order Unionida, as listed by Bieler et al. (2010), with later additions to the taxonomy also shown. The dagger (†) indicates families and superfamilies that are extinct.
Unionida are ecologically important and are threatened by climate change.
Unionida
Genus †Araripenaia Silva et al., 2020 (possibly a hyriid or a trigonioid)
Superfamily †Archanodontoidea Modell, 1957 (placement in Unionida uncertain)
Family †Archanodontidae Modell, 1957
Superfamily †Deccanoidea Shah & Patel, 2024
Family †Deccanoidae Shah & Patel, 2024
Superfamily Etherioidea Deshayes, 1832
Family Etheriidae Deshayes, 1832 (About 4 species) (syn: Mulleriidae, Pseudomulleriidae)
Family Iridinidae Swainson, 1840 (About 30 species) (syn: Mutelidae, Pleiodontidae)
Family Mycetopodidae Gray, 1840 (Between 40 and 50 species)
Superfamily Hyrioidea Swainson, 1840
Family Hyriidae Swainson, 1840 (Nearly 90 species)
Superfamily †Trigonioidoidea Cox, 1952
Genus ?†Monginellopsis Silva et al., 2020
Family †Trigonioididae Cox, 1952
Family †Jilinoconchidae Ma, 1989 (placement uncertain)
Family †Nakamuranaiadidae Guo, 1981 (syn: Sinonaiinae, Nippononaiidae)
Family †Plicatounionidae Chen, 1988
Family †Pseudohyriidae Kobayashi, 1968
Family †Sainschandiidae Kolesnikov, 1977
Superfamily Unionoidea Rafinesque, 1820
Genus †Protopleurobema Delvene & Araujo, 2009
Genus †Triaslacus Bogan & Weaver, 2012 (possibly an unionid)
Family Unionidae Rafinesque, 1820 (Fewer than 700 species)
Family †Liaoningiidae Yu & Dong, 1993 (placement uncertain)
Family Margaritiferidae Henderson, 1929 (Presumably fewer than 10 species) (syn: Margaritaninae, Cumberlandiinae, Promargaritiferidae)
Family †Monginaiidae Delvene, Munt, Royo-Torres & Cobos, 2022
Family †Sancticarolitidae Simone & Mezzalira, 1997
Suborder †Silesunionina Skawina & Dzik, 2011 (paraphyletic; an intermediate group between Unionida and Trigoniida)
Superfamily †Silesunionoidea Skawina & Dzik, 2011
Genus †Cratonaia Silva et al., 2019
Family ?†Anthracosiidae Amalitzky, 1898
Family †Trigonodidae Modell, 1942
Family †Silesunionidae Skawina & Dzik, 2011
Family †Unionellidae Skawina & Dzik, 2011
| Biology and health sciences | Bivalvia | Animals |
658074 | https://en.wikipedia.org/wiki/Observational%20cosmology | Observational cosmology | Observational cosmology is the study of the structure, the evolution and the origin of the universe through observation, using instruments such as telescopes and cosmic ray detectors.
Early observations
The science of physical cosmology as it is practiced today had its subject material defined in the years following the Shapley-Curtis debate when it was determined that the universe had a larger scale than the Milky Way galaxy. This was precipitated by observations that established the size and the dynamics of the cosmos that could be explained by Albert Einstein's General Theory of Relativity. In its infancy, cosmology was a speculative science based on a very limited number of observations and characterized by a dispute between steady state theorists and promoters of Big Bang cosmology. It was not until the 1990s and beyond that the astronomical observations would be able to eliminate competing theories and drive the science to the "Golden Age of Cosmology" which was heralded by David Schramm at a National Academy of Sciences colloquium in 1992.
Hubble's law and the cosmic distance ladder
Distance measurements in astronomy have historically been, and continue to be, confounded by considerable measurement uncertainty. In particular, while stellar parallax can be used to measure the distance to nearby stars, the observational limits imposed by the difficulty in measuring the minuscule parallaxes associated with objects beyond our galaxy meant that astronomers had to look for alternative ways to measure cosmic distances. To this end, a standard candle measurement for Cepheid variables was discovered by Henrietta Swan Leavitt in 1908, which would provide Edwin Hubble with the rung on the cosmic distance ladder he needed to determine the distance to spiral nebulae. Hubble used the 100-inch Hooker Telescope at Mount Wilson Observatory to identify individual stars in those galaxies, and determined the distance to the galaxies by isolating individual Cepheids. This firmly established the spiral nebulae as objects well outside the Milky Way galaxy. Determining the distance to "island universes", as they were dubbed in the popular media, established the scale of the universe and settled the Shapley-Curtis debate once and for all.
In 1927, by combining various measurements, including Hubble's distance measurements and Vesto Slipher's determinations of redshifts for these objects, Georges Lemaître was the first to estimate a constant of proportionality between galaxies' distances and what was termed their "recessional velocities", finding a value of about 600 km/s/Mpc. He showed that this was theoretically expected in a universe model based on general relativity. Two years later, Hubble showed that the relation between the distances and velocities was a positive correlation and had a slope of about 500 km/s/Mpc. This correlation would come to be known as Hubble's law and would serve as the observational foundation for the expanding universe theories on which cosmology is still based. The publication of the observations by Slipher, Wirtz, Hubble and their colleagues and the acceptance by the theorists of their theoretical implications in light of Einstein's General theory of relativity is considered the beginning of the modern science of cosmology.
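Hubble's law is a simple proportionality, v = H0 × d, so the historical slope values quoted above translate directly into recessional velocities at a given distance. The minimal sketch below also includes a modern value of roughly 70 km/s/Mpc for comparison; that figure is an added reference point, not part of the text above.

```python
def recession_velocity(distance_mpc, hubble_constant):
    """Hubble's law: velocity (km/s) = H0 (km/s/Mpc) x distance (Mpc)."""
    return hubble_constant * distance_mpc

# Slope values quoted above, plus an approximate modern value for comparison.
for label, h0 in [("Lemaître 1927", 600), ("Hubble 1929", 500), ("modern, approx.", 70)]:
    print(label, recession_velocity(10, h0), "km/s at 10 Mpc")
```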
Nuclide abundances
Determination of the cosmic abundance of elements has a history dating back to early spectroscopic measurements of light from astronomical objects and the identification of emission and absorption lines which corresponded to particular electronic transitions in chemical elements identified on Earth. For example, the element helium was first identified through its spectroscopic signature in the Sun before it was isolated as a gas on Earth.
Relative abundances were computed by comparing spectroscopic observations with measurements of the elemental composition of meteorites.
Detection of the cosmic microwave background
A cosmic microwave background was predicted in 1948 by George Gamow and Ralph Alpher, and by Alpher and Robert Herman, as a consequence of the hot Big Bang model. Moreover, Alpher and Herman were able to estimate its temperature, but their results were not widely discussed in the community. Their prediction was rediscovered by Robert Dicke and Yakov Zel'dovich in the early 1960s, with the first published recognition of the CMB radiation as a detectable phenomenon appearing in a brief paper by the Soviet astrophysicists A. G. Doroshkevich and Igor Novikov in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1965, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. Their instrument had an excess 3.5 K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke famously quipped: "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery.
Modern observations
Today, observational cosmology continues to test the predictions of theoretical cosmology and has led to the refinement of cosmological models. For example, the observational evidence for dark matter has heavily influenced theoretical modeling of structure and galaxy formation. While trying to calibrate the Hubble diagram with accurate supernova standard candles, astronomers obtained observational evidence for dark energy in the late 1990s. These observations have been incorporated into a six-parameter framework known as the Lambda-CDM model, which explains the evolution of the universe in terms of its constituent material. This model has subsequently been verified by detailed observations of the cosmic microwave background, especially through the WMAP experiment.
Included here are the modern observational efforts that have directly influenced cosmology.
Redshift surveys
With the advent of automated telescopes and improvements in spectroscopes, a number of collaborations have been made to map the universe in redshift space. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure properties of the large-scale structure of the universe. The Great Wall, a vast supercluster of galaxies over 500 million light-years wide, provides a dramatic example of a large-scale structure that redshift surveys can detect.
The first redshift survey was the CfA Redshift Survey, started in 1977 with the initial data collection completed in 1982. More recently, the 2dF Galaxy Redshift Survey determined the large-scale structure of one section of the Universe, measuring z-values for over 220,000 galaxies; data collection was completed in 2002, and the final data set was released 30 June 2003. (In addition to mapping large-scale patterns of galaxies, 2dF established an upper limit on neutrino mass.) Another notable investigation, the Sloan Digital Sky Survey (SDSS), is ongoing and aims to obtain measurements on around 100 million objects. SDSS has recorded redshifts as high as 0.4 for galaxies, and has been involved in the detection of quasars beyond z = 6. The DEEP2 Redshift Survey uses the Keck telescopes with the new "DEIMOS" spectrograph; a follow-up to the pilot program DEEP1, DEEP2 is designed to measure faint galaxies with redshifts 0.7 and above, and it is therefore planned to provide a complement to SDSS and 2dF.
Cosmic microwave background experiments
Telescope observations
Radio
The brightest sources of low-frequency radio emission (between 10 MHz and 100 GHz) are radio galaxies, which can be observed out to extremely high redshifts. These are a subset of the active galaxies that have extended features known as lobes and jets, which extend away from the galactic nucleus to distances on the order of megaparsecs. Because radio galaxies are so bright, astronomers have used them to probe extreme distances and early times in the evolution of the universe.
Infrared
Far-infrared observations, including submillimeter astronomy, have revealed a number of sources at cosmological distances. With the exception of a few atmospheric windows, most infrared light is blocked by the atmosphere, so observations generally take place from balloon-borne or space-based instruments. Current observational experiments in the infrared include NICMOS, the Cosmic Origins Spectrograph, the Spitzer Space Telescope, the Keck Interferometer, the Stratospheric Observatory For Infrared Astronomy, and the Herschel Space Observatory. The next large space telescope planned by NASA, the James Webb Space Telescope, will also observe in the infrared.
An additional infrared survey, the Two-Micron All Sky Survey, has also been very useful in revealing the distribution of galaxies, similar to other optical surveys described below.
Optical (visible to human eyes)
Visible light is still the primary means by which astronomy is carried out, and in the context of cosmology this means observing distant galaxies and galaxy clusters in order to learn about the large-scale structure of the Universe as well as about galaxy evolution. Redshift surveys have been a common means by which this has been accomplished, some of the most famous being the 2dF Galaxy Redshift Survey, the Sloan Digital Sky Survey, and the upcoming Large Synoptic Survey Telescope. These optical observations generally use either photometry or spectroscopy to measure the redshift of a galaxy and then, via Hubble's law, determine its distance, modulo redshift distortions due to peculiar velocities. Additionally, the position of the galaxies as seen on the sky in celestial coordinates can be used to gain information about the other two spatial dimensions.
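As a rough illustration of the distance step just described (not taken from the article), the low-redshift approximation to Hubble's law, d ≈ cz/H0, can be applied directly to a measured redshift. The Hubble-constant value below is an assumed round number for illustration, and the approximation ignores peculiar velocities and holds only for z much less than 1.

```python
# Sketch: approximate distance from redshift via Hubble's law, valid only for z << 1.
C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed Hubble constant in km/s/Mpc (illustrative value)

def hubble_distance_mpc(z: float) -> float:
    """Approximate distance in megaparsecs for a small redshift z, d = c*z/H0."""
    return C_KM_S * z / H0

for z in (0.01, 0.05, 0.1):
    print(f"z = {z:5.2f}  ->  d ~ {hubble_distance_mpc(z):8.1f} Mpc")
```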
Very deep observations (that is, observations sensitive to dim sources) are also useful tools in cosmology. The Hubble Deep Field, Hubble Ultra Deep Field, Hubble Extreme Deep Field, and Hubble Deep Field South are all examples of this.
Ultraviolet
See Ultraviolet astronomy.
X-rays
See X-ray astronomy.
Gamma-rays
See Gamma-ray astronomy.
Cosmic ray observations
See Cosmic-ray observatory.
Future observations
Cosmic neutrinos
It is a prediction of the Big Bang model that the universe is filled with a neutrino background radiation, analogous to the cosmic microwave background radiation. The microwave background is a relic from when the universe was about 380,000 years old, but the neutrino background is a relic from when the universe was about two seconds old.
If this neutrino radiation could be observed, it would be a window into very early stages of the universe. Unfortunately, these neutrinos would now be very cold, and so they are effectively impossible to observe directly.
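To make "very cold" concrete, the standard relation between the relic neutrino and photon temperatures (a textbook result, not stated in the text above) gives roughly

```latex
\[
T_\nu \;=\; \left(\frac{4}{11}\right)^{1/3} T_\gamma
\;\approx\; 0.714 \times 2.725\ \mathrm{K} \;\approx\; 1.95\ \mathrm{K},
\]
```

so the cosmic neutrino background today would be even colder than the 2.7 K microwave background, which is part of why direct detection is so difficult.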
Gravitational waves
| Physical sciences | Physical cosmology | Astronomy |
658084 | https://en.wikipedia.org/wiki/Magnifying%20glass | Magnifying glass | A magnifying glass is a convex lens that is used to produce a magnified image of an object. The lens is usually mounted in a frame with a handle. Beyond its primary function of magnification, this simple yet ingenious tool serves a variety of purposes. It can be employed to focus sunlight, harnessing the Sun's rays to create a concentrated hot spot at the lens's focus, which is often used for starting fires.
In another innovative form, the magnifying glass can manifest as a sheet magnifier, employing numerous slender, concentric, ring-shaped lenses. These are collectively known as a Fresnel lens, which, despite being significantly thinner, operates effectively as a single lens. This particular design finds its utility in applications such as screen magnifiers for TVs, offering a lightweight and efficient solution for enlarging visuals.
The cultural impact of the magnifying glass extends far into the realms of literature and pop culture, symbolizing the pursuit of truth and the uncovering of secrets. It is famously associated with the investigative work of fictional detectives, with Sherlock Holmes being the most iconic figure to wield it, cementing its status as an emblem of detective fiction. Through its various forms and functions, the magnifying glass remains a tool of both practical utility and significant symbolic value.
History
"The evidence indicates that the use of lenses was widespread throughout the Middle East and the Mediterranean basin over several millennia". Archaeological findings from the 1980s in Crete's Idaean Cave unearthed rock crystal lenses dating back to the Archaic Greek period, showcasing exceptional optical quality. These discoveries suggest that the use of lenses for magnification and possibly for starting fires was widespread in the Mediterranean and Middle East, indicating an advanced understanding of optics in antiquity. The earliest explicit written evidence of a magnifying device is a joke in Aristophanes's The Clouds from 424 BC, where magnifying lenses to ignite tinder were sold in a pharmacy, and Pliny the Elder's "lens", a glass globe filled with water, used to cauterize wounds. (Seneca wrote that it could be used to read letters "no matter how small or dim".) A convex lens used for forming a magnified image was described in the Book of Optics by Ibn al-Haytham in 1021. After the book was translated during the Latin translations of the 12th century, Roger Bacon described the properties of a magnifying glass in 13th-century England. This was followed by the development of eyeglasses in 13th-century Italy. Building on this foundation, in the late 1500s, two Dutch spectacle makers Jacob Metius and Zacharias Janssen crafted the compound microscope by assembling several magnifying lenses in a tube, marking a significant advancement in optical instruments. Not long after, Hans Lipperhey introduced the telescope in 1608 and Galileo Galilei improving on the device in 1609, employing the magnifying lens in an innovative manner, further extending the application of optical technologies developed through the ages.
Magnification
The magnification of a magnifying glass depends upon where it is placed between the user's eye and the object being viewed, and the total distance between them. The magnifying power is equivalent to angular magnification (this should not be confused with optical power, which is a different quantity). The magnifying power is the ratio of the sizes of the images formed on the user's retina with and without the lens. For the "without" case, it is typically assumed that the user would bring the object as close to one eye as possible without it becoming blurry. This point, known as the near point of accommodation, varies with age. In a young child, it can be as close as 5 cm, while, in an elderly person it may be as far as one or two metres. Magnifiers are typically characterized using a "standard" value of 0.25 m.
A magnifying glass operates as the simplest form of optical instrument. It is essentially a hand-held lens that converges light to produce an enlarged, upright image that appears to lie where the light does not actually converge, known as a 'virtual' image. To view an object in greater detail, the object is positioned between the lens and its focal point, and the optimal observation occurs when the image is at the closest distance at which the eye can focus comfortably. The lens's magnification is the ratio of the image's apparent height to the object's actual height, which equals the ratio of the image-to-lens distance to the object-to-lens distance. Moving the object nearer to the lens amplifies this effect, increasing magnification.
The highest magnifying power is obtained by putting the lens very close to one eye, and moving the eye and the lens together to obtain the best focus. The object will then typically also be close to the lens. The magnifying power obtained in this condition is MP0 = (0.25 m)Φ + 1, where Φ is the optical power in dioptres, and the factor of 0.25 m represents the assumed near point (¼ m from the eye). This value of the magnifying power is the one normally used to characterize magnifiers. It is typically denoted "m×", where m = MP0. This is sometimes called the total power of the magnifier (again, not to be confused with optical power).
However, magnifiers are not always used as described above because it is more comfortable to put the magnifier close to the object (one focal length away). The eye can then be a larger distance away, and a good image can be obtained very easily; the focus is not very sensitive to the eye's exact position. The magnifying power in this case is roughly MP = (0.25 m)Φ.
A typical magnifying glass might have a focal length of 25 cm, corresponding to an optical power of 4 dioptres. Such a magnifier would be sold as a "2×" magnifier. In actual use, an observer with "typical" eyes would obtain a magnifying power between 1 and 2, depending on where the lens is held.
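A minimal sketch of the two formulas above, assuming the conventional 0.25 m near point (the function names are invented for illustration):

```python
# Magnifying power of a thin lens, per the formulas in the text:
#   MP0 = 0.25*Phi + 1  (lens held close to the eye)
#   MP  = 0.25*Phi      (lens held about one focal length from the object)
def optical_power(focal_length_m: float) -> float:
    """Optical power Phi in dioptres: Phi = 1 / f."""
    return 1.0 / focal_length_m

def magnifying_power(focal_length_m: float, eye_at_lens: bool = True) -> float:
    phi = optical_power(focal_length_m)
    return 0.25 * phi + 1.0 if eye_at_lens else 0.25 * phi

f = 0.25  # the 25 cm focal length example from the text
print(optical_power(f))                        # 4.0 dioptres
print(magnifying_power(f, eye_at_lens=True))   # 2.0 -> sold as a "2x" magnifier
print(magnifying_power(f, eye_at_lens=False))  # 1.0 -> lower bound in typical use
```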
Practical uses
A magnifying glass can serve as a fire-starting tool in survival situations. Any transparent lens with significant magnifying ability, such as a standard magnifying glass or a jeweler's loupe, can concentrate sunlight to ignite tinder. The technique involves positioning the lens to focus a small, intense spot of light onto the tinder, awaiting ignition with patience. The advantage of this method is the simplicity of the lens and the minimal effort required. However, its effectiveness is contingent upon clear, strong sunlight, which may be inconsistent depending on geographic location and time of year.
Beyond survival uses, magnifying glasses are invaluable tools for jewelers and hobbyists. Jewelers rely on them to scrutinize the quality and authenticity of precious gems, ensuring accurate evaluations. Hobbyists, from those engaged in sewing and needlework to stamp collectors, depend on magnifying glasses for detailed work, enhancing both precision and enjoyment. This versatility underlines the magnifying glass's enduring utility across a spectrum of activities, from professional assessments to leisure pursuits.
Advanced digital magnifiers and apps have emerged as modern alternatives to traditional magnifying glasses, offering features such as variable magnification levels, high-contrast modes, and text-to-speech for visually impaired users. These tools not only magnify text and objects but also enhance readability and accessibility, making them invaluable for daily living and educational purposes.
Alternatives
Magnifying glasses typically have low magnifying power: 2×–6×, with the lower-power types being much more common. At higher magnifications, the image quality of a simple magnifying glass becomes poor due to optical aberrations, particularly spherical aberration. When more magnification or a better image is required, other types of hand magnifier are typically used. A Coddington magnifier provides higher magnification with improved image quality. Even better images can be obtained with a multiple-lens magnifier, such as a Hastings triplet. High power magnifiers are sometimes mounted in a cylindrical or conical holder with no handle, often designed to be worn on the head; this is called a loupe.
Such magnifiers can reach up to about 30×, and at these magnifications the aperture of the magnifier becomes very small and it must be placed very close to both the object and the eye. For more convenient use or for magnification beyond about 30×, a microscope is necessary.
Fresnel lenses are used as magnifiers, for example for reading printed text.
Use as a symbol
The magnifying glass icon (🔍), represented by U+1F50D in Unicode, has evolved into a universal symbol for searching and zooming functions in digital interfaces. Originating from its practical use for detailed examination and discovery, it has been adopted by modern computer software and websites to denote tools for users to find information or closely inspect content. The right-pointing version, U+1F50E (🔎), continues this theme, often used to initiate searches. The integration of these icons into user interface design reflects the intuitive connection between the physical act of magnifying to see more clearly and the metaphorical act of searching for information in the digital space.
Beyond its digital symbolization for search functions, the magnifying glass also holds a place in educational symbolism, often representing curiosity, exploration, and the quest for knowledge. Educational institutions and programs frequently use the magnifying glass in logos and materials to emphasize the importance of inquiry and discovery in learning.
| Technology | Optical instruments | null |
658095 | https://en.wikipedia.org/wiki/Olm | Olm | The olm or proteus (Proteus anguinus) is an aquatic salamander which is the only species in the genus Proteus of the family Proteidae and the only exclusively cave-dwelling chordate species found in Europe; the family's other extant genus is Necturus. In contrast to most amphibians, it is entirely aquatic, eating, sleeping, and breeding underwater. Living in caves found in the Dinaric Alps, it is endemic to the waters that flow underground through the extensive limestone bedrock of the karst of Central and Southeastern Europe in the basin of the Soča River near Trieste, Italy, southern Slovenia, southwestern Croatia, and Bosnia and Herzegovina. Introduced populations are found near Vicenza, Italy, and Kranj, Slovenia. It was first mentioned in 1689 by the local naturalist Valvasor in his Glory of the Duchy of Carniola, who reported that, after heavy rains, the olms were washed up from the underground waters and were believed by local people to be a cave dragon's offspring.
This cave salamander is most notable for its adaptations to a life of complete darkness in its underground habitat. The olm's eyes are undeveloped, leaving it blind, while its other senses, particularly those of smell and hearing, are acutely developed. Most populations also lack any pigmentation in their skin. The olm has three toes on its forelimbs, but only two toes on its hind feet. It exhibits neoteny, retaining larval characteristics like external gills into adulthood, like some American amphibians, the axolotl and the mudpuppies (Necturus).
Etymology
The word olm is a German loanword that was incorporated into English in the late 19th century. The origin of the German original, Olm or Grottenolm ('cave olm'), is unclear. It may be a variant of the word Molch, 'salamander'.
Common names
It is also called the "human fish" by locals because of its fleshy skin color (the name is a literal translation from several local languages), as well as "cave salamander" or "white salamander". In Slovenia, it is called močeril (from *močerъ 'earthworm, damp creepy-crawly'; moča 'dampness').
Description
External appearance
The olm's body is snakelike, 20–30 cm long, with some specimens reaching up to 40 cm, which makes them some of the largest cave-dwelling animals in the world. The average length is between 23 and 25 cm. Females grow larger than males, but otherwise the primary external difference between the sexes is in the cloaca region (shape and size) when breeding. The trunk is cylindrical, uniformly thick, and segmented with regularly spaced furrows at the myomere borders. The tail is relatively short, laterally flattened, and surrounded by a thin fin. The limbs are small and thin, with a reduced number of digits compared to other amphibians: the front legs have three digits instead of the normal four, and the rear have two digits instead of five. Its body is covered by a thin layer of skin, which contains very little of the pigment riboflavin, making it yellowish-white or pink in color.
The white skin color of the olm retains the ability to produce melanin, and will gradually turn dark when exposed to light; in some cases the larvae are also colored. One population, the black olm, is always pigmented and dark brownish to blackish when adult. The olm's pear-shaped head ends with a short, dorsoventrally flattened snout. The mouth opening is small, with tiny teeth forming a sieve to keep larger particles inside the mouth. The nostrils are so small as to be imperceptible, but are placed somewhat laterally near the end of the snout. The regressed eyes are covered by a layer of skin. The olm breathes with external gills that form two branched tufts at the back of the head. They are red in color because the oxygen-rich blood shows through the non-pigmented skin. The olm also has rudimentary lungs, but their role in respiration is only accessory, except during hypoxic conditions.
Sensory organs
Cave-dwelling animals have been prompted, among other adaptations, to develop and improve non-visual sensory systems in order to orient in and adapt to permanently dark habitats. The olm's sensory system is also adapted to life in the subterranean aquatic environment. Unable to use vision for orientation, the olm compensates with other senses, which are better developed than in amphibians living on the surface. It retains larval proportions, like a long, slender body and a large, flattened head, and is thus able to carry a larger number of sensory receptors.
Photoreceptors
Although blind, the olm swims away from light. The eyes are regressed, but retain sensitivity. They lie deep below the dermis of the skin and are rarely visible except in some younger adults. Larvae have normal eyes, but development soon stops and they start regressing, finally atrophying after four months of development. The pineal body also has photoreceptive cells which, though regressed, retain visual pigment like the photoreceptive cells of the regressed eye. The pineal gland in Proteus probably possesses some control over the physiological processes. Behavioral experiments revealed that the skin itself is also sensitive to light. Photosensitivity of the integument is due to the pigment melanopsin inside specialized cells called melanophores. Preliminary immunocytochemical analyses support the existence of photosensitive pigment also in the animal's integument.
Chemoreceptors
The olm is capable of sensing very low concentrations of organic compounds in the water. They are better at sensing both the quantity and quality of prey by smell than related amphibians. The nasal epithelium, located on the inner surface of the nasal cavity and in the Jacobson's organ, is thicker than in other amphibians. The taste buds are in the mucous epithelium of the mouth, most of them on the upper side of the tongue and on the entrance to the gill cavities. Those in the oral cavity are used for tasting food, while those near the gills probably sense chemicals in the surrounding water.
Mechano- and electroreceptors
The sensory epithelia of the inner ear are very specifically differentiated, enabling the olm to receive sound waves in the water, as well as vibrations from the ground. The complex functional-morphological orientation of the sensory cells enables the animal to register sound sources. Because the animal remains neotenic throughout its long life span, it is only occasionally exposed to airborne sound, although hearing in air is probably possible for Proteus, as it is for most salamanders. In caves, where vision is unavailable, it would therefore be of adaptive value to exploit underwater hearing for recognizing particular sounds and localizing prey or other sound sources, that is, for acoustical orientation in general.
Behavioural (ethological) tests have shown that the olm can detect underwater sound waves at frequencies from about 10 Hz to more than 12,000 Hz, with the greatest sensitivity between 1,500 and 2,000 Hz. The lateral line supplements inner-ear sensitivity by registering low-frequency nearby water displacements.
A new type of electroreception sensory organ has been analyzed on the head of Proteus, utilizing light and electron microscopy. These new organs have been described as ampullary organs.
Like some other lower vertebrates, the olm has the ability to register weak electric fields. Some behavioral experiments suggest that the olm may be able to use Earth's magnetic field to orient itself. In 2002, Proteus anguinus was found to align itself with natural and artificially modified magnetic fields.
Ecology and life history
The olm lives in well-oxygenated underground waters with a typical, very stable temperature, only infrequently becoming warmer. There have also been observations in northeastern Italy of olms swimming to the surface in springs outside the caves, even in daylight, where they occasionally feed on earthworms. The black olm may occur in surface waters that are somewhat warmer.
The olm swims by eel-like twisting of its body, assisted only slightly by its poorly developed legs. It is a predatory animal, feeding on small crustaceans (for example, Troglocaris shrimp, Niphargus, Asellus, and Synurella amphipods, and Oniscus asellus), snails (for example, Belgrandiella), and occasionally insects and insect larvae (for example, Trichoptera, Ephemeroptera, Plecoptera, and Diptera). It does not chew its food, instead swallowing it whole. The olm is resistant to long-term starvation, an adaptation to its underground habitat. It can consume large amounts of food at once, and store nutrients as large deposits of lipids and glycogen in the liver. When food is scarce, it reduces its activity and metabolic rate, and can also reabsorb its own tissues in severe cases. Controlled experiments have shown that an olm can survive up to 10 years without food.
Olms are gregarious, and usually aggregate either under stones or in fissures. Sexually active males are an exception, establishing and defending territories where they attract females. The scarcity of food makes fighting energetically costly, so encounters between males usually only involve display. This is a behavioral adaptation to life underground.
Breeding and longevity
Reproduction has only been observed in captivity so far. Sexually mature males have swollen cloacas, brighter skin color, two lines at the side of the tail, and slightly curled fins. No such changes have been observed in the females. The male can start courtship even without the presence of a female. He chases other males away from the chosen area, and may then secrete a female-attracting pheromone. When the female approaches, he starts to circle around her and fan her with his tail. Then he starts to touch the female's body with his snout, and the female touches his cloaca with her snout. At that point, he starts to move forward with a twitching motion, and the female follows. He then deposits the spermatophore, and the animals keep moving forward until the female hits it with her cloaca, after which she stops and stands still. The spermatophore sticks to her and the sperm cells swim inside her cloaca, where they attempt to fertilize her eggs. The courtship ritual can be repeated several times over a couple of hours.
The female lays up to 70 eggs and places them between rocks, where they remain under her protection. The average clutch is 35 eggs, and the adult female typically breeds every 12.5 years. After hatching, the larvae live on yolk stored in the cells of the digestive tract for a month.
Embryonic development (the time spent in the egg before hatching) takes around 140 days at typical cave temperatures; it is somewhat slower in colder water and faster in warmer water, where it can take as little as 86 days. After hatching, it takes another 14 years to reach sexual maturity at typical cave temperatures. The larvae gain their adult appearance after nearly four months, with the duration of development strongly correlating with water temperature. Unconfirmed historical observations of viviparity exist, but it has been shown that the females possess a gland that produces the egg casing, similar to those of fish and egg-laying amphibians. Paul Kammerer reported that female olms gave birth to live young in colder water and laid eggs at higher temperatures, but rigorous observations have not confirmed this. The olm appears to be exclusively oviparous.
Development of the olm and other troglobite amphibians is characterized by heterochrony – the animal does not undergo metamorphosis and instead retains larval features. The form of heterochrony in the olm is neoteny – delayed somatic maturity with precocious reproductive maturity, i.e. reproductive maturity is reached while retaining the larval external morphology. In other amphibians, the metamorphosis is regulated by the hormone thyroxine, secreted by the thyroid gland. The thyroid is normally developed and functioning in the olm, so the lack of metamorphosis is due to the unresponsiveness of key tissues to thyroxine.
Longevity is estimated at up to 58 years. A study published in Biology Letters estimated that they have a maximum lifespan of over 100 years and that the lifespan of an average adult is around 68.5 years. When compared to the longevity and body mass of other amphibians, olms are outliers, living longer than would be predicted from their size.
Taxonomic history
Olms from different cave systems differ substantially in body measurements, color, and some microscopic characteristics. Earlier researchers used these differences to support the division into five species, while modern herpetologists understand that external morphology is not reliable for amphibian systematics and can be extremely variable, depending on nourishment, illness, and other factors; even varying among individuals in a single population. Proteus anguinus is now considered a single species. The length of the head is the most obvious difference between the various populations – individuals from Stična, Slovenia, have shorter heads on average than those from Tržič, Slovenia, and the Istrian peninsula, for example.
Black olm
The black olm (Proteus anguinus parkelj Sket & Arntzen, 1994) is the only recognized subspecies of the olm other than the nominate subspecies. It is endemic to the underground waters near Črnomelj, Slovenia, a very small area. It was first found in 1986 by members of the Slovenian Karst Research Institute, who were exploring the water from the Dobličica karst spring in the White Carniola region.
It has several features separating it from the nominotypical subspecies, Proteus a. anguinus, most obviously its darkly pigmented skin.
Proteus bavaricus
A potential species, Proteus bavaricus, is speculated to be closely related to P. anguinus. The species was described from a single bone by George Brunner, and the holotype is housed in his private collection. It was found in Bavaria's Devil's Cave, in the Pleistocene layer. In his 1998 book, J. Alan Hollman described the species as a "problematic" taxon, saying that Brunner's drawing of the bone does not adequately show the differences between P. bavaricus and P. anguinus.
Research history
The first written mention of the olm is in Johann Weikhard von Valvasor's The Glory of the Duchy of Carniola (1689), where it is described as a baby dragon. Heavy rains in Slovenia would wash olms up from their subterranean habitat, giving rise to the folklore belief that great dragons lived beneath the Earth's crust and that the olms were the undeveloped offspring of these mythical beasts. In his book, Valvasor compiled the local Slovenian folk stories, pieced together the rich mythology of the creature, and documented observations of the olm as "Barely a span long, akin to a lizard, in short, a worm and vermin of which there are many hereabouts".
The first researcher to retrieve a live olm was a physician and researcher from Idrija, Giovanni Antonio Scopoli, who sent dead specimens and drawings to colleagues and collectors. Josephus Nicolaus Laurenti, though, was the first to briefly describe the olm in 1768 and give it the scientific name Proteus anguinus. It was not until the end of the century that Carl Franz Anton Ritter von Schreibers from the Naturhistorisches Museum of Vienna started to look into this animal's anatomy. The specimens were sent to him by Sigmund Zois. Schreibers presented his findings in 1801 to The Royal Society in London, and later also in Paris. Soon, the olm started to gain wide recognition and attract significant attention, resulting in thousands of animals being sent to researchers and collectors worldwide. A Dr Edwards was quoted in a book of 1839 as believing that "...the Proteus Anguinis is the first stage of an animal prevented from growing to perfection by inhabiting the subterraneous waters of Carniola."
In 1880 Marie von Chauvin began the first long-term study of olms in captivity. She learned that they detected prey's motion, panicked when a heavy object was dropped near their habitat, and developed color if exposed to weak light for a few hours a day, but could not cause them to change to a land-dwelling adult form, as she and others had done with axolotl.
The basis of functional morphological investigations of the olm in Slovenia was established in the 1980s. More than twenty years later, the research group for functional morphological studies of vertebrates in the Department of Biology (Biotechnical Faculty, University of Ljubljana) is one of the leading groups studying the olm, under the guidance of Boris Bulog. There are also several cave laboratories in Europe into which olms have been introduced and in which they are being studied: Moulis, Ariège (France), the Choranche cave (France), Han-sur-Lesse (Belgium), and Aggtelek (Hungary). They were also introduced into the Hermannshöhle (Germany) and Oliero (Italy) caves, where they still live today. Additionally, there is evidence that a small number of olms were introduced to the United Kingdom in the 1940s, although it is highly likely that the animals perished shortly after being released.
The olm was used by Charles Darwin in his seminal work On the Origin of Species as an example of the reduction of structures through disuse.
An olm (Proteus) genome project is currently underway at the University of Ljubljana and BGI. With an estimated genome size roughly 15 times that of the human genome, this would likely be the largest animal genome sequenced so far.
Conservation
The olm is extremely vulnerable to changes in its environment, due to its adaptation to the specific conditions in caves. Water resources in the karst are extremely sensitive to all kinds of pollution. The contamination of the karst underground waters is due to the large number of waste disposal sites leached by rainwater, as well as to the accidental overflow of various liquids. The reflection of such pollution in the karst underground waters depends on the type and quantity of pollutants, and on the rock structure through which the waters penetrate. Self-purification processes in the underground waters are not completely understood, but they are quite different from those in surface waters.
Among the most serious chemical pollutants are chlorinated hydrocarbon pesticides, fertilizers, polychlorinated biphenyls (PCBs), which are or were used in a variety of industrial processes and in the manufacture of many kinds of materials; and metals such as mercury, lead, cadmium, and arsenic. All of these substances persist in the environment, being slowly, if at all, degraded by natural processes. In addition, all are toxic to life if they accumulate in any appreciable quantity. The olm is nevertheless noted for its capability of surviving higher concentrations of accumulated PCBs than related aquatic organisms.
The olm was included in annexes II and IV of the 1992 EU Habitats Directive (92/43/EEC). The list of species in annex II, combined with the habitats listed in annex I, is used by individual countries to designate protected areas known as 'Special Areas of Conservation'. These areas, combined with others created under the older Birds Directive, were to form the Natura 2000 network. Annex IV additionally lists "animal and plant species of community interest in need of strict protection", although this has few legal ramifications. Areas inhabited by the olm were eventually included in the Slovenian, Italian and Croatian parts of the Natura 2000 network.
The olm was first protected in Slovenia in 1922 along with all cave fauna, but the protection was not effective and a substantial black market came into existence. In 1982 it was placed on a list of rare and endangered species. This list also had the effect of prohibiting trade of the species. After joining the European Union in 2004, Slovenia had to establish mechanisms for protection of the species included in the EU Habitats Directive. The olm is included in a Slovenian Red list of endangered species, thus its capturing or killing is allowed only under specific circumstances determined by the local authorities (e.g. scientific study).
In Croatia, the olm is protected by legislation designed to protect amphibians – collecting is possible only for research purposes, by permission of the National Administration for Nature and Environment Protection. As of 2020, the species has been assessed as critically endangered in Croatia. As of 1999, the environmental laws in Bosnia and Herzegovina and Montenegro had not yet been clarified for this species.
In the 1980s, the IUCN claimed that some illegal collection of this species for the pet trade took place, but that the extent of this was unknown; this text has been copied into subsequent assessments, but the anecdotal claims are no longer considered indicative of a major threat. From the 1980s until the most recent assessment in 2022, the organisation has rated the conservation status for the IUCN Red List as 'vulnerable', because its natural distribution is fragmented over a number of cave systems rather than being continuous, and because of what it considers a decline in the extent and quality of its habitat, which it assumes means that the population has been decreasing for the last 40 years.
Zagreb Zoo in Croatia houses the olm. Historically, olms were kept in several zoos in Germany, as well as in Belgium, the Netherlands, Slovenia and the United Kingdom. At present they can only be experienced at Zagreb Zoo, Hermannshöhle in Germany and Vivarium Proteus (Proteus Vivarium) within Postojnska jama (Postojna Cave) in Slovenia. There are also captive breeding programs in places like France.
Cultural significance
The olm is a symbol of Slovenian natural heritage. The enthusiasm of scientists and the broader public about this inhabitant of Slovenian caves is still strong 300 years after its discovery. Postojna Cave is one of the birthplaces of biospeleology due to the olm and other rare cave inhabitants, such as the blind cave beetle. The image of the olm contributes significantly to the fame of Postojna Cave, which Slovenia successfully utilizes for the promotion of ecotourism in Postojna and other parts of Slovenian karst. Tours of Postojna Cave also include a tour around the speleobiological station – the Proteus vivarium, showing different aspects of the cave environment.
The olm was also depicted on one of the Slovenian tolar coins. It was also the namesake of Proteus, the oldest Slovenian popular science magazine, first published in 1933.
| Biology and health sciences | Salamanders and newts | Animals |
658141 | https://en.wikipedia.org/wiki/Peridotite | Peridotite | Peridotite is a dense, coarse-grained igneous rock consisting mostly of the silicate minerals olivine and pyroxene. Peridotite is ultramafic, as the rock contains less than 45% silica. It is high in magnesium (Mg2+), reflecting the high proportions of magnesium-rich olivine, with appreciable iron. Peridotite is derived from Earth's mantle, either as solid blocks and fragments, or as crystals that accumulated from magmas formed in the mantle, for example in layered igneous complexes. The compositions of peridotites from such layered igneous complexes vary widely, reflecting the relative proportions of pyroxenes, chromite, plagioclase, and amphibole.
Peridotite is the dominant rock of the upper part of Earth's mantle. The compositions of peridotite nodules found in certain basalts are of special interest along with diamond pipes (kimberlite), because they provide samples of Earth's mantle brought up from depths ranging from about 30 km to 200 km or more. Some of the nodules preserve isotope ratios of osmium and other elements that record processes that occurred when Earth was formed, and so they are of special interest to paleogeologists because they provide clues to the early composition of Earth's mantle and the complexities of the processes that occurred.
The word peridotite comes from the gemstone peridot, which consists of pale green olivine. Classic peridotite is bright green with some specks of black, although most hand samples tend to be darker green. Peridotitic outcrops typically range from earthy bright yellow to dark green; this is because olivine is easily weathered to iddingsite. While green and yellow are the most common colors, peridotitic rocks may exhibit a wide range of colors including blue, brown, and red.
Classification
Coarse-grained igneous rocks in which mafic minerals (minerals rich in magnesium and iron) make up over 90% of the volume of the rock are classified as ultramafic rocks. Such rocks typically contain less than 45% silica. Ultramafic rocks are further classified by their relative proportions of olivine, orthopyroxene, clinopyroxene, and hornblende, which are the most abundant families of mafic minerals in most ultramafic rocks. Peridotite is then defined as coarse-grained ultramafic rock in which olivine makes up 40% or more of the total volume of these four mineral families in the rock.
Peridotites are further classified as follows (a schematic classification sketch is given after this list):
Dunite: more than 90% olivine
Dunite is found as prominent veins in the peridotite layer of ophiolites, which are interpreted as slices of oceanic lithosphere (crust and upper mantle) thrust onto continents. Dunite also occurs as a cumulate in layered intrusions, where olivine crystallized out of a slowly cooling body of magma and accumulated on the floor of the magma body to form the lowest layer of the intrusion. Dunite almost always contains accessory chromite.
Kimberlite: formed in volcanic pipes and at least 35% olivine
Kimberlite is a highly brecciated variant of peridotite formed in volcanic pipes and is known for being the host rock to diamonds. Unlike other forms of peridotite, kimberlite is quite rare.
Pyroxene peridotite: From 40% to 90% olivine and less than 5% hornblende
Harzburgite: less than 5% clinopyroxene
Harzburgite makes up the bulk of the peridotite layer of ophiolites. It is interpreted as depleted mantle rock, from which basaltic magma has been extracted. It also forms as a cumulate in Type I layered intrusions, forming a layer just above the dunite layer. Harzburgite likely makes up most of the mantle lithosphere underneath continental cratons.
Wehrlite: less than 5% orthopyroxene
Wehrlite makes up part of the transition zone between the peridotite layer and overlying gabbro layer of ophiolites. In Type II layered intrusions, it takes the place of harzburgite as the layer just above the dunite layer.
Lherzolite: intermediate content of clinopyroxene and orthopyroxene
Lherzolite is thought to make up much of the upper mantle. It has almost exactly the composition of a mixture of three parts harzburgite and one part tholeiitic basalt (pyrolite) and is the likely source rock for basaltic magma. It is found as rare xenoliths in basalt, such as those of Kilbourne Hole in southern New Mexico, US, and at Oahu, Hawaii, US.
Hornblende peridotite: From 40% to 90% olivine and less than 5% pyroxene
Hornblende peridotite is found as rare xenoliths in andesites above subduction zones. They are direct evidence of alteration of mantle rock by fluids released by the subducting slab.
Pyroxene hornblende peridotite: Intermediate between pyroxene peridotite and hornblende peridotite
Pyroxene hornblende peridotite is found as rare xenoliths, such as those of Wilcza Góra in southwest Poland. Here it likely formed by alteration of mantle rock by carbonated hydrous silicic fluids associated with volcanism.
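The classification sketch referred to above: a schematic reading of the thresholds given in this section (olivine at least 40% of the four-mineral total, subtypes according to which of the remaining minerals is nearly absent). It is illustrative only and omits kimberlite, which is defined by its brecciated, pipe-filling occurrence rather than by modal mineralogy; the function name and exact boundary handling are assumptions.

```python
def classify_peridotite(olivine: float, opx: float, cpx: float, hornblende: float) -> str:
    """Classify an ultramafic rock from modal volumes of the four key mineral families."""
    total = olivine + opx + cpx + hornblende
    ol, opx, cpx, hbl = [100.0 * x / total for x in (olivine, opx, cpx, hornblende)]
    if ol < 40:
        return "not a peridotite (pyroxenite or hornblendite field)"
    if ol > 90:
        return "dunite"
    if hbl < 5:                       # pyroxene peridotites
        if cpx < 5:
            return "harzburgite"
        if opx < 5:
            return "wehrlite"
        return "lherzolite"
    if opx + cpx < 5:
        return "hornblende peridotite"
    return "pyroxene hornblende peridotite"

print(classify_peridotite(olivine=70, opx=20, cpx=8, hornblende=0))  # lherzolite
print(classify_peridotite(olivine=95, opx=3, cpx=2, hornblende=0))   # dunite
```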
Composition
Mantle peridotite is highly enriched in magnesium, with a typical magnesium number of 89. In other words, of the total content of iron plus magnesium, 89 mol% is magnesium. This is reflected in the composition of the mafic minerals making up the peridotite.
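A worked reading of the magnesium number (the formula is standard usage, not quoted from the text):

```latex
\[
\mathrm{Mg\#} \;=\; 100 \times \frac{n_{\mathrm{Mg}}}{n_{\mathrm{Mg}} + n_{\mathrm{Fe}}}\,,
\qquad
\mathrm{Mg\#} = 89 \;\Rightarrow\; \frac{n_{\mathrm{Mg}}}{n_{\mathrm{Fe}}} = \frac{89}{11} \approx 8.1\,,
\]
```

that is, mantle peridotite contains roughly eight magnesium atoms for every iron atom.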
Olivine is the essential mineral found in all peridotites. It is an iron-magnesium orthosilicate with the variable formula (Mg,Fe)2SiO4. The magnesium-rich olivine of peridotites is typically olive-green in color.
Pyroxenes are chain silicates with the variable formula XY(Si,Al)2O6, comprising a large group of different minerals. These are divided into orthopyroxenes (with an orthorhombic crystal structure) and clinopyroxenes (with a monoclinic crystal structure). This distinction is important in the classification of pyroxene peridotites, since clinopyroxene melts more easily than orthopyroxene or olivine. The most common orthopyroxene is enstatite, MgSiO3, in which iron substitutes for some of the magnesium. The most important clinopyroxene is diopside, CaMgSi2O6, again with some substitution of iron for magnesium (giving hedenbergite, CaFeSi2O6). Ultramafic rocks in which the fraction of pyroxenes exceeds 60% are classified as pyroxenites rather than peridotites. Pyroxenes are typically dark in color.
Hornblende is an amphibole, a group of minerals resembling pyroxenes but with a double-chain structure incorporating water. Hornblende itself has a highly variable composition, ranging from tschermakite to pargasite with many other variations in composition. It is present in peridotite mostly as a consequence of alteration by hydrous fluids.
Although peridotites are classified by their content of olivine, pyroxenes, and hornblende, a number of other mineral families are characteristically present in peridotites and may make up a significant fraction of their composition. For example, chromite is sometimes present in amounts of up to 50%. (A chromite content above 50% reclassifies the rock as a peridotitic chromitite.) Other common accessory minerals include spinel, garnet, biotite, and magnetite. A peridotite containing significant amounts of one of these minerals may have its classification refined accordingly; for example, if a lherzolite contains up to 5% spinel, it is a spinel-bearing lherzolite, while for amounts up to 50%, it would be classified as a spinel lherzolite. The accessory minerals can be useful for estimating the depth of formation of the peridotite. For example, the aluminium in lherzolite is present as plagioclase at depths shallower than about 20 km, as spinel between 20 km and 60 km, and as garnet below 60 km.
Distribution and location
Peridotite is the dominant rock of the Earth's mantle above a depth of about 400 km; below that depth, olivine is converted to the higher-pressure mineral wadsleyite.
Oceanic plates consist of up to about 100 km of peridotite covered by a thin crust. The crust, commonly about 6 km thick, consists of basalt, gabbro, and minor sediments. The peridotite below the ocean crust, "abyssal peridotite," is found on the walls of rifts in the deep sea floor. Oceanic plates are usually subducted back into the mantle in subduction zones. However, pieces can be emplaced into or overthrust on continental crust by a process called obduction, rather than carried down into the mantle. The emplacement may occur during orogenies, as during collisions of one continent with another or with an island arc. The pieces of oceanic plates emplaced within continental crust are referred to as ophiolites. Typical ophiolites consist mostly of peridotite plus associated rocks such as gabbro, pillow basalt, diabase sill-and-dike complexes, and red chert. Alpine peridotite or orogenic peridotite massif is an older term for an ophiolite emplaced in a mountain belt during a continent-continent plate collision.
Peridotites also occur as fragments (xenoliths) carried up by magmas from the mantle. Among the rocks that commonly include peridotite xenoliths are basalt and kimberlite. Although kimberlite is itself a variant of peridotite, it is also a brecciated volcanic material, which is why it is treated as a source of peridotite xenoliths. Peridotite xenoliths contain osmium and other elements whose stable isotope ratios provide clues on the formation and evolution of the Earth's mantle. Such xenoliths originate from depths of up to about 200 km or more.
The volcanic equivalent of peridotites are komatiites, which were mostly erupted early in the Earth's history and are rare in rocks younger than Archean in age.
Small pieces of peridotite have been found in lunar breccias.
The rocks of the peridotite family are uncommon at the surface and are highly unstable, because olivine reacts quickly with water at typical temperatures of the upper crust and at the Earth's surface. Many, if not most, surface outcrops have been at least partly altered to serpentinite, a process in which the pyroxenes and olivines are converted to green serpentine. This hydration reaction involves considerable increase in volume with concurrent deformation of the original textures. Serpentinites are mechanically weak and so flow readily within the earth. Distinctive plant communities grow in soils developed on serpentinite, because of the unusual composition of the underlying rock. One mineral in the serpentine group, chrysotile, is a type of asbestos.
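One commonly cited idealized serpentinization reaction, added here for illustration (the text above does not give a specific equation), converts the forsterite component of olivine to serpentine plus brucite:

```latex
\[
2\,\mathrm{Mg_2SiO_4} \;+\; 3\,\mathrm{H_2O} \;\longrightarrow\;
\mathrm{Mg_3Si_2O_5(OH)_4} \;+\; \mathrm{Mg(OH)_2}
\]
```

The hydrous products occupy more volume than the original olivine, consistent with the volume increase and deformation mentioned above.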
Color, morphology, and texture
Most peridotite is green in color due to its high olivine content. However, peridotites can range in color from greenish-gray to nearly black to pale yellowish-green. Peridotite weathers to form a distinctive brown crust in subaerial exposures and to a deep orange color in submarine exposures.
Peridotites can take on a massive form or may be layered on a variety of size scales. Layered peridotites may form the base layers of layered intrusions. These show cumulate textures, with a fabric of coarse (>5 mm) interlocking euhedral (well-formed) crystals in a groundmass of finer crystals formed from liquid magma trapped in the cumulate. Many show poikilitic texture, in which crystallization of this liquid has produced crystals that overgrow and enclose the original cumulus crystals (called chadacrysts).
Another texture is a well-annealed texture of equal sized anhedral crystals with straight grain boundaries intersecting at 120°. This may result when slow cooling allowed recrystallization to minimize surface energy. Cataclastic texture, showing irregular fractures and deformation twinning of olivine grains, is common in peridotites because of the deformation associated with their tectonic mode of emplacement.
Origin
Peridotites have two primary modes of origin: as mantle rocks formed during the accretion and differentiation of the Earth, or as cumulate rocks formed by precipitation of olivine ± pyroxenes from basaltic or ultramafic magmas. These magmas are ultimately derived from the upper mantle by partial melting of mantle peridotites.
Mantle peridotites are sampled as ophiolites in collisional mountain ranges, as xenoliths in basalt or kimberlite, or as abyssal peridotites (sampled from ocean floor). These rocks represent either fertile mantle (lherzolite) or partially depleted mantle (harzburgite, dunite). Alpine peridotites may be either of the ophiolite association and representing the uppermost mantle below ocean basins, or masses of subcontinental mantle emplaced along thrust faults in mountain belts.
Layered peridotites are igneous sediments and form by mechanical accumulation of dense olivine crystals. They form from mantle-derived magmas, such as those of basalt composition. Peridotites associated with Alaskan-type ultramafic complexes are cumulates that probably formed in the root zones of volcanoes. Cumulate peridotites are also formed in komatiite lava flows.
Associated rocks
Komatiites are high temperature partial melts of peridotite characterized by a high degree of partial melting deep below the surface.
Eclogite, a rock similar to basalt in composition, is composed primarily of omphacite (sodic clinopyroxene) and pyrope-rich garnet. Eclogite is associated with peridotite in some xenolith occurrences; it also occurs with peridotite in rocks metamorphosed at high pressures during processes related to subduction.
Economic geology
Peridotite may potentially be used in a low-cost, safe and permanent method of capturing and storing atmospheric CO2 as part of climate change-related greenhouse gas sequestration. Peridotite is known to react with CO2 to form solid carbonate minerals, akin to limestone or marble, and this process can be sped up a million times or more by simple drilling and hydraulic fracturing to allow injection of the CO2 into the subsurface peridotite formation.
Peridotite is named for the gemstone peridot, a glassy green gem originally mined on St. John's Island in the Red Sea and now mined on the San Carlos Apache Indian Reservation in Arizona.
Peridotite that has been hydrated at low temperatures is the protolith for serpentinite, which may include chrysotile asbestos (a form of serpentine) and talc.
Layered intrusions with cumulate peridotite are typically associated with sulfide or chromite ores. Sulfides associated with peridotites form nickel ores and platinoid metals; most of the platinum used in the world today is mined from the Bushveld Igneous Complex in South Africa and the Great Dyke of Zimbabwe. The chromite bands found in peridotites are the world's major source of chromium.
| Physical sciences | Igneous rocks | Earth science |
658550 | https://en.wikipedia.org/wiki/P%20%28complexity%29 | P (complexity) | In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.
Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.
Definition
A language L is in P if and only if there exists a deterministic Turing machine M, such that
M runs for polynomial time on all inputs
For all x in L, M outputs 1
For all x not in L, M outputs 0
P can also be viewed as a uniform family of Boolean circuits. A language L is in P if and only if there exists a polynomial-time uniform family of Boolean circuits {C_n : n ∈ ℕ}, such that
For all n, C_n takes n bits as input and outputs 1 bit
For all x in L, C_|x|(x) = 1
For all x not in L, C_|x|(x) = 0
The circuit definition can be weakened to use only a logspace uniform family without changing the complexity class.
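As a concrete and deliberately simple illustration of the machine-based definition above: the following sketch decides the language of binary palindromes, halting in time linear in the input length and outputting 1 exactly on strings in the language, so that language is in P. The choice of language and the function name are illustrative assumptions, not from the article.

```python
def decide_palindrome(x: str) -> int:
    """A polynomial-time (in fact linear-time) decider for binary palindromes."""
    assert set(x) <= {"0", "1"}, "inputs are binary strings"
    return 1 if x == x[::-1] else 0   # 1 for x in L, 0 for x not in L

for w in ("", "0", "0110", "010010", "0100110"):
    print(repr(w), decide_palindrome(w))
```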
Notable problems in P
P is known to contain many natural problems, including the decision versions of linear programming, and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P. The related class of function problems is FP.
Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs. The article on P-complete problems lists further relevant problems in P.
Relationships to other classes
A generalization of P is NP, which is the class of decision problems decidable by a non-deterministic Turing machine that runs in polynomial time. Equivalently, it is the class of decision problems where each "yes" instance has a polynomial-size certificate, and certificates can be checked by a polynomial-time deterministic Turing machine. The class of problems for which this is true for the "no" instances is called co-NP. P is trivially a subset of NP and of co-NP; most experts believe it is a proper subset, although this belief (the P ≠ NP hypothesis) remains unproven. Another open problem is whether NP = co-NP; since P = co-P, a negative answer would imply P ≠ NP.
P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using O(log n) space cannot use more than 2^O(log n) = n^O(1) time, because this is the total number of possible configurations; thus, L is a subset of P. Another important problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines. P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. Again, whether P = PSPACE is an open problem. To summarize: L ⊆ AL = P ⊆ NP ⊆ PSPACE ⊆ EXPTIME.
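A worked version of the configuration-counting argument (a standard sketch, not quoted from the article): a deterministic decider that uses s(n) = O(log n) work-tape cells on inputs of length n has at most

```latex
\[
|Q| \cdot (n+2) \cdot s(n) \cdot |\Gamma|^{\,s(n)}
\;=\; 2^{O(\log n)} \;=\; n^{O(1)}
\]
```

distinct configurations (machine state, input-head position, work-head position, and work-tape contents). A halting deterministic computation never repeats a configuration, so it runs for at most polynomially many steps, which is the containment L ⊆ P used above.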
In this chain, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown, only two strict containments are known:
P is strictly contained in EXPTIME. Consequently, all EXPTIME-hard problems lie outside P, and at least one of the containments to the right of P above is strict (in fact, it is widely believed that all three are strict).
L is strictly contained in PSPACE.
The most difficult problems in P are P-complete problems.
Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical problems, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem.
In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language that is P-complete, then L = P.
P is contained in BQP; it is unknown whether this containment is strict.
Properties
Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function that is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, that can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine.
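A short worked bound behind the composition argument (a standard estimate, not from the article): if the outer algorithm performs at most p(n) steps when each subroutine call is counted as one step, then every argument passed to a subroutine has size at most p(n), so with a subroutine running in time q(m) the total running time is at most

```latex
\[
T(n) \;\le\; p(n) \cdot q\bigl(p(n)\bigr),
\]
```

which is again a polynomial in n, since a polynomial composed with a polynomial is a polynomial.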
Languages in P are also closed under reversal, intersection, union, concatenation, Kleene closure, inverse homomorphism, and complementation.
Pure existence proofs of polynomial-time algorithms
Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an O(n3) algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem.
Alternative characterizations
In descriptive complexity, P can be described as the problems expressible in FO(LFP), the first-order logic with a least fixed point operator added to it, on ordered structures. In Immerman's 1999 textbook on descriptive complexity, Immerman ascribes this result to Vardi and to Immerman.
It was published in 2001 that PTIME corresponds to (positive) range concatenation grammars.
P can also be defined as an algorithmic complexity class for problems that are not decision problems (even though, for example, finding the solution to a 2-satisfiability instance in polynomial time automatically gives a polynomial algorithm for the corresponding decision problem). In that case P is not a subset of NP, but P∩DEC is, where DEC is the class of decision problems.
History
Kozen states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time," though Rabin also invented the notion independently and around the same time (Rabin's paper was in a 1967 proceedings of a 1966 conference, while Cobham's was in a 1965 proceedings of a 1964 conference and Edmonds's was published in a journal in 1965, though Rabin makes no mention of either and was apparently unaware of them). Cobham invented the class as a robust way of characterizing efficient algorithms, leading to Cobham's thesis. However, H. C. Pocklington, in a 1910 paper, analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" and contrasted this with one that took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time versus one that ran in (moderately) exponential time.
| Mathematics | Complexity theory | null |
658808 | https://en.wikipedia.org/wiki/Material%20conditional | Material conditional | The material conditional (also known as material implication) is an operation commonly used in logic. When the conditional symbol → is interpreted as material implication, a formula P → Q is true unless P is true and Q is false. Material implication can also be characterized inferentially by modus ponens, modus tollens, conditional proof, and classical reductio ad absurdum.
Material implication is used in all the basic systems of classical logic as well as some nonclassical logics. It is assumed as a model of correct conditional reasoning within mathematics and serves as the basis for commands in many programming languages. However, many logics replace material implication with other operators such as the strict conditional and the variably strict conditional. Due to the paradoxes of material implication and related problems, material implication is not generally considered a viable analysis of conditional sentences in natural language.
Notation
In logic and related fields, the material conditional is customarily notated with the infix operator →. The material conditional is also notated using the infixes ⊃ and ⇒. In the prefixed Polish notation, conditionals are notated as Cpq. In a conditional formula P → Q, the subformula P is referred to as the antecedent and Q is termed the consequent of the conditional. Conditional statements may be nested such that the antecedent or the consequent may themselves be conditional statements, as in the formula (P → Q) → (R → S).
History
In Arithmetices Principia: Nova Methodo Exposita (1889), Peano expressed the proposition "If A, then B" as A Ɔ B with the symbol Ɔ, which is the opposite of C. He also expressed the proposition as Ɔ . Hilbert expressed the proposition "If A, then B" as A → B in 1918. Russell followed Peano in his Principia Mathematica (1910–1913), in which he expressed the proposition "If A, then B" as A ⊃ B. Following Russell, Gentzen expressed the proposition "If A, then B" as A ⊃ B. Heyting expressed the proposition "If A, then B" as A ⊃ B at first but later came to express it as A → B with a right-pointing arrow. Bourbaki expressed the proposition "If A, then B" as A ⇒ B in 1954.
Definitions
Semantics
From a classical semantic perspective, material implication is the binary truth functional operator which returns "true" unless its first argument is true and its second argument is false. This semantics can be shown graphically in a truth table such as the one below. One can also consider the equivalence P → Q ≡ ¬(P ∧ ¬Q) ≡ ¬P ∨ Q.
Truth table
The truth table of P → Q:
P  Q  | P → Q
T  T  |   T
T  F  |   F
F  T  |   T
F  F  |   T
The logical cases where the antecedent is false and the conditional as a whole is true are called "vacuous truths".
Examples are ...
... with a false consequent: "If Marie Curie is a sister of Galileo Galilei, then Galileo Galilei is a brother of Marie Curie",
... with a true consequent: "If Marie Curie is a sister of Galileo Galilei, then Marie Curie has a sibling.".
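As a quick sanity check of this semantics, material implication can be computed as a truth function; the short sketch below (with an ad hoc helper name) prints the table and evaluates the two vacuously true examples.

```python
# Minimal sketch: material implication as a truth function ("not p or q").
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Full truth table
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))

# Both example sentences have a false antecedent, so they come out (vacuously) true,
# whether the consequent is false or true.
curie_is_sister_of_galileo = False
print(implies(curie_is_sister_of_galileo, False))  # True
print(implies(curie_is_sister_of_galileo, True))   # True
```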
Deductive definition
Material implication can also be characterized deductively in terms of the following rules of inference.
Modus ponens
Conditional proof
Classical contraposition
Classical reductio ad absurdum
Unlike the semantic definition, this approach to logical connectives permits the examination of structurally identical propositional forms in various logical systems, where somewhat different properties may be demonstrated. For example, in intuitionistic logic, which rejects proofs by contraposition as valid rules of inference, is not a propositional theorem, but the material conditional is used to define negation.
Formal properties
When disjunction, conjunction and negation are classical, material implication validates the following equivalences:
Contraposition: P → Q ≡ ¬Q → ¬P
Import-export: P → (Q → R) ≡ (P ∧ Q) → R
Negated conditionals: ¬(P → Q) ≡ P ∧ ¬Q
Or-and-if: P → Q ≡ ¬P ∨ Q
Commutativity of antecedents: P → (Q → R) ≡ Q → (P → R)
Left distributivity: R → (P → Q) ≡ (R → P) → (R → Q)
Similarly, on classical interpretations of the other connectives, material implication validates the following entailments:
Antecedent strengthening: P → Q ⊨ (P ∧ R) → Q
Vacuous conditional: ¬P ⊨ P → Q
Transitivity: (P → Q) ∧ (Q → R) ⊨ P → R
Simplification of disjunctive antecedents: (P ∨ Q) → R ⊨ (P → R) ∧ (Q → R)
Tautologies involving material implication include:
Reflexivity: ⊨ P → P
Totality: ⊨ (P → Q) ∨ (Q → P)
Conditional excluded middle: ⊨ (P → Q) ∨ (P → ¬Q)
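These schemas can be checked mechanically by brute force over all classical truth assignments; the sketch below (with ad hoc helper names) verifies a few of them.

```python
# Minimal sketch: brute-force check of some of the schemas above.
from itertools import product

def implies(p, q):
    return (not p) or q

for P, Q, R in product((True, False), repeat=3):
    # Or-and-if: P -> Q is equivalent to (not P) or Q
    assert implies(P, Q) == ((not P) or Q)
    # Contraposition: P -> Q is equivalent to (not Q) -> (not P)
    assert implies(P, Q) == implies(not Q, not P)
    # Transitivity: (P -> Q) and (Q -> R) entails P -> R
    assert not (implies(P, Q) and implies(Q, R)) or implies(P, R)
    # Conditional excluded middle: (P -> Q) or (P -> not Q) is a tautology
    assert implies(P, Q) or implies(P, not Q)

print("all checks passed")
```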
Discrepancies with natural language
Material implication does not closely match the usage of conditional sentences in natural language. For example, even though material conditionals with false antecedents are vacuously true, the natural language statement "If 8 is odd, then 3 is prime" is typically judged false. Similarly, any material conditional with a true consequent is itself true, but speakers typically reject sentences such as "If I have a penny in my pocket, then Paris is in France". These classic problems have been called the paradoxes of material implication. In addition to the paradoxes, a variety of other arguments have been given against a material implication analysis. For instance, counterfactual conditionals would all be vacuously true on such an account.
In the mid-20th century, a number of researchers including H. P. Grice and Frank Jackson proposed that pragmatic principles could explain the discrepancies between natural language conditionals and the material conditional. On their accounts, conditionals denote material implication but end up conveying additional information when they interact with conversational norms such as Grice's maxims. Recent work in formal semantics and philosophy of language has generally eschewed material implication as an analysis for natural-language conditionals. In particular, such work has often rejected the assumption that natural-language conditionals are truth functional in the sense that the truth value of "If P, then Q" is determined solely by the truth values of P and Q. Thus semantic analyses of conditionals typically propose alternative interpretations built on foundations such as modal logic, relevance logic, probability theory, and causal models.
Similar discrepancies have been observed by psychologists studying conditional reasoning, for instance, by the notorious Wason selection task study, where less than 10% of participants reasoned according to the material conditional. Some researchers have interpreted this result as a failure of the participants to conform to normative laws of reasoning, while others interpret the participants as reasoning normatively according to nonclassical laws.
| Mathematics | Mathematical logic | null |
658871 | https://en.wikipedia.org/wiki/Turboshaft | Turboshaft | A turboshaft engine is a form of gas turbine that is optimized to produce shaft horsepower rather than jet thrust. In concept, turboshaft engines are very similar to turbojets, with additional turbine expansion to extract heat energy from the exhaust and convert it into output shaft power. They are even more similar to turboprops, with only minor differences, and a single engine is often sold in both forms.
Turboshaft engines are commonly used in applications that require a sustained high power output, high reliability, small size, and light weight. These include helicopters, auxiliary power units, boats and ships, tanks, hovercraft, and stationary equipment.
Overview
A turboshaft engine may be made up of two major assemblies: the 'gas generator' and the 'power section'. The gas generator consists of the compressor, combustion chambers with ignitors and fuel nozzles, and one or more stages of turbine. The power section consists of additional stages of turbines, a gear reduction system, and the shaft output. The gas generator creates the hot expanding gases to drive the power section. Depending on the design, the engine accessories may be driven either by the gas generator or by the power section.
In most designs, the gas generator and power section are mechanically separate so they can each rotate at different speeds appropriate for the conditions, referred to as a 'free power turbine'. A free power turbine can be an extremely useful design feature for vehicles, as it allows the design to forgo the weight and cost of complex multiple-ratio transmissions and clutches.
An unusual example of the turboshaft principle is the Pratt & Whitney F135-PW-600 turbofan engine for the STOVL Lockheed F-35B Lightning II – in conventional mode it operates as a turbofan, but when powering the Rolls-Royce LiftSystem, it switches partially to turboshaft mode to send 29,000 horsepower forward through a shaft and partially to turbofan mode to continue to send thrust to the main engine's fan and rear nozzle.
Large helicopters use two or three turboshaft engines. The Mil Mi-26 uses two Lotarev D-136 at 11,400 hp each, while the Sikorsky CH-53E Super Stallion uses three General Electric T64 at 4,380 hp each.
History
The first gas turbine engine considered for an armoured fighting vehicle, the GT 101 which was based on the BMW 003 turbojet, was tested in a Panther tank in mid-1944.
The first turboshaft engine for rotorcraft was built by the French engine firm Turbomeca, led by its founder Joseph Szydlowski. In 1948, they built the first French-designed turbine engine, the 100-shp 782. Originally conceived as an auxiliary power unit, it was soon adapted to aircraft propulsion, and found a niche as a powerplant for turboshaft-driven helicopters in the 1950s. In 1950, Turbomeca used its work from the 782 to develop the larger 280-shp Artouste, which was widely used on the Aérospatiale Alouette II and other helicopters. This followed the experimental installation of a Boeing T50 turboshaft in an example of the Kaman K-225 synchropter on December 11, 1951, which became the world's first turboshaft-powered helicopter of any type to fly.
The T-80 tank, which entered service with the Soviet Army in 1976, was the first tank to use a gas turbine as its main engine. Since 1980 the US Army has operated the M1 Abrams tank, which also has a gas turbine engine. (Most tanks use reciprocating piston diesel engines.) The Swedish Stridsvagn 103 was the first tank to utilize a gas turbine as a secondary, high-horsepower "sprint" engine to augment its primary piston engine's performance. The turboshaft engines used in all these tanks have considerably fewer parts than the piston engines they replace or supplement, mechanically are very reliable, produce reduced exterior noise, and run on virtually any fuel: petrol (gasoline), diesel fuel, and aviation fuels. However, turboshaft engines have significantly higher fuel consumption than the diesel engines that are used in the majority of modern main battle tanks.
| Technology | Aircraft components | null |
659068 | https://en.wikipedia.org/wiki/Mass%20number | Mass number | The mass number (symbol A, from the German word: Atomgewicht, "atomic weight"), also called atomic mass number or nucleon number, is the total number of protons and neutrons (together known as nucleons) in an atomic nucleus. It is approximately equal to the atomic (also known as isotopic) mass of the atom expressed in atomic mass units. Since protons and neutrons are both baryons, the mass number A is identical with the baryon number B of the nucleus (and also of the whole atom or ion). The mass number is different for each isotope of a given chemical element, and the difference between the mass number and the atomic number Z gives the number of neutrons (N) in the nucleus: N = A − Z.
The mass number is written either after the element name or as a superscript to the left of an element's symbol. For example, the most common isotope of carbon is carbon-12, or ¹²C, which has 6 protons and 6 neutrons. The full isotope symbol would also have the atomic number (Z) as a subscript to the left of the element symbol directly below the mass number: ¹²₆C.
Mass number changes in radioactive decay
Different types of radioactive decay are characterized by their changes in mass number as well as atomic number, according to the radioactive displacement law of Fajans and Soddy.
For example, uranium-238 usually decays by alpha decay, where the nucleus loses two neutrons and two protons in the form of an alpha particle. Thus the atomic number and the number of neutrons each decrease by 2 (Z: 92 → 90, N: 146 → 144), so that the mass number decreases by 4 (A = 238 → 234); the result is an atom of thorium-234 and an alpha particle (⁴₂He):
²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He
On the other hand, carbon-14 decays by beta decay, whereby one neutron is transmuted into a proton with the emission of an electron and an antineutrino. Thus the atomic number increases by 1 (Z: 6 → 7) and the mass number remains the same (A = 14), while the number of neutrons decreases by 1 (N: 8 → 7). The resulting atom is nitrogen-14, with seven protons and seven neutrons:
¹⁴₆C → ¹⁴₇N + e⁻ + ν̄ₑ
Beta decay is possible because different isobars have mass differences on the order of a few electron masses. If possible, a nuclide will undergo beta decay to an adjacent isobar with lower mass. In the absence of other decay modes, a cascade of beta decays terminates at the isobar with the lowest atomic mass.
Another type of radioactive decay without change in mass number is emission of a gamma ray from a nuclear isomer or metastable excited state of an atomic nucleus. Since all the protons and neutrons remain in the nucleus unchanged in this process, the mass number is also unchanged.
Mass number and isotopic mass
The mass number gives an estimate of the isotopic mass measured in atomic mass units (u). For 12C, the isotopic mass is exactly 12, since the atomic mass unit is defined as 1/12 of the mass of 12C. For other isotopes, the isotopic mass is usually within 0.1 u of the mass number. For example, 35Cl (17 protons and 18 neutrons) has a mass number of 35 and an isotopic mass of 34.96885. The difference of the actual isotopic mass minus the mass number of an atom is known as the mass excess, which for 35Cl is –0.03115. Mass excess should not be confused with mass defect which is the difference between the mass of an atom and its constituent particles (namely protons, neutrons and electrons).
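As a small numeric illustration of the definition above, the mass excess of 35Cl follows directly from its isotopic mass and mass number (a minimal sketch using the values quoted in the text).

```python
# Mass excess = isotopic mass (in u) minus mass number, illustrated for chlorine-35.
isotopic_mass_u = 34.96885   # isotopic mass of 35Cl quoted above
mass_number = 35             # A = number of protons + neutrons

mass_excess = isotopic_mass_u - mass_number
print(f"mass excess of 35Cl: {mass_excess:.5f} u")   # about -0.03115 u
```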
There are two reasons for mass excess:
The neutron is slightly heavier than the proton. This increases the mass of nuclei with more neutrons than protons relative to the atomic mass unit scale based on 12C with equal numbers of protons and neutrons.
Nuclear binding energy varies between nuclei. A nucleus with greater binding energy has a lower total energy, and therefore a lower mass according to Einstein's mass–energy equivalence relation E = mc2. For 35Cl, the isotopic mass is less than 35, so this must be the dominant factor.
Relative atomic mass of an element
The mass number should also not be confused with the standard atomic weight (also called atomic weight) of an element, which is the ratio of the average atomic mass of the different isotopes of that element (weighted by abundance) to the atomic mass constant. The atomic weight is a mass ratio, while the mass number is a counted number (and so an integer).
This weighted average can be quite different from the near-integer values for individual isotopic masses. For instance, there are two main isotopes of chlorine: chlorine-35 and chlorine-37. In any given sample of chlorine that has not been subjected to mass separation there will be roughly 75% of chlorine atoms which are chlorine-35 and only 25% of chlorine atoms which are chlorine-37. This gives chlorine a relative atomic mass of 35.5 (actually 35.4527 g/mol).
Moreover, the weighted average mass can be near-integer, but at the same time not corresponding to the mass of any natural isotope. For example, bromine has only two stable isotopes, 79Br and 81Br, naturally present in approximately equal fractions, which leads to the standard atomic mass of bromine close to 80 (79.904 g/mol), even though the isotope 80Br with such mass is unstable.
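The abundance-weighted averages described above can be reproduced with a short calculation; the isotopic masses and abundances below are rounded illustrative values, not precise reference data.

```python
# Abundance-weighted average isotopic mass (rounded, illustrative values).
def weighted_average(isotopes):
    """isotopes: list of (isotopic_mass, fractional_abundance) pairs."""
    return sum(m * f for m, f in isotopes)

chlorine = [(34.969, 0.76), (36.966, 0.24)]   # ~76% 35Cl, ~24% 37Cl
bromine  = [(78.918, 0.51), (80.916, 0.49)]   # roughly equal 79Br and 81Br

print(round(weighted_average(chlorine), 2))   # close to 35.45
print(round(weighted_average(bromine), 2))    # close to 79.9, though no isotope has mass 80
```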
| Physical sciences | Periodic table | Chemistry |
659325 | https://en.wikipedia.org/wiki/Anesthetic | Anesthetic | An anesthetic (American English) or anaesthetic (British English; see spelling differences) is a drug used to induce anesthesia — in other words, to result in a temporary loss of sensation or awareness. They may be divided into two broad classes: general anesthetics, which result in a reversible loss of consciousness, and local anesthetics, which cause a reversible loss of sensation for a limited region of the body without necessarily affecting consciousness.
A wide variety of drugs are used in modern anesthetic practice. Many are rarely used outside anesthesiology, but others are used commonly in various fields of healthcare. Combinations of anesthetics are sometimes used for their synergistic and additive therapeutic effects. Adverse effects, however, may also be increased. Anesthetics are distinct from analgesics, which block only sensation of painful stimuli. Analgesics are typically used in conjunction with anesthetics to control pre-, intra-, and postoperative pain.
Local anesthetics
Ester-based
Benzocaine
Cocaine (historical)
Procaine
Tetracaine (sometimes called Amethocaine)
Amide-based
Bupivacaine
Cinchocaine (INN/BAN) / Dibucaine (USAN)
Etidocaine
Levobupivacaine
Lidocaine
Mepivacaine
Prilocaine
Ropivacaine
Local anesthetic agents prevent the transmission of nerve impulses without causing unconsciousness. They act by reversibly binding to fast sodium channels from within nerve fibers, thereby preventing sodium from entering the fibers, stabilising the cell membrane and preventing action potential propagation. Each of the local anesthetics has the suffix "–caine" in its name.
Local anesthetics can be either ester- or amide-based. Ester local anesthetics are generally unstable in solution and fast-acting, are rapidly metabolised by cholinesterases in the blood plasma and liver, and more commonly induce allergic reactions. Amide local anesthetics are generally heat-stable, with a long shelf life (around two years). Amides have a slower onset and longer half-life than ester anesthetics, and are usually racemic mixtures, with the exception of levobupivacaine (which is S(-) -bupivacaine) and ropivacaine (S(-)-ropivacaine). Although general rules exist for onset and duration of anesthesia between ester- or amide-based local anesthetics, these properties are ultimately dependent on myriad factors including the lipid solubility of the agent, the concentration of the solution, and the pKa. Amides are generally used within regional and epidural or spinal techniques, due to their longer duration of action, which provides adequate analgesia for surgery, labor, and symptomatic relief. Some esters, such as benzocaine and tetracaine, are found in topical formulations to be absorbed through the skin.
Only preservative-free local anesthetic agents may be injected intrathecally.
Pethidine also has local anesthetic properties, in addition to its opioid effects.
General anesthetics
Inhaled agents
Desflurane (common)
Enflurane (largely discontinued)
Halothane (inexpensive, discontinued)
Isoflurane (common)
Methoxyflurane
Nitrous oxide
Sevoflurane (common)
Xenon (rarely used)
Volatile agents are typically organic liquids that evaporate readily. They are given by inhalation for induction or maintenance of general anesthesia. Nitrous oxide and xenon are gases, so they are not considered volatile agents. The ideal volatile anesthetic should be non-flammable, non-explosive, and lipid-soluble. It should possess low blood gas solubility, have no end-organ (heart, liver, kidney) toxicity or side-effects, should not be metabolized, and should not irritate the respiratory pathways.
No anaesthetic agent currently in use meets all these requirements, nor can any anaesthetic agent be considered completely safe. There are inherent risks and drug interactions that are specific to each and every patient. The agents in widespread current use are isoflurane, desflurane, sevoflurane, and nitrous oxide. Nitrous oxide is a common adjuvant gas, making it one of the most long-lived drugs still in current use. Because of its low potency, it cannot produce anesthesia on its own but is frequently combined with other agents. Halothane, an agent introduced in the 1950s, has been almost completely replaced in modern anesthesia practice by newer agents because of its shortcomings. Partly because of its side effects, enflurane never gained widespread popularity.
In theory, any inhaled anesthetic agent can be used for induction of general anesthesia. However, most of the halogenated anesthetics are irritating to the airway, perhaps leading to coughing, laryngospasm and overall difficult inductions. If induction needs to be conducted with an inhaled anesthetic agent, sevoflurane is often used due to its relatively low pungency, rapid increase in alveolar concentration, and lower blood solubility than most other agents. These properties allow for a less irritating and quicker induction as well as a rapid emergence from anesthesia compared to other inhaled agents. All of the volatile agents can be used alone or in combination with other medications to maintain anesthesia (nitrous oxide is not potent enough to be used as a sole agent).
Volatile agents are frequently compared in terms of potency, which is inversely proportional to the minimum alveolar concentration. Potency is directly related to lipid solubility. This is known as the Meyer-Overton hypothesis. However, certain pharmacokinetic properties of volatile agents have become another point of comparison. Most important of those properties is known as the blood/gas partition coefficient. This concept refers to the relative solubility of a given agent in blood. Those agents with a lower blood solubility (i.e., a lower blood–gas partition coefficient; e.g., desflurane) give the anesthesia provider greater rapidity in titrating the depth of anesthesia, and permit a more rapid emergence from the anesthetic state upon discontinuing their administration. In fact, newer volatile agents (e.g., sevoflurane, desflurane) have been popular not due to their potency (minimum alveolar concentration), but due to their versatility for a faster emergence from anesthesia, thanks to their lower blood–gas partition coefficient.
Intravenous agents (non-opioid)
While there are many drugs that can be used intravenously to produce anesthesia or sedation, the most common are:
Barbiturates
Amobarbital (trade name: Amytal)
Methohexital (trade name: Brevital)
Thiamylal (trade name: Surital)
Thiopental (trade name: Pentothal, referred to as thiopentone in the UK)
Benzodiazepines
Diazepam
Lorazepam
Midazolam
Etomidate
Ketamine
Propofol
Among the barbiturates mentioned above, thiopental and methohexital are ultra-short-acting and are used to induce and maintain anesthesia. However, though they produce unconsciousness, they provide no analgesia (pain relief) and must be used with other agents. Benzodiazepines can be used for sedation before or after surgery and can be used to induce and maintain general anesthesia. When benzodiazepines are used to induce general anesthesia, midazolam is preferred. Benzodiazepines are also used for sedation during procedures that do not require general anesthesia. Like barbiturates, benzodiazepines have no pain-relieving properties.
Propofol is one of the most commonly used intravenous drugs employed to induce and maintain general anesthesia. It can also be used for sedation during procedures or in the ICU. Like the other agents mentioned above, it renders patients unconscious without producing pain relief. Compared to other IV agents, etomidate causes minimal depression of the cardiopulmonary system. Additionally, etomidate results in a reduction in intracranial pressure and cerebral blood flow. Because of these favorable physiological effects, etomidate was a favored agent in the ICU. However, etomidate has since been shown to produce adrenocortical suppression, resulting in decreased use to avoid an increased mortality rate in severely ill patients. Ketamine is infrequently used in anesthesia because of the unpleasant experiences that sometimes occur on emergence from anesthesia, which include "vivid dreaming, extracorporeal experiences, and illusions." When it is used, it is often paired with a benzodiazepine such as midazolam for amnesia and sedation. However, like etomidate it is frequently used in emergency settings and with sick patients because it produces fewer adverse physiological effects. Unlike the intravenous anesthetic drugs previously mentioned, ketamine produces profound pain relief, even in doses lower than those that induce general anesthesia. Also unlike the other anesthetic agents in this section, patients who receive ketamine alone appear to be in a cataleptic state, unlike other states of anesthesia that resemble normal sleep. Ketamine-anesthetized patients have profound analgesia but keep their eyes open and maintain many reflexes.
Intravenous opioid analgesic agents
While opioids can produce unconsciousness, they do so unreliably and with significant side effects. So, while they are rarely used to induce anesthesia, they are frequently used along with other agents such as intravenous non-opioid anesthetics or inhalational anesthetics. Furthermore, they are used to relieve pain of patients before, during, or after surgery. The following opioids have short onset and duration of action and are frequently used during general anesthesia:
Alfentanil
Fentanyl
Remifentanil
Sufentanil, which is not available in Australia.
The following agents have longer onset and duration of action and are frequently used for post-operative pain relief:
Buprenorphine
Butorphanol
Diamorphine, also known as heroin, not available for use as an analgesic in any country but the UK.
Hydromorphone
Levorphanol
Pethidine, also called meperidine in North America.
Methadone
Morphine
Codeine
Nalbuphine
Oxycodone, not available intravenously in U.S.
Oxymorphone
Pentazocine
Muscle relaxants
Muscle relaxants do not render patients unconscious or relieve pain. Instead, they are sometimes used after a patient is rendered unconscious (induction of anesthesia) to facilitate intubation or surgery by paralyzing skeletal muscle. These agents fall into two categories: depolarizing agents, which depolarize the motor end plate to prevent further stimulation, and non-depolarizing agents, which prevent acetylcholine receptor activation through competitive inhibition.
Depolarizing muscle relaxants
Succinylcholine (also known as suxamethonium in the UK, New Zealand, Australia and other countries, "Celokurin" or "celo" for short in Europe)
Decamethonium
Non-depolarizing muscle relaxants
Short acting
Mivacurium
Rapacuronium
Intermediate acting
Atracurium
Cisatracurium
Rocuronium
Vecuronium
Long acting
Alcuronium
Doxacurium
Gallamine
Metocurine
Pancuronium
Pipecuronium
Tubocurarine
A potential complication where neuromuscular blockade is employed is 'anesthesia awareness'. In this situation, paralyzed patients may awaken during their anesthesia, due to an inappropriate decrease in the level of drugs providing sedation or pain relief. If this is missed by the anesthesia provider, the patient may be aware of their surroundings, but be incapable of moving or communicating that fact. Neurological monitors are increasingly available that may help decrease the incidence of awareness. Most of these monitors use proprietary algorithms monitoring brain activity via evoked potentials. Additionally, anesthesia providers often have steps they follow to help prevent awareness, such as ensuring all equipment is working properly, monitoring that drugs are being delivered during surgery, and asking a series of questions (the Brice questions) to help detect awareness after surgery. If there is any suspicion of patient awareness, close follow-up and mental health professionals can help manage or avoid any traumatic stress associated with the awareness. Certain procedures, such as endoscopies or colonoscopies, are managed with a technique called conscious sedation or monitored anesthesia care. These cases are performed with regional anesthetics and a "twilight sleep" achieved through sedation with propofol and analgesics, and patients may remember perioperative events. When this technique is used, patients should be advised that this management is distinct from general anesthesia, to help combat any belief or fear that they were "awake" during anesthesia.
Intravenous reversal agents
Flumazenil, reverses the effects of benzodiazepines
Naloxone, reverses the effects of opioids
Neostigmine, helps to reverse the effects of non-depolarizing muscle relaxants
Sugammadex, helps to reverse the effects of non-depolarizing muscle relaxants
| Biology and health sciences | Anesthetics | Health |
659899 | https://en.wikipedia.org/wiki/Gravimetric%20analysis | Gravimetric analysis | Gravimetric analysis describes a set of methods used in analytical chemistry for the quantitative determination of an analyte (the ion being analyzed) based on its mass. The principle of this type of analysis is that once an ion's mass has been determined as a unique compound, that known measurement can then be used to determine the same analyte's mass in a mixture, as long as the relative quantities of the other constituents are known.
The four main types of this method of analysis are precipitation, volatilization, electro-analytical and miscellaneous physical method. The methods involve changing the phase of the analyte to separate it in its pure form from the original mixture and are quantitative measurements.
Precipitation method
The precipitation method is the one used for the determination of the amount of calcium in water. Using this method, an excess of oxalate, added as oxalic acid (H2C2O4) or as its ammonium salt (ammonium oxalate), is combined with a measured, known volume of water, and the calcium precipitates as calcium oxalate. The proper reagent, when added to aqueous solution, will produce highly insoluble precipitates from the positive and negative ions that would otherwise be soluble with their counterparts (equation 1).
The reaction is:
Formation of calcium oxalate:
Ca2+(aq) + C2O42- → CaC2O4
The precipitate is collected, dried and ignited to high (red) heat which converts it entirely to calcium oxide.
On ignition, the calcium oxalate decomposes to pure calcium oxide:
CaC2O4 → CaO(s) + CO(g) + CO2(g)
The pure precipitate is cooled, then measured by weighing, and the difference in weights before and after ignition gives the mass of the weighed form, in this case calcium oxide. That number can then be used to calculate the amount, or the percent concentration, of calcium in the original mix.
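For a concrete sense of the final step, here is a short back-calculation; the molar masses are standard values, while the weighed mass of CaO and the sample volume are invented purely for illustration.

```python
# Hypothetical numbers: back-calculate calcium from a weighed mass of CaO.
M_CA = 40.078    # molar mass of Ca, g/mol
M_CAO = 56.077   # molar mass of CaO, g/mol

mass_cao_g = 0.0140        # weighed CaO (invented value)
sample_volume_l = 0.250    # volume of water analyzed (invented value)

mass_ca_g = mass_cao_g * (M_CA / M_CAO)        # gravimetric factor Ca/CaO
conc_mg_per_l = mass_ca_g * 1000 / sample_volume_l
print(f"Ca in sample: {mass_ca_g * 1000:.2f} mg  ->  {conc_mg_per_l:.1f} mg/L")
```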
Volatilization methods
Volatilization methods can be either direct or indirect. Water eliminated in a quantitative manner from many inorganic substances by ignition is an example of a direct determination. It is collected on a solid desiccant and its mass determined by the gain in mass of the desiccant.
Another direct volatilization method involves carbonates which generally decompose to release carbon dioxide when acids are used. Because carbon dioxide is easily evolved when heat is applied, its mass is directly established by the measured increase in the mass of the absorbent solid used.
Determination of the amount of water by measuring the loss in mass of the sample during heating is an example of an indirect method. It is well known that changes in mass occur due to decomposition of many substances when heat is applied, regardless of the presence or absence of water. Because one must make the assumption that water was the only component lost, this method is less satisfactory than direct methods.
This often faulty and misleading assumption has proven to be wrong on more than a few occasions. There are many processes other than water loss that can lead to loss of mass with the addition of heat, as well as a number of other factors that may contribute to it. The widened margin of error created by this all-too-often false assumption is not one to be lightly disregarded, as the consequences could be far-reaching.
Nevertheless, the indirect method, although less reliable than direct, is still widely used in commerce. For example, it's used to measure the moisture content of cereals, where a number of imprecise and inaccurate instruments are available for this purpose.
Types of volatilization methods
In volatilization methods, removal of the analyte involves separation by heating or chemically decomposing a volatile sample at a suitable temperature. In other words, thermal or chemical energy is used to liberate a volatile species. For example, the water content of a compound can be determined by vaporizing the water using thermal energy (heat). Heat can also be used, if oxygen is present, for combustion to isolate the suspect species and obtain the desired results.
The two most common gravimetric methods using volatilization are those for water and carbon dioxide. An example of this method is the isolation of sodium hydrogen carbonate (sodium bicarbonate, the main ingredient in most antacid tablets) from a mixture of carbonate and bicarbonate. The total amount of this analyte, in whatever form, is obtained by addition of an excess of dilute sulfuric acid to the analyte in solution.
In this reaction, nitrogen gas is introduced through a tube into the flask which contains the solution. As it passes through, it gently bubbles. The gas then exits, first passing a drying agent (here CaSO4, the common desiccant Drierite). It then passes a mixture of the drying agent and sodium hydroxide which lies on asbestos or Ascarite II, a non-fibrous silicate containing sodium hydroxide. The mass of the carbon dioxide is obtained by measuring the increase in mass of this absorbent. This is performed by measuring the difference in weight of the tube containing the Ascarite before and after the procedure.
The carbon dioxide is liberated from the solution by the acid and swept out of the flask by the gas stream (reaction 3). The drying agent (CaSO4) absorbs any aerosolized water and/or water vapor. The mix of the drying agent and NaOH absorbs the CO2 and any water that may have been produced as a result of the absorption of the CO2 by the NaOH (reaction 4).
The reactions are:
Reaction 3. Liberation of carbon dioxide (and water) from the bicarbonate
NaHCO3(aq) + H2SO4(aq) → CO2(g) + H2O(l) + NaHSO4(aq).
Reaction 4. Absorption of CO2 and residual water
CO2(g) + 2 NaOH(s) → Na2CO3(s) + H2O(l).
Procedure
The sample is dissolved, if it is not already in solution.
The solution may be treated to adjust the pH (so that the proper precipitate is formed, or to suppress the formation of other precipitates). If it is known that species are present which interfere (by also forming precipitates under the same conditions as the analyte), the sample might require treatment with a different reagent to remove these interferents.
The precipitating reagent is added at a concentration that favors the formation of a "good" precipitate (see below). This may require low concentration, extensive heating (often described as "digestion"), or careful control of the pH. Digestion can help reduce the amount of coprecipitation.
After the precipitate has formed and been allowed to "digest", the solution is carefully filtered. The filter is used to collect the precipitate; smaller particles are more difficult to filter.
Depending on the procedure followed, the filter might be a piece of ashless filter paper in a fluted funnel, or a filter crucible. Filter paper is convenient because it does not typically require cleaning before use; however, filter paper can be chemically attacked by some solutions (such as concentrated acid or base), and may tear during the filtration of large volumes of solution.
The alternative is a crucible whose bottom is made of some porous material, such as sintered glass, porcelain or sometimes metal. These are chemically inert and mechanically stable, even at elevated temperatures. However, they must be carefully cleaned to minimize contamination or carryover (cross-contamination). Crucibles are often used with a mat of glass or asbestos fibers to trap small particles.
After the solution has been filtered, it should be tested to make sure that the analyte has been completely precipitated. This is easily done by adding a few drops of the precipitating reagent; if a precipitate is observed, the precipitation is incomplete.
After filtration, the precipitate – including the filter paper or crucible – is heated, or charred. This accomplishes the following:
The remaining moisture is removed (drying).
Secondly, the precipitate is converted to a more chemically stable form. For instance, calcium ion might be precipitated using oxalate ion, to produce calcium oxalate (CaC2O4); it might then be heated to convert it into the oxide (CaO). It is vital that the empirical formula of the weighed precipitate be known, and that the precipitate be pure; if two forms are present, the results will be inaccurate.
The precipitate cannot be weighed with the necessary accuracy in place on the filter paper; nor can the precipitate be completely removed from the filter paper to weigh it. The precipitate can be carefully heated in a crucible until the filter paper has burned away; this leaves only the precipitate. (As the name suggests, "ashless" paper is used so that the precipitate is not contaminated with ash.)
After the precipitate is allowed to cool (preferably in a desiccator to keep it from absorbing moisture), it is weighed (in the crucible). To calculate the final mass of the analyte, the starting mass of the empty crucible is subtracted from the final mass of the crucible containing the sample. Since the composition of the precipitate is known, it is simple to calculate the mass of analyte in the original sample.
Example
A chunk of ore is to be analyzed for sulfur content. It is treated with concentrated nitric acid and potassium chlorate to convert all of the sulfur to sulfate (SO42−). The nitrate and chlorate are removed by treating the solution with concentrated HCl. The sulfate is precipitated with barium (Ba2+) and weighed as BaSO4.
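The sulfur content then follows from the weighed BaSO4 by a gravimetric factor; the sample and precipitate masses below are invented purely for illustration.

```python
# Hypothetical numbers: percent sulfur in an ore from the weighed BaSO4 precipitate.
M_S = 32.06       # molar mass of S, g/mol
M_BASO4 = 233.39  # molar mass of BaSO4, g/mol

ore_mass_g = 0.5000     # mass of ore taken (invented value)
baso4_mass_g = 0.2800   # mass of dried BaSO4 collected (invented value)

sulfur_mass_g = baso4_mass_g * (M_S / M_BASO4)   # gravimetric factor S/BaSO4
percent_s = 100 * sulfur_mass_g / ore_mass_g
print(f"sulfur content: {percent_s:.2f} %")
```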
Advantages
Gravimetric analysis, if methods are followed carefully, provides for exceedingly precise analysis. In fact, gravimetric analysis was used to determine the atomic masses of many elements in the periodic table to six-figure accuracy. Gravimetry provides very little room for instrumental error and does not require a series of standards for calculation of an unknown. Also, methods often do not require expensive equipment. Gravimetric analysis, due to its high degree of accuracy when performed correctly, can also be used to calibrate other instruments in lieu of reference standards. Gravimetric analysis is also used in undergraduate chemistry and biochemistry teaching, where it gives students experience of a graduate-level laboratory, and it is a highly effective teaching tool for those who want to attend medical school or a research graduate school.
Disadvantages
Gravimetric analysis usually only provides for the analysis of a single element, or a limited group of elements, at a time. Comparing modern dynamic flash combustion coupled with gas chromatography with traditional combustion analysis will show that the former is both faster and allows for simultaneous determination of multiple elements while traditional determination allowed only for the determination of carbon and hydrogen. Methods are often convoluted and a slight mis-step in a procedure can often mean disaster for the analysis (colloid formation in precipitation gravimetry, for example). Compare this with hardy methods such as spectrophotometry and one will find that analysis by these methods is much more efficient.
Steps in a gravimetric analysis
After appropriate dissolution of the sample the following steps should be followed for successful gravimetric procedure:
1. Preparation of the Solution: This may involve several steps including adjustment of the pH of the solution in order for the precipitate to occur quantitatively and get a precipitate of desired properties, removing interferences, adjusting the volume of the sample to suit the amount of precipitating agent to be added.
2. Precipitation: This requires addition of a precipitating agent solution to the sample solution. Upon addition of the first drops of the precipitating agent, supersaturation occurs, then nucleation starts to occur, where every few molecules of precipitate aggregate together forming a nucleus. At this point, addition of extra precipitating agent will either form new nuclei or will build up on existing nuclei to give a precipitate. This can be predicted by the von Weimarn ratio; according to this relation, the particle size is inversely proportional to a quantity called the relative supersaturation, where
Relative supersaturation = (Q – S)/S
The Q is the concentration of reactants before precipitation, S is the solubility of precipitate in the medium from which it is being precipitated. Therefore, to get particle growth instead of further nucleation we must make the relative supersaturation ratio as small as possible. The optimum conditions for precipitation which make the supersaturation low are:
a. Precipitation using dilute solutions to decrease Q
b. Slow addition of precipitating agent to keep Q as low as possible
c. Stirring the solution during addition of precipitating agent to avoid concentration sites and keep Q low
d. Increase solubility by precipitation from hot solution
e. Adjust the pH to increase S, but not by too much, as we do not want to lose precipitate by dissolution
f. Usually add a little excess of the precipitating agent for quantitative precipitation and check for completeness of the precipitation
3. Digestion of the precipitate: The precipitate is left hot (below boiling) for 30 min to one hour for the particles to be digested. Digestion involves dissolution of small particles and reprecipitation on larger ones, resulting in particle growth and better precipitate characteristics. This process is called Ostwald ripening. An important advantage of digestion is observed for colloidal precipitates where large amounts of adsorbed ions cover the huge area of the precipitate. Digestion forces the small colloidal particles to agglomerate, which decreases their surface area and thus adsorption. Adsorption is a major problem in gravimetry in the case of colloidal precipitates, since a precipitate tends to adsorb its own ions present in excess, therefore forming what is called a primary ion layer, which attracts ions from solution forming a secondary or counter-ion layer. Individual particles repel each other, keeping the colloidal properties of the precipitate. Particle coagulation can be forced by either digestion or the addition of a strong electrolyte solution containing a high concentration of diverse ions, in order to shield the charges on colloidal particles and force agglomeration. Usually, coagulated particles return to the colloidal state if washed with water, a process called peptization.
4. Washing and Filtering the Precipitate: It is crucial to wash the precipitate thoroughly to remove all adsorbed species that would add to the weight of the precipitate. One should be careful not to use too much water, since part of the precipitate may be lost. Also, in the case of colloidal precipitates, water should not be used as a washing solution since peptization would occur. In such situations dilute nitric acid, ammonium nitrate, or dilute acetic acid may be used. Usually, it is a good practice to check for the presence of precipitating agent in the filtrate of the final washing solution. The presence of precipitating agent means that extra washing is required. Filtration should be done with an appropriately sized Gooch crucible or ashless (ignition) filter paper.
5. Drying and Ignition: The purpose of drying (heating at about 120–150 °C in an oven) or ignition in a muffle furnace at temperatures ranging from 600 to 1200 °C is to get a material with exactly known chemical structure so that the amount of analyte can be accurately determined.
6. Precipitation from Homogeneous Solution: To make Q minimum we can, in some situations, generate the precipitating agent in the precipitation medium rather than adding it. For example, to precipitate iron as the hydroxide, we dissolve urea in the sample. Heating of the solution generates hydroxide ions from the hydrolysis of urea. Hydroxide ions are generated at all points in solution and thus there are no sites of concentration. We can also adjust the rate of urea hydrolysis and thus control the hydroxide generation rate. This type of procedure can be very advantageous in case of colloidal precipitates.
Solubility in the presence of diverse ions
As expected from previous information, diverse ions have a screening effect on dissociated ions which leads to extra dissociation. Solubility will show a clear increase in presence of diverse ions as the solubility product will increase. Look at the following example:
Find the solubility of AgCl (Ksp = 1.0 x 10−10) in 0.1 M NaNO3. The activity coefficients for silver and chloride are 0.75 and 0.76, respectively.
AgCl(s) = Ag+ + Cl−
We can no longer use the thermodynamic equilibrium constant (i.e. in absence of diverse ions) and we have to consider the concentration equilibrium constant or use activities instead of concentration if we use Kth:
Ksp = aAg+ aCl−
Ksp = [Ag+] fAg+ [Cl−] fCl−
1.0 x 10−10 = s x 0.75 x s x 0.76
s = 1.3 x 10−5 M
We have calculated the solubility of AgCl in pure water to be 1.0 x 10−5 M, if we compare this value to that obtained in presence of diverse ions we see % increase in solubility = {(1.3 x 10−5 – 1.0 x 10−5) / 1.0 x 10−5} x 100 = 30%
Therefore, once again we have evidence for an increase in dissociation, or a shift of the equilibrium to the right, in the presence of diverse ions.
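The arithmetic in this example can be verified with a few lines; this is just a numerical check of the calculation above, using the values already quoted.

```python
import math

# Numerical check of the diverse-ion (activity) solubility example above.
ksp = 1.0e-10            # thermodynamic Ksp of AgCl
f_ag, f_cl = 0.75, 0.76  # activity coefficients in 0.1 M NaNO3

# Ksp = (s * f_Ag) * (s * f_Cl)  =>  s = sqrt(Ksp / (f_Ag * f_Cl))
s = math.sqrt(ksp / (f_ag * f_cl))
print(f"s in 0.1 M NaNO3: {s:.2e} M")   # about 1.3e-5 M, as in the text

s_rounded = 1.3e-5       # rounded value used in the text
s_pure_water = 1.0e-5    # solubility in pure water, from Ksp = s^2
print(f"increase: {100 * (s_rounded - s_pure_water) / s_pure_water:.0f} %")  # 30 %
```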
| Physical sciences | Chemical methods | Chemistry |
659939 | https://en.wikipedia.org/wiki/Square | Square | In Euclidean geometry, a square is a regular quadrilateral, which means that it has four straight sides of equal length and four equal angles (90-degree angles, π/2 radian angles, or right angles). It can also be defined as a rectangle with two equal-length adjacent sides. It is the only regular polygon whose internal angle, central angle, and external angle are all equal (90°). A square with vertices ABCD would be denoted ◻ABCD.
Characterizations
A quadrilateral is a square if and only if it is any one of the following:
A rectangle with two adjacent equal sides
A rhombus with a right vertex angle
A rhombus with all angles equal
A parallelogram with one right vertex angle and two adjacent equal sides
A quadrilateral with four equal sides and four right angles
A quadrilateral where the diagonals are equal, and are the perpendicular bisectors of each other (i.e., a rhombus with equal diagonals)
A convex quadrilateral with successive sides a, b, c, d whose area is
Properties
A square is a special case of a rhombus (equal sides, opposite equal angles), a kite (two pairs of adjacent equal sides), a trapezoid (one pair of opposite sides parallel), a parallelogram (all opposite sides parallel), a quadrilateral or tetragon (four-sided polygon), and a rectangle (opposite sides equal, right-angles), and therefore has all the properties of all these shapes, namely:
All four internal angles of a square are equal (each being 360°/4 = 90°, a right angle).
The central angle of a square is equal to 90° (360°/4).
The external angle of a square is equal to 90°.
The diagonals of a square are equal and bisect each other, meeting at 90°.
The diagonal of a square bisects its internal angle, forming adjacent angles of 45°.
All four sides of a square are equal.
Opposite sides of a square are parallel.
A square has Schläfli symbol {4}. A truncated square, t{4}, is an octagon, {8}. An alternated square, h{4}, is a digon, {2}. The square is the n = 2 case of the families of n-hypercubes and n-orthoplexes.
Perimeter and area
The perimeter of a square whose four sides have length s is P = 4s, and the area A is A = s².
Since four squared equals sixteen, a four by four square has an area equal to its perimeter. The only other quadrilateral with such a property is that of a three by six rectangle.
In classical times, the second power was described in terms of the area of a square, as in the above formula. This led to the use of the term square to mean raising to the second power.
The area can also be calculated using the diagonal d according to A = d²/2.
In terms of the circumradius R, the area of a square is A = 2R²; since the area of the circumscribed circle is πR², the square fills 2/π (about 0.6366) of its circumscribed circle.
In terms of the inradius r, the area of the square is A = 4r²; hence the area of the inscribed circle is π/4 (about 0.7854) of that of the square.
Because it is a regular polygon, a square is the quadrilateral of least perimeter enclosing a given area. Dually, a square is the quadrilateral containing the largest area within a given perimeter. Indeed, if A and P are the area and perimeter enclosed by a quadrilateral, then the following isoperimetric inequality holds: 16A ≤ P², with equality if and only if the quadrilateral is a square.
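These area relations are easy to check numerically; the short sketch below uses an arbitrary illustrative side length.

```python
import math

# Consistency check of the square-area formulas for an arbitrary side length.
s = 3.7                      # side length (arbitrary illustrative value)
d = s * math.sqrt(2)         # diagonal
R = d / 2                    # circumradius
r = s / 2                    # inradius
A = s * s

print(math.isclose(A, d * d / 2))    # True: A = d^2 / 2
print(math.isclose(A, 2 * R * R))    # True: A = 2 R^2
print(math.isclose(A, 4 * r * r))    # True: A = 4 r^2
print(16 * A <= (4 * s) ** 2)        # True: isoperimetric inequality, with equality here
```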
Other facts
The diagonals of a square are √2 (about 1.414) times the length of a side of the square. This value, known as the square root of 2 or Pythagoras' constant, was the first number proven to be irrational.
A square can also be defined as a parallelogram with equal diagonals that bisect the angles.
If a figure is both a rectangle (right angles) and a rhombus (equal edge lengths), then it is a square.
A square has a larger area than any other quadrilateral with the same perimeter.
A square tiling is one of three regular tilings of the plane (the others are the equilateral triangle and the regular hexagon).
The square is in two families of polytopes in two dimensions: hypercube and the cross-polytope. The Schläfli symbol for the square is {4}.
The square is a highly symmetric object. There are four lines of reflectional symmetry and it has rotational symmetry of order 4 (through 90°, 180° and 270°). Its symmetry group is the dihedral group D4.
A square can be inscribed inside any regular polygon. The only other polygon with this property is the equilateral triangle.
If the inscribed circle of a square ABCD has tangency points E on AB, F on BC, G on CD, and H on DA, then for any point P on the inscribed circle, 2(PH² − PE²) = PD² − PB².
If dᵢ is the distance from an arbitrary point in the plane to the i-th vertex of a square and R is the circumradius of the square, then (d₁⁴ + d₂⁴ + d₃⁴ + d₄⁴)/4 + 3R⁴ = ((d₁² + d₂² + d₃² + d₄²)/4 + R²)².
If L is the distance from an arbitrary point in the plane to the centroid of the square and d₁, d₂, d₃, d₄ are the distances to its four vertices (taken in order around the square), then
d₁² + d₃² = d₂² + d₄² = 2(L² + R²)
and
d₁²d₃² + d₂²d₄² = 2(L⁴ + R⁴),
where R is the circumradius of the square.
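These distance identities are easy to spot-check numerically; the following sketch tests them for a randomly chosen point and a square of circumradius 1 centred at the origin (purely illustrative).

```python
import math, random

# Spot-check of the vertex-distance identities for a square with circumradius R.
R = 1.0
vertices = [(R, 0.0), (0.0, R), (-R, 0.0), (0.0, -R)]    # square centred at the origin

x, y = random.uniform(-3, 3), random.uniform(-3, 3)      # arbitrary point in the plane
d = [math.hypot(x - vx, y - vy) for vx, vy in vertices]  # distances d1..d4 to the vertices
L = math.hypot(x, y)                                     # distance to the centroid

lhs = sum(di**4 for di in d) / 4 + 3 * R**4
rhs = (sum(di**2 for di in d) / 4 + R**2) ** 2
print(math.isclose(lhs, rhs))                                                   # True
print(math.isclose(d[0]**2 + d[2]**2, 2 * (L**2 + R**2)))                       # True
print(math.isclose(d[0]**2 * d[2]**2 + d[1]**2 * d[3]**2, 2 * (L**4 + R**4)))   # True
```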
Coordinates and equations
The coordinates for the vertices of a square with vertical and horizontal sides, centered at the origin and with side length 2 are (±1, ±1), while the interior of this square consists of all points (xi, yi) with −1 < xi < 1 and −1 < yi < 1. The equation
max(x², y²) = 1
specifies the boundary of this square. This equation means "x² or y², whichever is larger, equals 1." The circumradius of this square (the radius of a circle drawn through the square's vertices) is half the square's diagonal, and is equal to √2. Then the circumcircle has the equation
x² + y² = 2.
Alternatively the equation
|x − a| + |y − b| = r
can also be used to describe the boundary of a square with center coordinates (a, b), and a horizontal or vertical radius of r. The square is therefore the shape of a topological ball according to the L1 distance metric.
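A small sketch showing both descriptions in code: a boundary test using max(x², y²) for the axis-aligned square, and the L1 ("taxicab") form for the diagonally oriented one (the function names are ad hoc, not standard).

```python
# Ad-hoc membership tests for the two square boundaries described above.
def on_axis_aligned_boundary(x, y, tol=1e-9):
    """Boundary of the axis-aligned square with vertices (+-1, +-1): max(x^2, y^2) = 1."""
    return abs(max(x * x, y * y) - 1.0) < tol

def on_l1_boundary(x, y, a=0.0, b=0.0, r=1.0, tol=1e-9):
    """Boundary of the square |x - a| + |y - b| = r (drawn as a 'diamond')."""
    return abs(abs(x - a) + abs(y - b) - r) < tol

print(on_axis_aligned_boundary(1.0, 0.3))   # True: on the right-hand side
print(on_axis_aligned_boundary(0.5, 0.5))   # False: interior point
print(on_l1_boundary(0.5, 0.5))             # True: on the edge of the unit L1 ball
```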
Construction
A square can be constructed using a compass and straightedge. This is possible as 4 = 2², a power of two.
Symmetry
The square has Dih4 symmetry, order 8. There are 2 dihedral subgroups: Dih2, Dih1, and 3 cyclic subgroups: Z4, Z2, and Z1.
A square is a special case of many lower symmetry quadrilaterals:
A rectangle with two adjacent equal sides
A quadrilateral with four equal sides and four right angles
A parallelogram with one right angle and two adjacent equal sides
A rhombus with a right angle
A rhombus with all angles equal
A rhombus with equal diagonals
These 6 symmetries express 8 distinct symmetries on a square. John Conway labels these by a letter and group order.
Each subgroup symmetry allows one or more degrees of freedom for irregular quadrilaterals. r8 is full symmetry of the square, and a1 is no symmetry. d4 is the symmetry of a rectangle, and p4 is the symmetry of a rhombus. These two forms are duals of each other, and have half the symmetry order of the square. d2 is the symmetry of an isosceles trapezoid, and p2 is the symmetry of a kite. g2 defines the geometry of a parallelogram.
Only the g4 subgroup has no degrees of freedom, but can be seen as a square with directed edges.
Squares inscribed in triangles
Every acute triangle has three inscribed squares (squares in its interior such that all four of a square's vertices lie on a side of the triangle, so two of them lie on the same side and hence one side of the square coincides with part of a side of the triangle). In a right triangle two of the squares coincide and have a vertex at the triangle's right angle, so a right triangle has only two distinct inscribed squares. An obtuse triangle has only one inscribed square, with a side coinciding with part of the triangle's longest side.
The fraction of the triangle's area that is filled by the square is no more than 1/2.
Squaring the circle
Squaring the circle, proposed by ancient geometers, is the problem of constructing a square with the same area as a given circle, by using only a finite number of steps with compass and straightedge.
In 1882, the task was proven to be impossible as a consequence of the Lindemann–Weierstrass theorem, which proves that pi () is a transcendental number rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients.
Non-Euclidean geometry
In non-Euclidean geometry, squares are more generally polygons with 4 equal sides and equal angles.
In spherical geometry, a square is a polygon whose edges are great circle arcs of equal distance, which meet at equal angles. Unlike the square of plane geometry, the angles of such a square are larger than a right angle. Larger spherical squares have larger angles.
In hyperbolic geometry, squares with right angles do not exist. Rather, squares in hyperbolic geometry have angles of less than right angles. Larger hyperbolic squares have smaller angles.
Crossed square
A crossed square is a faceting of the square, a self-intersecting polygon created by removing two opposite edges of a square and reconnecting by its two diagonals. It has half the symmetry of the square, Dih2, order 4. It has the same vertex arrangement as the square, and is vertex-transitive. It appears as two 45-45-90 triangles with a common vertex, but the geometric intersection is not considered a vertex.
A crossed square is sometimes likened to a bow tie or butterfly. The crossed rectangle is related, as a faceting of the rectangle; both are special cases of crossed quadrilaterals.
The interior of a crossed square can have a polygon density of ±1 in each triangle, dependent upon the winding orientation as clockwise or counterclockwise.
A square and a crossed square have the following properties in common:
Opposite sides are equal in length.
The two diagonals are equal in length.
It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).
It exists in the vertex figure of a uniform star polyhedron, the tetrahemihexahedron.
Graphs
The K4 complete graph is often drawn as a square with all 6 possible edges connected, hence appearing as a square with both diagonals drawn. This graph also represents an orthographic projection of the 4 vertices and 6 edges of the regular 3-simplex (tetrahedron).
| Mathematics | Geometry | null |
659942 | https://en.wikipedia.org/wiki/Square%20%28algebra%29 | Square (algebra) | In mathematics, a square is the result of multiplying a number by itself. The verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2, and is denoted by a superscript 2; for instance, the square of 3 may be written as 3², which is the number 9.
In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations x^2 (caret) or x**2 may be used in place of x².
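For instance (an illustrative note of mine, using Python), ** is exponentiation while the caret is bitwise XOR, so only x**2 actually squares a number there:

    x = 3
    print(x ** 2)        # 9  -- exponentiation
    print(pow(x, 2))     # 9  -- equivalent built-in
    print(x ^ 2)         # 1  -- caution: in Python, ^ is bitwise XOR, not a power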
The adjective which corresponds to squaring is quadratic.
The square of an integer may also be called a square number or a perfect square. In algebra, the operation of squaring is often generalized to polynomials, other expressions, or values in systems of mathematical values other than the numbers. For instance, the square of a linear polynomial is a quadratic polynomial.
One of the important properties of squaring, for numbers as well as in many other mathematical systems, is that (for all numbers x) the square of x is the same as the square of its additive inverse −x. That is, the square function satisfies the identity (−x)² = x². This can also be expressed by saying that the square function is an even function.
In real numbers
The squaring operation defines a real function called the square function or the squaring function. Its domain is the whole real line, and its image is the set of nonnegative real numbers.
The square function preserves the order of positive numbers: larger numbers have larger squares. In other words, the square is a monotonic function on the interval [0, +∞). On the negative numbers, numbers with greater absolute value have greater squares, so the square is a monotonically decreasing function on (−∞, 0]. Hence, zero is the (global) minimum of the square function.
The square of a number x is less than x (that is, x² < x) if and only if 0 < x < 1, that is, if x belongs to the open interval (0, 1). This implies that the square of an integer is never less than the original number.
Every positive real number is the square of exactly two numbers, one of which is strictly positive and the other of which is strictly negative. Zero is the square of only one number, itself. For this reason, it is possible to define the square root function, which associates with a non-negative real number the non-negative number whose square is the original number.
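A brief illustration (my own) of the two real square roots of a positive number and the convention that the square root function returns the non-negative one; the example value 2.25 is chosen so the floating-point arithmetic is exact:

    import math

    y = 2.25
    r = math.sqrt(y)          # principal (non-negative) square root
    print(r, -r)              # 1.5 and -1.5 both square to y
    print(r * r == y, (-r) * (-r) == y)   # True True (exact here since 2.25 = 1.5**2)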
No square root can be taken of a negative number within the system of real numbers, because squares of all real numbers are non-negative. The lack of real square roots for the negative numbers can be used to expand the real number system to the complex numbers, by postulating the imaginary unit , which is one of the square roots of −1.
The property "every non-negative real number is a square" has been generalized to the notion of a real closed field, which is an ordered field such that every non-negative element is a square and every polynomial of odd degree has a root. The real closed fields cannot be distinguished from the field of real numbers by their algebraic properties: every property of the real numbers, which may be expressed in first-order logic (that is expressed by a formula in which the variables that are quantified by ∀ or ∃ represent elements, not sets), is true for every real closed field, and conversely every property of the first-order logic, which is true for a specific real closed field is also true for the real numbers.
In geometry
There are several major uses of the square function in geometry.
The name of the square function shows its importance in the definition of the area: it comes from the fact that the area of a square with sides of length s is equal to s². The area depends quadratically on the size: the area of a shape n times larger is n² times greater. This holds for areas in three dimensions as well as in the plane: for instance, the surface area of a sphere is proportional to the square of its radius, a fact that is manifested physically by the inverse-square law describing how the strength of physical forces such as gravity varies according to distance.
The square function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law. Euclidean distance is not a smooth function: the three-dimensional graph of distance from a fixed point forms a cone, with a non-smooth point at the tip of the cone. However, the square of the distance (denoted d² or r²), which has a paraboloid as its graph, is a smooth and analytic function.
The dot product of a Euclidean vector with itself is equal to the square of its length: x ⋅ x = x². This is further generalised to quadratic forms in linear spaces via the inner product. The inertia tensor in mechanics is an example of a quadratic form. It demonstrates a quadratic relation of the moment of inertia to the size (length).
There are infinitely many Pythagorean triples, sets of three positive integers such that the sum of the squares of the first two equals the square of the third. Each of these triples gives the integer sides of a right triangle.
In abstract algebra and number theory
The square function is defined in any field or ring. An element in the image of this function is called a square, and the inverse images of a square are called square roots.
The notion of squaring is particularly important in the finite fields Z/pZ formed by the numbers modulo an odd prime number p. A non-zero element of this field is called a quadratic residue if it is a square in Z/pZ, and otherwise, it is called a quadratic non-residue. Zero, while a square, is not considered to be a quadratic residue. Every finite field of this type has exactly (p − 1)/2 quadratic residues and exactly (p − 1)/2 quadratic non-residues. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory.
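A short sketch (my own; p = 11 is an arbitrary example) enumerating the non-zero squares modulo an odd prime and confirming the (p − 1)/2 split:

    def quadratic_residues(p: int) -> set[int]:
        """Non-zero squares modulo an odd prime p."""
        return {(x * x) % p for x in range(1, p)}

    p = 11
    residues = quadratic_residues(p)
    non_residues = set(range(1, p)) - residues
    print(sorted(residues))                  # [1, 3, 4, 5, 9] -- exactly (p - 1) // 2 = 5 of them
    print(len(residues), len(non_residues))  # 5 5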
More generally, in rings, the square function may have different properties that are sometimes used to classify rings.
Zero may be the square of some non-zero elements. A commutative ring such that the square of a non-zero element is never zero is called a reduced ring. More generally, in a commutative ring, a radical ideal is an ideal I such that x² ∈ I implies x ∈ I. Both notions are important in algebraic geometry, because of Hilbert's Nullstellensatz.
An element of a ring that is equal to its own square is called an idempotent. In any ring, 0 and 1 are idempotents. There are no other idempotents in fields and more generally in integral domains. However,
the ring of the integers modulo n has 2^k idempotents, where k is the number of distinct prime factors of n (a small enumeration appears after this list).
A commutative ring in which every element is equal to its square (every element is idempotent) is called a Boolean ring; an example from computer science is the ring whose elements are binary numbers, with bitwise AND as the multiplication operation and bitwise XOR as the addition operation.
In a totally ordered ring, x² ≥ 0 for any x. Moreover, x² = 0 if and only if x = 0.
In a supercommutative algebra where 2 is invertible, the square of any odd element equals zero.
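A short sketch (my own; the modulus 60 and the helper names are illustrative choices) enumerating the idempotents of the integers modulo n, checking the 2^k count, and showing the Boolean-ring behaviour of bitwise AND mentioned above:

    def distinct_prime_factors(n: int) -> int:
        count, d = 0, 2
        while d * d <= n:
            if n % d == 0:
                count += 1
                while n % d == 0:
                    n //= d
            d += 1
        return count + (1 if n > 1 else 0)

    def idempotents_mod(n: int) -> list[int]:
        return [x for x in range(n) if (x * x) % n == x]

    n = 60  # 2^2 * 3 * 5 -> 3 distinct prime factors -> 2**3 = 8 idempotents
    print(idempotents_mod(n))                                # [0, 1, 16, 21, 25, 36, 40, 45]
    print(len(idempotents_mod(n)), 2 ** distinct_prime_factors(n))  # 8 8

    # Boolean-ring example: with XOR as "+" and AND as "*", every element squares to itself.
    x = 0b1011
    print((x & x) == x)  # True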
If A is a commutative semigroup, then one has (xy)² = (xy)(xy) = (xx)(yy) = x²y² for all x, y in A.
In the language of quadratic forms, this equality says that the square function is a "form permitting composition". In fact, the square function is the foundation upon which other quadratic forms are constructed which also permit composition. The procedure was introduced by L. E. Dickson to produce the octonions out of quaternions by doubling. The doubling method was formalized by A. A. Albert who started with the real number field and the square function, doubling it to obtain the complex number field with quadratic form x² + y², and then doubling again to obtain quaternions. The doubling procedure is called the Cayley–Dickson construction, and has been generalized to form algebras of dimension 2^n over a field F with involution.
The square function z² is the "norm" of the composition algebra ℂ, where the identity function forms a trivial involution to begin the Cayley–Dickson constructions leading to bicomplex, biquaternion, and bioctonion composition algebras.
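A compact sketch (my own; the Doubled class and conj helper are hypothetical names) of a single Cayley–Dickson doubling step applied to the real numbers with the trivial involution, reproducing complex multiplication and the quadratic form x² + y²:

    class Doubled:
        """One Cayley-Dickson doubling step over a base type with a conj() involution."""
        def __init__(self, a, b):
            self.a, self.b = a, b
        def conj(self):
            return Doubled(conj(self.a), -self.b)
        def __mul__(self, other):
            # Standard doubling formula: (a, b)(c, d) = (a c - d* b, d a + b c*)
            a, b, c, d = self.a, self.b, other.a, other.b
            return Doubled(a * c - conj(d) * b, d * a + b * conj(c))
        def __repr__(self):
            return f"({self.a}, {self.b})"

    def conj(x):
        # Real numbers carry the trivial involution; Doubled values use their own.
        return x.conj() if isinstance(x, Doubled) else x

    i = Doubled(0.0, 1.0)     # the adjoined square root of -1
    print(i * i)              # (-1.0, 0.0)
    z = Doubled(3.0, 4.0)
    print(z * z.conj())       # (25.0, 0.0) -- the quadratic form x**2 + y**2

Applying the same step to these pairs again would give quaternion-like 4-tuples, mirroring the doubling chain described above.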
In complex numbers
On complex numbers, the square function is a twofold cover in the sense that each non-zero complex number has exactly two square roots.
The square of the absolute value of a complex number is called its absolute square, squared modulus, or squared magnitude. It is the product of the complex number with its complex conjugate, and equals the sum of the squares of the real and imaginary parts of the complex number.
The absolute square of a complex number is always a nonnegative real number, that is zero if and only if the complex number is zero. It is easier to compute than the absolute value (no square root), and is a smooth real-valued function. Because of these two properties, the absolute square is often preferred to the absolute value for explicit computations and when methods of mathematical analysis are involved (for example optimization or integration).
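An illustrative snippet (my own) computing the absolute square of a complex number in the three equivalent ways just described:

    z = 3 + 4j
    print((z * z.conjugate()).real)    # 25.0 -- product with the complex conjugate
    print(z.real ** 2 + z.imag ** 2)   # 25.0 -- sum of squared real and imaginary parts
    print(abs(z) ** 2)                 # 25.0 -- via the absolute value (needs a square root first)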
For complex vectors, the dot product can be defined involving the conjugate transpose, leading to the squared norm.
Other uses
Squares are ubiquitous in algebra, more generally, in almost every branch of mathematics, and also in physics where many units are defined using squares and inverse squares: see below.
Least squares is the standard method used with overdetermined systems.
Squaring is used in statistics and probability theory in determining the standard deviation of a set of values, or a random variable. The deviation of each value x from the mean x̄ of the set is defined as the difference x − x̄. These deviations are squared, then a mean is taken of the new set of numbers (each of which is positive). This mean is the variance, and its square root is the standard deviation.
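A minimal sketch (my own, with made-up sample values) of the computation just described: squared deviations from the mean, their mean (the variance), and its square root (the standard deviation):

    import math

    values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    mean = sum(values) / len(values)
    squared_deviations = [(x - mean) ** 2 for x in values]
    variance = sum(squared_deviations) / len(values)   # population variance
    std_dev = math.sqrt(variance)
    print(mean, variance, std_dev)   # 5.0 4.0 2.0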
| Mathematics | Specific functions | null |
660657 | https://en.wikipedia.org/wiki/Rankine%20cycle | Rankine cycle | The Rankine cycle is an idealized thermodynamic cycle describing the process by which certain heat engines, such as steam turbines or reciprocating steam engines, allow mechanical work to be extracted from a fluid as it moves between a heat source and heat sink. The Rankine cycle is named after William John Macquorn Rankine, a Scottish polymath professor at Glasgow University.
Heat energy is supplied to the system via a boiler where the working fluid (typically water) is converted to a high-pressure gaseous state (steam) in order to turn a turbine. After passing over the turbine the fluid is allowed to condense back into a liquid state as waste heat energy is rejected before being returned to boiler, completing the cycle. Friction losses throughout the system are often neglected for the purpose of simplifying calculations as such losses are usually much less significant than thermodynamic losses, especially in larger systems.
Description
The Rankine cycle closely describes the process by which steam engines commonly found in thermal power generation plants harness the thermal energy of a fuel or other heat source to generate electricity. Possible heat sources include combustion of fossil fuels such as coal, natural gas, and oil, use of mined resources for nuclear fission, renewable fuels like biomass and ethanol, and energy capture of natural sources such as concentrated solar power and geothermal energy. Common heat sinks include ambient air above or around a facility and bodies of water such as rivers, ponds, and oceans.
The ability of a Rankine engine to harness energy depends on the relative temperature difference between the heat source and heat sink. The greater the differential, the more mechanical power can be efficiently extracted out of heat energy, as per Carnot's theorem.
The efficiency of the Rankine cycle is limited by the high heat of vaporization of the working fluid. Unless the pressure and temperature reach supercritical levels in the boiler, the temperature range over which the cycle can operate is quite small. As of 2022, most supercritical power plants adopt a steam inlet pressure of 24.1 MPa and inlet temperature between 538°C and 566°C, which results in plant efficiency of 40%. However, if pressure is further increased to 31 MPa the power plant is referred to as ultra-supercritical, and one can increase the steam inlet temperature to 600°C, thus achieving a thermal efficiency of 42%. This low steam turbine entry temperature (compared to a gas turbine) is why the Rankine (steam) cycle is often used as a bottoming cycle to recover otherwise rejected heat in combined-cycle gas turbine power stations. The idea is that very hot combustion products are first expanded in a gas turbine, and then the exhaust gases, which are still relatively hot, are used as a heat source for the Rankine cycle, thus reducing the temperature difference between the heat source and the working fluid and therefore reducing the amount of entropy generated by irreversibility.
Rankine engines generally operate in a closed loop in which the working fluid is reused. The water vapor with condensed droplets often seen billowing from power stations is created by the cooling systems (not directly from the closed-loop Rankine power cycle). This "exhaust" heat is represented by the "Qout" flowing out of the lower side of the cycle shown in the T–s diagram below. Cooling towers operate as large heat exchangers by absorbing the latent heat of vaporization of the working fluid and simultaneously evaporating cooling water to the atmosphere.
While many substances can be used as the working fluid, water is usually chosen for its simple chemistry, relative abundance, low cost, and thermodynamic properties. By condensing the working steam vapor to a liquid, the pressure at the turbine outlet is lowered, and the energy required by the feed pump consumes only 1% to 3% of the turbine output power. These factors contribute to a higher efficiency for the cycle. The benefit of this is offset by the low temperatures of steam admitted to the turbine(s). Gas turbines, for instance, have turbine entry temperatures approaching 1500 °C. However, the thermal efficiencies of actual large steam power stations and large modern gas turbine stations are similar.
The four processes in the Rankine cycle
There are four processes in the Rankine cycle. The states are identified by numbers (in brown) in the T–s diagram.
In an ideal Rankine cycle the pump and turbine would be isentropic: i.e., the pump and turbine would generate no entropy and would hence maximize the net work output. Processes 1–2 and 3–4 would be represented by vertical lines on the T–s diagram and more closely resemble that of the Carnot cycle. The Rankine cycle shown here prevents the state of the working fluid from ending up in the superheated vapor region after the expansion in the turbine,
which reduces the energy removed by the condensers.
The actual vapor power cycle differs from the ideal Rankine cycle because of irreversibilities in the inherent components caused by fluid friction and heat loss to the surroundings; fluid friction causes pressure drops in the boiler, the condenser, and the piping between the components, and as a result the steam leaves the boiler at a lower pressure; heat loss reduces the net work output, thus heat addition to the steam in the boiler is required to maintain the same level of net work output.
Variables
Equations
The thermodynamic efficiency of the cycle is defined as the ratio of net power output to heat input: η_th = (Ẇ_turbine − Ẇ_pump) / Q̇_in. As the work required by the pump is often around 1% of the turbine work output, it can be simplified to η_th ≈ Ẇ_turbine / Q̇_in.
Each of the next four equations is derived from the energy and mass balance for a control volume.
When dealing with the efficiencies of the turbines and pumps, an adjustment to the work terms must be made: the actual turbine work is the isentropic work multiplied by the turbine's isentropic efficiency, while the actual pump work is the isentropic work divided by the pump's isentropic efficiency (Ẇ_turbine = η_turbine · Ẇ_turbine,isentropic and Ẇ_pump = Ẇ_pump,isentropic / η_pump).
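A minimal numerical sketch (my own; the enthalpy values and component efficiencies below are made-up placeholders, not figures from the source) of how these balances and adjustments combine into a cycle efficiency:

    # Illustrative Rankine-cycle bookkeeping per unit mass flow (kJ/kg).
    # h1..h4 follow the usual state numbering: 1 condenser exit, 2 pump exit,
    # 3 boiler/superheater exit, 4 turbine exit. The numbers are placeholders.
    h1, h2s, h3, h4s = 191.8, 195.0, 3230.9, 2192.0   # "s" marks ideal (isentropic) states
    eta_turbine, eta_pump = 0.85, 0.80                # assumed component efficiencies

    w_turbine = eta_turbine * (h3 - h4s)   # real turbine delivers less than the ideal enthalpy drop
    w_pump = (h2s - h1) / eta_pump         # real pump needs more than the ideal work
    q_in = h3 - (h1 + w_pump)              # heat added in the boiler
    eta_thermal = (w_turbine - w_pump) / q_in

    print(f"net work   {w_turbine - w_pump:7.1f} kJ/kg")
    print(f"heat in    {q_in:7.1f} kJ/kg")
    print(f"efficiency {eta_thermal:.3f}")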
Real Rankine cycle (non-ideal)
In a real power-plant cycle (the name "Rankine" cycle is used only for the ideal cycle), the compression by the pump and the expansion in the turbine are not isentropic. In other words, these processes are non-reversible, and entropy is increased during the two processes. This somewhat increases the power required by the pump and decreases the power generated by the turbine.
In particular, the efficiency of the steam turbine will be limited by water-droplet formation. As the water condenses, water droplets hit the turbine blades at high speed, causing pitting and erosion, gradually decreasing the life of turbine blades and efficiency of the turbine. The easiest way to overcome this problem is by superheating the steam. On the T–s diagram above, state 3 is at a border of the two-phase region of steam and water, so after expansion the steam will be very wet. By superheating, state 3 will move to the right (and up) in the diagram and hence produce a drier steam after expansion.
Variations of the basic Rankine cycle
The overall thermodynamic efficiency can be increased by raising the average heat input temperature
of that cycle. Increasing the temperature of the steam into the superheat region is a simple way of doing this. There are also variations of the basic Rankine cycle designed to raise the thermal efficiency of the cycle in this way; two of these are described below.
Rankine cycle with reheat
The purpose of a reheating cycle is to remove the moisture carried by the steam at the final stages of the expansion process. In this variation, two turbines work in series. The first accepts vapor from the boiler at high pressure. After the vapor has passed through the first turbine, it re-enters the boiler and is reheated before passing through a second, lower-pressure, turbine. The reheat temperatures are very close or equal to the inlet temperatures, whereas the optimal reheat pressure needed is only one fourth of the original boiler pressure. Among other advantages, this prevents the vapor from condensing during its expansion and thereby reducing the damage in the turbine blades, and improves the efficiency of the cycle, because more of the heat flow into the cycle occurs at higher temperature. The reheat cycle was first introduced in the 1920s, but was not operational for long due to technical difficulties. In the 1940s, it was reintroduced with the increasing manufacture of high-pressure boilers, and eventually double reheating was introduced in the 1950s. The idea behind double reheating is to increase the average temperature. It was observed that more than two stages of reheating are generally unnecessary, since the next stage increases the cycle efficiency only half as much as the preceding stage. Today, double reheating is commonly used in power plants that operate under supercritical pressure.
Regenerative Rankine cycle
The regenerative Rankine cycle is so named because after emerging from the condenser (possibly as a subcooled liquid) the working fluid is heated by steam tapped from the hot portion of the cycle. On the diagram shown, the fluid at 2 is mixed with the fluid at 4 (both at the same pressure) to end up with the saturated liquid at 7. This is called "direct-contact heating". The Regenerative Rankine cycle (with minor variants) is commonly used in real power stations.
Another variation sends bleed steam from between turbine stages to feedwater heaters to preheat the water on its way from the condenser to the boiler. These heaters do not mix the input steam and condensate, function as an ordinary tubular heat exchanger, and are named "closed feedwater heaters".
Regeneration increases the cycle heat input temperature by eliminating the addition of heat from the boiler/fuel source at the relatively low feedwater temperatures that would exist without regenerative feedwater heating. This improves the efficiency of the cycle, as more of the heat flow into the cycle occurs at higher temperature.
Organic Rankine cycle
The organic Rankine cycle (ORC) uses an organic fluid such as n-pentane or toluene in place of water and steam. This allows use of lower-temperature heat sources, such as solar ponds, which typically operate at around 70–90 °C. The efficiency of the cycle is much lower as a result of the lower temperature range, but this can be worthwhile because of the lower cost involved in gathering heat at this lower temperature. Alternatively, fluids can be used that have boiling points above water, and this may have thermodynamic benefits (see, for example, the mercury vapour turbine). The properties of the actual working fluid have great influence on the quality of steam (vapour) after the expansion step, influencing the design of the whole cycle.
The Rankine cycle does not restrict the working fluid in its definition, so the name "organic cycle" is simply a marketing concept and the cycle should not be regarded as a separate thermodynamic cycle.
Supercritical Rankine cycle
The Rankine cycle applied using a supercritical fluid combines the concepts of heat regeneration and supercritical Rankine cycle into a unified process called the regenerative supercritical cycle (RGSC). It is optimised for temperature sources 125–450 °C.
| Physical sciences | Thermodynamics | Physics |
660870 | https://en.wikipedia.org/wiki/Doxycycline | Doxycycline | Doxycycline is a broad-spectrum antibiotic of the tetracycline class used in the treatment of infections caused by bacteria and certain parasites. It is used to treat bacterial pneumonia, acne, chlamydia infections, Lyme disease, cholera, typhus, and syphilis. It is also used to prevent malaria. Doxycycline may be taken by mouth or by injection into a vein.
Common side effects include diarrhea, nausea, vomiting, abdominal pain, and an increased risk of sunburn. Use during pregnancy is not recommended. Like other agents of the tetracycline class, it either slows or kills bacteria by inhibiting protein production. It kills malaria by targeting a plastid organelle, the apicoplast.
Doxycycline was patented in 1957 and came into commercial use in 1967. It is on the World Health Organization's List of Essential Medicines. Doxycycline is available as a generic medicine. In 2022, it was the 68th most commonly prescribed medication in the United States, with more than 9 million prescriptions.
Medical uses
In addition to the general indications for all members of the tetracycline antibiotics group, doxycycline is frequently used to treat Lyme disease, chronic prostatitis, sinusitis, pelvic inflammatory disease, severe acne, rosacea, and rickettsial infections. The efficiency of oral doxycycline for treating papulopustular rosacea and adult acne is not solely based on its antibiotic properties, but also on its anti-inflammatory and anti-angiogenic properties.
In Canada, in 2004, doxycycline was considered a first-line treatment for chlamydia and non-gonococcal urethritis and with cefixime for uncomplicated gonorrhea.
Antibacterial
General indications
Doxycycline is a broad-spectrum antibiotic that is employed in the treatment of numerous bacterial infections. It is effective against bacteria such as Moraxella catarrhalis, Brucella melitensis, Chlamydia pneumoniae, and Mycoplasma pneumoniae. Additionally, doxycycline is used in the prevention and treatment of serious conditions like anthrax, leptospirosis, bubonic plague, and Lyme disease. However, some bacteria, including Haemophilus spp., Mycoplasma hominis, and Pseudomonas aeruginosa, have shown resistance to doxycycline. It is also effective against Yersinia pestis (the infectious agent of bubonic plague), and is prescribed for the treatment of Lyme disease, ehrlichiosis, and Rocky Mountain spotted fever.
Specifically, doxycycline is indicated for treatment of the following diseases:
Rocky Mountain spotted fever, typhus fever and the typhus group, scrub typhus, Q fever, rickettsialpox, and tick fevers caused by Rickettsia,
respiratory tract infections caused by Mycoplasma pneumoniae,
Lymphogranuloma venereum, trachoma, inclusion conjunctivitis, and uncomplicated urethral, endocervical, or rectal infections in adults caused by Chlamydia trachomatis,
psittacosis,
non-gonococcal urethritis caused by Ureaplasma urealyticum,
relapsing fever due to Borrelia recurrentis,
chancroid caused by Haemophilus ducreyi,
plague due to Yersinia pestis,
tularemia,
cholera,
campylobacter fetus infections,
brucellosis caused by Brucella species (in conjunction with streptomycin),
bartonellosis,
granuloma inguinale (Klebsiella species),
Lyme disease (Borrelia species).
Gram-negative bacteria specific indications
When bacteriologic testing indicates appropriate susceptibility to the drug, doxycycline may be used to treat these infections caused by Gram-negative bacteria:
Escherichia coli infections,
Enterobacter aerogenes (formerly Aerobacter aerogenes) infections,
Shigella species infections,
Acinetobacter species (formerly Mima species and Herellea species) infections,
respiratory tract infections caused by Haemophilus influenzae,
respiratory tract and urinary tract infections caused by Klebsiella species.
Gram-positive bacteria specific indications
Some Gram-positive bacteria have developed resistance to doxycycline. Up to 44% of Streptococcus pyogenes and up to 74% of S. faecalis specimens have developed resistance to the tetracycline group of antibiotics. Up to 57% of P. acnes strains developed resistance to doxycycline. When bacteriologic testing indicates appropriate susceptibility to the drug, doxycycline may be used to treat these infections caused by Gram-positive bacteria:
upper respiratory infections caused by Streptococcus pneumoniae (formerly Diplococcus pneumoniae),
skin and soft tissue infections caused by Staphylococcus aureus, including methicillin-resistant Staphylococcus aureus infections,
anthrax caused by Bacillus anthracis infection.
Specific applications of doxycycline when penicillin is contraindicated
When penicillin is contraindicated, doxycycline can be used to treat:
syphilis caused by Treponema pallidum,
yaws caused by Treponema pertenue,
listeriosis due to Listeria monocytogenes,
Vincent's infection caused by Fusobacterium fusiforme,
actinomycosis caused by Actinomyces israelii,
infections caused by Clostridium species.
Use as adjunctive therapy
Doxycycline may also be used as adjunctive therapy for severe acne.
Subantimicrobial-dose doxycycline (SDD) is widely used as an adjunctive treatment to scaling and root planing for periodontitis. A meta-analysis published in 2011 found significant differences in all investigated clinical parameters of periodontitis in favor of the scaling and root planing plus SDD group, with an SDD regimen of 20 mg twice daily for three months. SDD is also used to treat skin conditions such as acne and rosacea, including ocular rosacea, for which the treatment period is 2 to 3 months. After discontinuation of doxycycline, recurrences may occur within three months; therefore, many studies recommend either slow tapering or treatment with a lower dose over a longer period of time.
Doxycycline is used as an adjunctive therapy for acute intestinal amebiasis.
Doxycycline is also used as an adjunctive therapy for chancroid.
As prophylaxis against sexually transmitted infections
Doxycycline is used for post-exposure prophylaxis (PEP) to reduce the incidence of sexually transmitted bacterial infections (STIs), but it has been associated with tetracycline resistance in associated species, in particular, in Neisseria gonorrhoeae. For this reason, the Australian consensus statement mentions that doxycycline for PEP particularly in gay, bisexual, and other men who have sex with men (GBMSM) should be considered only for the prevention of syphilis in GBMSM, and that the risk of increasing antimicrobial resistance outweighed any potential benefit from reductions in other bacterial STIs in GBMSM.
Appropriate use of doxycycline for PEP is supported by guidelines from the US Centers for Disease Control and Prevention (CDC) and the Australasian Society for HIV Medicine.
Use in combination
The first-line treatment for brucellosis is a combination of doxycycline and streptomycin. The second-line is a combination of doxycycline and rifampicin (rifampin).
Antimalarial
Doxycycline is active against the erythrocytic stages of Plasmodium falciparum but not against the gametocytes of P. falciparum. It is used to prevent malaria. It is not recommended alone for initial treatment of malaria, even when the parasite is doxycycline-sensitive, because the antimalarial effect of doxycycline is delayed.
Doxycycline blocks protein production in apicoplast (an organelle) of P. falciparum—such blocking leads to two main effects: it disrupts the parasite's ability to produce fatty acids, which are essential for its growth, and it impairs the production of heme, a cofactor. These effects occur late in the parasite's life cycle when it is in the blood stage, causing the symptoms of malaria. By blocking important processes in the parasite, doxycycline both inhibits the growth and prevents the multiplication of P. falciparum. It does not directly kill the living organisms of P. falciparum but creates conditions that prevent their growth and replication.
The World Health Organization (WHO) guidelines state that the combination of doxycycline with either artesunate or quinine may be used for the treatment of uncomplicated malaria due to P. falciparum or following intravenous treatment of severe malaria.
Antihelminthic
Doxycycline kills the symbiotic Wolbachia bacteria in the reproductive tracts of parasitic filarial nematodes, making the nematodes sterile, and thus reducing transmission of diseases such as onchocerciasis and elephantiasis. Field trials in 2005 showed an eight-week course of doxycycline almost eliminates the release of microfilariae.
Spectrum of susceptibility
Doxycycline has been used successfully to treat sexually transmitted, respiratory, and ophthalmic infections. Representative pathogenic genera include Chlamydia, Streptococcus, Ureaplasma, Mycoplasma, and others. The following represents minimum inhibitory concentration susceptibility data for a few medically significant microorganisms.
Chlamydia psittaci: 0.03 μg/mL
Mycoplasma pneumoniae: 0.016–2 μg/mL
Streptococcus pneumoniae: 0.06–32 μg/mL
Sclerotherapy
Doxycycline is also used for sclerotherapy in slow-flow vascular malformations, namely venous and lymphatic malformations, as well as post-operative lymphoceles.
Off-label use
Doxycycline has found off-label use in the treatment of transthyretin amyloidosis (ATTR). Together with tauroursodeoxycholic acid, doxycycline appears to be a promising combination capable of disrupting transthyretin (TTR) fibrils in existing amyloid deposits of ATTR patients.
Routes of administration
Doxycycline can be administered via oral or intravenous routes.
The combination of doxycycline with dairy, antacids, calcium supplements, iron products, laxatives containing magnesium, or bile acid sequestrants is not inherently dangerous, but any of these foods and supplements may decrease absorption of doxycycline.
Doxycycline has a high oral bioavailability, as it is almost completely absorbed in the stomach and proximal small intestine. Unlike other tetracyclines, its absorption is not significantly affected by food or dairy intake. However, co-administration of dairy products reduces the serum concentration of doxycycline by 20%. Doxycycline absorption is also inhibited by divalent and trivalent cations, such as iron, bismuth, aluminum, calcium and magnesium. Doxycycline forms unstable complexes with metal ions in the acidic gastric environment, which dissociate in the small intestine, allowing the drug to be absorbed. However, some doxycycline remains complexed with metal ions in the duodenum, resulting in a slight decrease in absorption.
Contraindications
Severe liver disease or concomitant use of isotretinoin or other retinoids are contraindications, as both tetracyclines and retinoids can cause intracranial hypertension (increased pressure around the brain) in rare cases.
Pregnancy and lactation
Doxycycline is categorized by the FDA as a class D drug in pregnancy. Doxycycline crosses into breastmilk. Other tetracycline antibiotics are contraindicated in pregnancy and up to eight years of age, due to the potential for disrupting bone and tooth development. They include a class warning about staining of teeth and decreased development of dental enamel in children exposed to tetracyclines in utero, during breastfeeding or during young childhood. However, the FDA has acknowledged that the actual risk of dental staining of primary teeth is undetermined for doxycycline specifically. The best available evidence indicates that doxycycline has little or no effect on hypoplasia of dental enamel or on staining of teeth. The CDC recommends the use of doxycycline for treatment of Q fever and tick-borne rickettsial diseases in young children; others advocate for its use in malaria.
Adverse effects
Adverse effects are similar to those of other members of the tetracycline antibiotic group. Doxycycline can cause gastrointestinal upset. Oral doxycycline can cause pill esophagitis, particularly when it is swallowed without adequate fluid, or by persons with difficulty swallowing or impaired mobility. Doxycycline is less likely than other antibiotic drugs to cause Clostridioides difficile colitis.
An erythematous rash in sun-exposed parts of the body has been reported to occur in 7.3–21.2% of persons taking doxycycline as prophylaxis against malaria. One study examined the tolerability of various malaria prophylactic regimens and found doxycycline did not cause a significantly higher percentage of all skin events (photosensitivity not specified) when compared with other antimalarials. The rash resolves upon discontinuation of the drug.
Unlike some other members of the tetracycline group, it may be used in those with renal impairment.
Doxycycline use has been associated with increased risk of inflammatory bowel disease. In one large retrospective study, patients who were prescribed doxycycline for their acne had a 2.25-fold greater risk of developing Crohn's disease.
Interactions
Previously, doxycycline was believed to impair the effectiveness of many types of hormonal contraception due to CYP450 induction. Research has shown no significant loss of effectiveness in oral contraceptives while using most tetracycline antibiotics (including doxycycline), although many physicians still recommend the use of barrier contraception for people taking the drug to prevent unwanted pregnancy.
Pharmacology
Doxycycline, like other tetracycline antibiotics, is bacteriostatic. It works by preventing bacteria from reproducing by inhibiting protein synthesis.
Doxycycline is highly lipophilic, so it can easily enter cells, meaning the drug is easily absorbed after oral administration and has a large volume of distribution. It can also be re-absorbed in the renal tubules and gastrointestinal tract due to its high lipophilicity, giving it a long elimination half-life. It is also prevented from accumulating in the kidneys of patients with kidney failure due to the compensatory excretion in faeces. Doxycycline–metal ion complexes are unstable at acidic pH, therefore more doxycycline enters the duodenum for absorption than the earlier tetracycline compounds. In addition, food has less effect on the absorption of doxycycline than on the absorption of earlier drugs, with doxycycline serum concentrations being reduced by about 20% by test meals compared with 50% for tetracycline.
Mechanism of action
Doxycycline is a broad-spectrum bacteriostatic antibiotic. It inhibits the synthesis of bacterial proteins by binding to the 30S ribosomal subunit, which is only found in bacteria. This prevents the binding of transfer RNA to messenger RNA at the ribosomal subunit, meaning amino acids cannot be added to polypeptide chains and new proteins cannot be made. This stops bacterial growth, giving the immune system time to kill and remove the bacteria.
Pharmacokinetics
The substance is almost completely absorbed from the upper part of the small intestine. It reaches highest concentrations in the blood plasma after one to two hours and has a high plasma protein binding rate of about 80–90%. Doxycycline penetrates into almost all tissues and body fluids. Very high concentrations are found in the gallbladder, liver, kidneys, lungs, breast milk, bones, and genitals; low concentrations are found in saliva, aqueous humor, cerebrospinal fluid (CSF), and especially in inflamed meninges. By comparison, the tetracycline antibiotic minocycline penetrates significantly better into the CSF and meninges.
Doxycycline metabolism is negligible. It is actively excreted into the gut (in part via the gallbladder, in part directly from blood vessels), where some of it is inactivated by forming chelates. About 40% are eliminated via the kidneys, much less in people with end-stage kidney disease. The biological half-life is 18 to 22 hours (16 ± 6 hours according to another source) in healthy people, slightly longer in those with end-stage kidney disease, and significantly longer in those with liver disease.
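As an illustration only (my own sketch; it assumes simple first-order elimination, which is an idealization and not a pharmacokinetic claim from the source), the half-life figures above translate into a remaining-dose fraction of 0.5^(t / t_half):

    def fraction_remaining(hours: float, half_life_hours: float) -> float:
        # First-order (exponential) elimination: C(t) = C0 * 0.5 ** (t / t_half)
        return 0.5 ** (hours / half_life_hours)

    for t_half in (18.0, 22.0):          # half-life range quoted above for healthy people
        frac = fraction_remaining(24.0, t_half)
        print(f"t1/2 = {t_half} h: about {frac:.0%} of a dose remains after 24 h")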
Chemistry
Expired tetracyclines or tetracyclines allowed to stand at a pH less than 2 are reported to be nephrotoxic due to the formation of a degradation product, anhydro-4-epitetracycline causing Fanconi syndrome. In the case of doxycycline, the absence of a hydroxyl group in C-6 prevents the formation of the nephrotoxic compound. Nevertheless, tetracyclines and doxycycline itself have to be taken with caution in patients with kidney injury, as they can worsen azotemia due to catabolic effects.
Chemical properties
Doxycycline, doxycycline monohydrate and doxycycline hyclate are yellow, crystalline powders with a bitter taste. The latter smells faintly of ethanol, a 1% aqueous solution has a pH of 2–3, and the specific rotation is −110° cm3/dm·g in 0.01 N methanolic hydrochloric acid.
History
After penicillin revolutionized the treatment of bacterial infections in World War II, many chemical companies moved into the field of discovering antibiotics by bioprospecting. American Cyanamid was one of these, and in the late 1940s chemists there discovered chlortetracycline, the first member of the tetracycline class of antibiotics. Shortly thereafter, scientists at Pfizer discovered oxytetracycline and it was brought to market. Both compounds, like penicillin, were natural products and it was commonly believed that nature had perfected them, and further chemical changes could only degrade their effectiveness. Scientists at Pfizer led by Lloyd Conover modified these compounds, which led to the invention of tetracycline itself, the first semi-synthetic antibiotic. Charlie Stephens' group at Pfizer worked on further analogs and created one with greatly improved stability and pharmacological efficacy: doxycycline. It was clinically developed in the early 1960s and approved by the FDA in 1967.
As its patent grew near to expiring in the early 1970s, the patent became the subject of lawsuit between Pfizer and International Rectifier that was not resolved until 1983; at the time it was the largest litigated patent case in US history. Instead of a cash payment for infringement, Pfizer took the veterinary and feed-additive businesses of International Rectifier's subsidiary, Rachelle Laboratories.
In January 2013, the FDA reported shortages of some, but not all, forms of doxycycline "caused by increased demand and manufacturing issues". Companies involved included an unnamed major generics manufacturer that ceased production in February 2013, Teva (which ceased production in May 2013), Mylan, Actavis, and Hikma Pharmaceuticals. The shortage came at a particularly bad time, since there were also shortages of an alternative antibiotic, tetracycline, at the same time. The market price for doxycycline dramatically increased in the United States in 2013 and early 2014 (from $20 to over $1800 for a bottle of 500 tablets), before decreasing again.
Society and culture
Doxycycline is available worldwide under many brand names. Doxycycline is available as a generic medicine. Doxycycline is also used in the prevention of certain sexually transmitted infections, particularly among men who have sex with men.
Research
Medical conditions
Research areas on the application of doxycycline include the following medical conditions:
macular degeneration;
rheumatoid arthritis instead of minocycline (both of which have demonstrated modest efficacy for this disease).
Dosing
Although doxycycline is approved to treat Lyme disease, the optimal dosing and duration of treatment for this condition are topics of ongoing research. It can be used in adults and children; for treatment or prophylaxis of Lyme disease in children, it can be used for a duration of up to 21 days in children of any age. Doxycycline is specifically indicated to treat Lyme disease for patients presenting with erythema migrans. Guidelines vary on the optimal duration of treatment, with some recommending a 10-day course of doxycycline and others a 14-day course; recent data suggest that even a 7-day course of doxycycline can be effective. There are no significant differences in treatment response across antibiotic agents, doses, or durations when comparing 14 days with 21 days; as such, the optimal duration of treatment of Lyme disease remains uncertain, and prolonged antibiotic courses have drawbacks, including diminishing returns in patient outcomes, heightened risks of adverse events, superinfections, increased healthcare costs, and the potential for development of antibiotic resistance. The consensus is therefore to treat patients with the shortest effective duration of antibiotics, as is the case with doxycycline for Lyme disease.
Anti-inflammatory agent
Some studies show doxycycline as a potential agent to possess anti-inflammatory properties acting by inhibiting proinflammatory cytokines such as interleukin-1 (IL-1), interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), and matrix metalloproteinases (MMPs) while increasing the production of anti-inflammatory cytokines such as interleukin-10 (IL-10). Cytokines are small proteins that are secreted by immune cells and play a key role in the immune response. Some studies suggest that doxycycline can suppress the activation of the nuclear factor-kappa B (NF-κB) pathway, which is responsible for upregulating several inflammatory mediators in various cells, including neurons; therefore, it is studied as a potential agent for treating neuroinflammation.
A potential explanation of doxycycline's anti-inflammatory properties is its inhibition of matrix metalloproteinases (MMPs), which are a group of proteases known to regulate the turnover of extracellular matrix (ECM) and thus are suggested to be important in the process of several diseases associated with tissue remodeling and inflammation. Doxycycline has been shown to inhibit MMPs, including matrilysin (MMP7), by interacting with the structural zinc atom and/or calcium atoms within the structural metal center of the protein.
Doxycycline also inhibits kallikrein-related peptidase 5 (KLK5). The inhibition of MMPs and KLK5 enzymes subsequently suppresses the expression of LL-37, a cathelicidin antimicrobial peptide that, when overexpressed, can trigger inflammatory cascades. By inhibiting LL-37 expression, doxycycline helps to mitigate these downstream inflammatory cascades, thereby reducing inflammation and the symptoms of inflammatory conditions.
Doxycycline is used to treat acne vulgaris and rosacea. However, there is no clear understanding of what contributes more: the bacteriostatic properties of doxycycline, which affect bacteria (such as Propionibacterium acnes) on the surface of sebaceous glands even in lower doses called "submicrobial" or "subantimicrobial", or whether doxycycline's anti-inflammatory effects, which reduce inflammation in acne vulgaris and rosacea, including ocular rosacea, contribute more to its therapeutic effectiveness against these skin conditions. Subantimicrobial-dose doxycycline (SDD) can still have a bacteriostatic effect, especially when taken for extended periods, such as several months in treating acne and rosacea. While the SDD is believed to have anti-inflammatory effects rather than solely antibacterial effects, SDD was proven to work by reducing inflammation associated with acne and rosacea. Still, the exact mechanisms have yet to be fully discovered. One probable mechanism is doxycycline's ability to decrease the amount of reactive oxygen species (ROS). Inflammation in rosacea may be associated with increased production of ROS by inflammatory cells; these ROS contribute toward exacerbating symptoms. Doxycycline may reduce ROS levels and induce antioxidant activity because it directly scavenges hydroxyl radicals and singlet oxygen, helping minimize tissue damage caused by highly oxidative and inflammatory conditions. Studies have shown that SDD can effectively improve acne and rosacea symptoms, probably without inducing antibiotic resistance. It is observed that doxycycline exerts its anti-inflammatory effects by inhibiting neutrophil chemotaxis and oxidative bursts, which are common mechanisms involved in inflammation and ROS activity in rosacea and acne.
Doxycycline's dual benefits as an antibacterial and anti-inflammatory make it a helpful treatment option for diseases involving inflammation not only of the skin, such as rosacea and acne, but also in conditions such as osteoarthritis or periodontitis. Nevertheless, current results are inconclusive, and evidence of doxycycline's anti-inflammatory properties needs to be improved, considering conflicting reports from animal models so far. Doxycycline has been studied in various immunological disorders, including rheumatoid arthritis, lupus, and periodontitis. In these conditions, doxycycline has been researched to determine anti-inflammatory and immunomodulatory effects that could be beneficial in treating these conditions. However, a solid conclusion still needs to be provided.
Doxycycline is also studied for its neuroprotective properties which are associated with antioxidant, anti-apoptotic, and anti-inflammatory mechanisms. In this context, it is important to note that doxycycline is able to cross the blood–brain barrier. Several studies have shown that doxycycline inhibits dopaminergic neurodegeneration through the upregulation of axonal and synaptic proteins. Axonal degeneration and synaptic loss are key events at the early stages of neurodegeneration and precede neuronal death in neurodegenerative diseases, including Parkinson's disease (PD). Therefore, the regeneration of the axonal and synaptic network might be beneficial in PD. It has been demonstrated that doxycycline mimics nerve growth factor (NGF) signaling in PC12 cells. However, the involvement of this mechanism in the neuroprotective effect of doxycycline is unknown. Doxycycline is also studied in reverting inflammatory changes related to depression. While there is some research on the use of doxycycline for treating major depressive disorder, the results are mixed.
After a large-scale trial showed no benefit of using doxycycline in treating COVID-19, the UK's National Institute for Health and Care Excellence (NICE) updated its guidance to not recommend the medication for the treatment of COVID-19. Doxycycline was expected to possess anti-inflammatory properties that could lessen the cytokine storm associated with a SARS-CoV-2 infection, but the trials did not demonstrate the expected benefit. Researchers also believed that doxycycline possesses anti-inflammatory and immunomodulatory effects that could reduce the production of cytokines in COVID-19, but these supposed effects failed to improve the outcome of COVID-19 treatment.
Wound healing
Research on novel drug formulations for the delivery of doxycycline in wound treatment is expanding, focusing on overcoming stability limitations for long-term storage and developing consumer-friendly, parenteral antibiotic delivery systems. The most common and practical form of doxycycline delivery is through wound dressings, which have evolved from mono- to three-layered systems to maximize healing effectiveness.
Research directions on the use of doxycycline in wound healing include the continuous stabilization of doxycycline, scaling up technology and industrial production, and exploring non-contact wound treatment methods like sprays and aerosols for use in emergencies and when medical care is not readily accessible.
Research reagent
Doxycycline and other members of the tetracycline class of antibiotics are often used as research reagents in in vitro and in vivo biomedical research experiments involving bacteria, as well as in experiments in eukaryotic cells and organisms with inducible protein expression systems using tetracycline-controlled transcriptional activation. The mechanism of action for the antibacterial effect of tetracyclines relies on disrupting protein translation in bacteria, thereby damaging the ability of microbes to grow and repair; however, protein translation is also disrupted in eukaryotic mitochondria, impairing metabolism and leading to effects that can confound experimental results. Doxycycline is also used in "tet-on" (gene expression activated by doxycycline) and "tet-off" (gene expression inactivated by doxycycline) tetracycline-controlled transcriptional activation to regulate transgene expression in organisms and cell cultures. Doxycycline is more stable than tetracycline for this purpose. At subantimicrobial doses, doxycycline is an inhibitor of matrix metalloproteases, and has been used in various experimental systems for this purpose, such as for recalcitrant recurrent corneal erosions.
| Biology and health sciences | Antibiotics | Health |
661025 | https://en.wikipedia.org/wiki/Salvia%20hispanica | Salvia hispanica | Salvia hispanica, one of several related species commonly known as chia (), is a species of flowering plant in the mint family, Lamiaceae. It is native to central and southern Mexico and Guatemala. It is considered a pseudocereal, cultivated for its edible, hydrophilic chia seed, grown and commonly used as food in several countries of western South America, western Mexico, and the southwestern United States.
Description
Chia is an annual herb growing up to tall, with opposite leaves that are long and wide. Its flowers are purple or white and are produced in numerous clusters in a spike at the end of each stem.
Typically, the seeds are small ovals with a diameter around . They are mottle-colored, with brown, gray, black, and white. The seeds are hydrophilic, absorbing up to 12 times their weight in liquid when soaked. While soaking, the seeds develop a mucilaginous coating that gives chia-based beverages a distinctive gelatinous texture.
Many plants cultivated as S. hispanica are in fact S. officinalis subsp. lavandulifolia (syn. S. lavandulifolia).
Etymology
The word chia is derived from the Nahuatl word , meaning 'oily'.
Other plants known as chia include Salvia columbariae, which is sometimes called "golden chia", Salvia polystachia, and Salvia tiliifolia.
Distribution and habitat
Chia is native to central and southern Mexico and Guatemala. It is hardy from USDA Zones 9–12.
Cultivation
Chia is grown and consumed commercially in its native Mexico and Guatemala, as well as Bolivia, Ecuador, Colombia, Nicaragua, northwestern Argentina, parts of Australia, and the southwestern United States. New patented varieties of chia have been bred in Kentucky for cultivation in northern latitudes of the United States.
It is grown commercially for its seed, a food rich in omega-3 fatty acids since the seeds yield 25–30% extractable oil, including α-linolenic acid. Typical composition of the fat of the oil is 55% ω-3, 18% ω-6, 6% ω-9, and 10% saturated fat.
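A back-of-the-envelope calculation (my own, using only the percentage ranges quoted above) of the approximate omega-3 content per 100 g of seed:

    # Rough arithmetic on the figures above: 25-30% extractable oil, of which ~55% is omega-3.
    for oil_fraction in (0.25, 0.30):
        omega3_g_per_100g_seed = 100 * oil_fraction * 0.55
        print(f"{oil_fraction:.0%} oil -> roughly {omega3_g_per_100g_seed:.1f} g omega-3 per 100 g of seed")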
Climate and growing cycle length
The length of the growing cycle for chia varies based on location and is influenced by elevation. For production sites located in different ecosystems in Bolivia, Ecuador and northwestern Argentina, growing cycles are between 100 and 150 days in duration. Accordingly, commercial production fields are located in the range of altitude across a variety of ecosystems ranging from tropical coastal desert, to tropical rain forest, and inter-Andean dry valley. In northwestern Argentina, a time span from planting to harvest of 120–180 days is reported for fields located at elevations of .
S. hispanica is a short-day flowering plant, indicating its photoperiodic sensitivity and lack of photoperiodic variability in traditional cultivars, which has limited commercial use of chia seeds to tropical and subtropical latitudes until 2012. Now, traditional domesticated lines of Salvia species grow naturally or can be cultivated in temperate zones at higher latitudes in the United States. In Arizona and Kentucky, seed maturation of traditional chia cultivars is stopped by frost before or after flower set, preventing seed harvesting. Advances in plant breeding during 2012, however, led to development of new early-flowering chia genotypes proving to have higher yields in Kentucky.
Seed yield and composition
Seed yield varies depending on cultivars, mode of cultivation, and growing conditions by geographic region. For example, commercial fields in Argentina and Colombia vary in yield range from . A small-scale study with three cultivars grown in the inter-Andean valleys of Ecuador produced yields up to , indicating that the favorable growing environment and cultivar interacted to produce the high yields. Genotype has a larger effect on yield than on protein content, oil content, fatty acid composition, or phenolic compounds, whereas high temperature reduces oil content and degree of unsaturation, and raises protein content.
Soil, seedbed requirements, and sowing
The cultivation of S. hispanica requires light to medium clay or sandy soils. The plant prefers well-drained, moderately fertile soils, but can cope with acid soils and moderate drought. Sown chia seeds need moisture for seedling establishment, while the maturing chia plant does not tolerate wet soils during growth.
Traditional cultivation techniques of S. hispanica include soil preparation by disruption and loosening followed by seed broadcasting. In modern commercial production, a typical sowing rate of and row spacing of are usually applied.
Fertilization and irrigation
S. hispanica can be cultivated under low fertilizer input, using nitrogen or, in some cases, no fertilizer at all.
Irrigation frequency in chia production fields may vary from none to eight irrigations per growing season, depending on climatic conditions and rainfall.
Genetic diversity and breeding
The wide range of wild and cultivated varieties of S. hispanica are based on seed size, shattering of seeds, and seed color. Seed weight and color have high heritability, with a single recessive gene responsible for white color.
Diseases and crop management
Currently, no major pests or diseases affect chia production. Essential oils in chia leaves have repellent properties against insects, making it suitable for organic cultivation. Virus infections, however, possibly transmitted by white flies, may occur. Weeds may present a problem in the early development of the chia crop until its canopy closes, but because chia is sensitive to most commonly used herbicides, mechanical weed control is preferred.
Other uses
During the 1980s in the United States, the first substantial wave of chia seed sales was tied to Chia Pets. These "pets" come in the form of clay figures that serve as a base for a sticky paste of chia seeds; the figures then are watered and the seeds sprout into a form suggesting a fur covering for the figure. About 500,000 Chia Pets a year are sold in the US as novelties or house plants.
| Biology and health sciences | Pseudocereals | Plants |
661510 | https://en.wikipedia.org/wiki/Clovis%20point | Clovis point | Clovis points are the characteristically fluted projectile points associated with the New World Clovis culture, a prehistoric Paleo-American culture. They are present in dense concentrations across much of North America and they are largely restricted to the north of South America. There are slight differences in points found in the Eastern United States, which are therefore sometimes called "Clovis-like". Clovis points date to the Early Paleoindian period, with all known points dating from roughly 13,400–12,700 years ago (11,500 to 10,800 C14 years BP). As an example, Clovis remains at the Murray Springs site date to around 12,900 calendar years ago (10,900 ± 50 C14 years BP). Clovis fluted points are named after the city of Clovis, New Mexico, where examples were first found in 1929 by Ridgely Whiteman.
A typical Clovis point is a medium to large lanceolate point with sharp edges, a third of an inch thick, one to two inches wide, and about long. Sides are parallel to convex, and exhibit careful pressure flaking along the blade edge. The broadest area is towards the base, which is distinctly concave, with concave grooves called "flutes" removed from one or, more commonly, both surfaces of the blade. The lower edges of the blade and base are ground to dull edges for hafting. There is debate about how Clovis points were used. Originally it was assumed that they were used on a thrusting spear. Later suggestions arose that the points had been used as throwing spears, either as is or with a spear thrower (atlatl), in which case they would technically be considered darts, or as a braced weapon (pike). It is also possible the points were used in the animal butchering process.
Around 10,000 years before present, a new type of fluted projectile point called Folsom appeared in archaeological deposits, and Clovis-style points disappeared from the continental United States. Most Folsom points are shorter in length than Clovis points and exhibit longer flutes and different pressure flaking patterns. This is particularly easy to see when comparing the unfinished preforms of Clovis and Folsom points. Analysis of radiocarbon dates suggests that the Haskett Projectile Point is contemporary with Clovis and Folsom points.
Type description
Only a few recovered Clovis points are in their original condition. Most points were "reworked" to resharpen them or repair damage. This can make it difficult to identify which lithic tradition they come from.
Clovis type description:
Clovis is a comparatively large and heavy bifacially flaked fluted lanceolate point, lenticular to near oval in cross-section with parallel to moderately convex lateral edges, a majority having the latter.
Maximum width is usually at or slightly below midpoint, frequently resulting in rather long sharp tips.
Bases are normally only slightly concave, the depth usually ranging from and arching completely across basal width.
Basal corners range from nearly square to slightly rounded without forming eared projections.
Length range is considerable, with a majority between .
Maximum width range is , a majority near the former.
Maximum thickness range, .
Normally fluted on both faces.
Flutes are most often produced by multiple flake removals.
Length and quality of flutes are greatly variable, with length usually 30% to 50% of overall point length, and the majority near the former.
Base of flutes is often widened by subsequent removals of additional channel flakes or short wide flakes.
There is minimal post-fluting retouch of basal areas.
Overall flaking is frequently irregular in both size and orientation, often including large facet remnants of early stage reduction processes.
There is very moderate evidence of pressure flaking.
Lower lateral and basal edges are smoothed by grinding, often resulting in slight tapering of base.
Clovis points do not have recurved (fishtail) lateral edges, pronounced basal constrictions, or convex (Folsom-type) channel flake platform remnants.
Points generally weigh between roughly 25 grams and 35 grams. Specimens are known to have been made of flint, chert, jasper, chalcedony and other stone of conchoidal fracture. Clovis points can vary even at a single site; the eight points found at Naco, while otherwise similar, ranged in length from 2 to 4 inches. A study suggested that Clovis points east of the Mississippi River had more diversity/richness than those in the west.
Distribution
Clovis points have been found over most of North America and, less commonly, as far south as Venezuela. One complication is that sea level is now about 50 meters higher than during the Paleoindian period, so any coastal sites would be underwater, which may skew the data. The widespread South American Fishtail or Fell projectile point style has been suggested to have derived from Clovis. Of the roughly 6,000 points currently classified as Clovis found in the United States, the majority were found east of the Mississippi, especially in the Southeast. Some researchers suggest that many of the eastern points are misclassified and that most true Clovis points are found in the west. Significant Clovis find sites include:
Anzick site in Montana
Aubrey site in Texas
Belson site
Big Eddy Site in Missouri
Blackwater Draw type site in New Mexico
Colby site in Wyoming
Dent site in Colorado
Domebo Canyon in Oklahoma
East Wenatchee Clovis Site in Washington
El Fin del Mundo in Sonora, Mexico
Gault site in Texas
Page–Ladson in Florida
Lehner Mammoth-Kill Site in Arizona
Murray Springs Clovis Site in Arizona
Naco Mammoth Kill Site in Arizona
Paleo Crossing site in Ohio
Ready site (aka Lincoln Hills site) in Illinois
Shawnee-Minisink Site in Pennsylvania
Simon site in Idaho
Sloth Hole in Florida
Fraudulent Clovis points have also emerged on the open market, some with false documentation.
Caches
Clovis points, along with other stone and bone/ivory tools, have been identified in over two dozen artifact caches. These caches range from the Mississippi River to the Rocky Mountains and Northwest United States. While the Anzick cache is associated with a child burial, the majority of caches appear to represent anticipatory material storage at strategic locations on the Pleistocene landscape. In May 2008, a major Clovis cache, now called the Mahaffey Cache, was found in Boulder, Colorado, with 83 Clovis stone tools, though no actual Clovis points. The tools were found to have traces of horse and camelid protein. They were dated to 13,000 to 13,500 YBP, a date confirmed by the sediment layers in which the tools were found and the types of protein residues found on the artifacts. The Fenn cache is an important collection of 56 items of uncertain provenance that was probably discovered in 1902 "near the area where Utah, Wyoming, and Idaho meet" and was acquired by Forrest Fenn in 1988.
There is ongoing debate about whether "assemblages", that is, production debris typically found at Clovis sites (blade cores, large bifacial overface flakes, etc.) without actual projectile points, date to the Clovis period or to later periods.
Origins
Whether Clovis toolmaking technology was developed in the Americas in response to megafauna hunting or originated through influences from elsewhere is an open question among archaeologists. Lithic antecedents of Clovis points have not been found in northeast Asia, where, in the current archaeological consensus, the first human inhabitants of the Americas originated. Some archaeologists have argued that similarities between Clovis points and points produced by the Solutrean culture in the Iberian Peninsula of Europe suggest that the technology was introduced by hunters traversing the Atlantic ice-shelf, implying that some of the first American humans were European (the Solutrean hypothesis). However, this hypothesis is not well accepted, as other archaeologists have pointed out that Solutrean and Clovis lithic technologies are technologically distinct (e.g., a lack of distinctive flutes in Solutrean technology), that there is no genetic evidence for European ancestry in Indigenous North Americans, and that the proposed Solutrean migration route was likely unsuitable.
| Technology | Hand tools | null |
661808 | https://en.wikipedia.org/wiki/Bravais%20lattice | Bravais lattice | In geometry and crystallography, a Bravais lattice, named after Auguste Bravais, is an infinite array of discrete points generated by a set of discrete translation operations described in three-dimensional space by R = n1a1 + n2a2 + n3a3,
where the ni are any integers, and ai are primitive translation vectors, or primitive vectors, which lie in different directions (not necessarily mutually perpendicular) and span the lattice. The choice of primitive vectors for a given Bravais lattice is not unique. A fundamental aspect of any Bravais lattice is that, for any choice of direction, the lattice appears exactly the same from each of the discrete lattice points when looking in that chosen direction.
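As a concrete illustration of this definition, the following minimal Python sketch (the function name and the numpy-based approach are illustrative choices, not taken from any crystallography library) enumerates the lattice points R = n1a1 + n2a2 + n3a3 for a small range of integer coefficients:

import itertools
import numpy as np

def lattice_points(a1, a2, a3, n_max=2):
    # Generate R = n1*a1 + n2*a2 + n3*a3 for all integers n1, n2, n3 in [-n_max, n_max].
    basis = np.array([a1, a2, a3], dtype=float)
    coeffs = itertools.product(range(-n_max, n_max + 1), repeat=3)
    return np.array([np.array(n) @ basis for n in coeffs])

# Example: simple cubic lattice with unit lattice constant
points = lattice_points([1, 0, 0], [0, 1, 0], [0, 0, 1])
print(len(points))  # 125 points for n_max = 2

Any other valid choice of primitive vectors for the same lattice would generate exactly the same set of points, which illustrates that the choice of primitive vectors is not unique.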
The Bravais lattice concept is used to formally define a crystalline arrangement and its (finite) frontiers. A crystal is made up of one or more atoms, called the basis or motif, at each lattice point. The basis may consist of atoms, molecules, or polymer strings of solid matter, and the lattice provides the locations of the basis.
Two Bravais lattices are often considered equivalent if they have isomorphic symmetry groups. In this sense, there are 5 possible Bravais lattices in 2-dimensional space and 14 possible Bravais lattices in 3-dimensional space. The 14 possible symmetry groups of Bravais lattices are 14 of the 230 space groups. In the context of the space group classification, the Bravais lattices are also called Bravais classes, Bravais arithmetic classes, or Bravais flocks.
Unit cell
In crystallography, there is the concept of a unit cell which comprises the space between adjacent lattice points as well as any atoms in that space. A unit cell is defined as a space that, when translated through a subset of the lattice translation vectors described above, fills the lattice space without overlapping or voids. (I.e., a lattice space is a multiple of a unit cell.)
There are mainly two types of unit cells: primitive unit cells and conventional unit cells. A primitive cell is the very smallest component of a lattice (or crystal) which, when stacked together with lattice translation operations, reproduces the whole lattice (or crystal). Note that the translations must be lattice translation operations that cause the lattice to appear unchanged after the translation. If arbitrary translations were allowed, one could make a primitive cell half the size of the true one, and translate twice as often, as an example.
Another way of defining the size of a primitive cell that avoids invoking lattice translation operations, is to say that the primitive cell is the smallest possible component of a lattice (or crystal) that can be repeated to reproduce the whole lattice (or crystal), and that contains exactly one lattice point. In either definition, the primitive cell is characterized by its small size.
There are clearly many choices of cell that can reproduce the whole lattice when stacked (two lattice halves, for instance), and the minimum size requirement distinguishes the primitive cell from all these other valid repeating units. If the lattice or crystal is 2-dimensional, the primitive cell has a minimum area; likewise in 3 dimensions the primitive cell has a minimum volume.
Despite this rigid minimum-size requirement, there is not one unique choice of primitive unit cell. In fact, all cells whose borders are primitive translation vectors will be primitive unit cells. The fact that there is not a unique choice of primitive translation vectors for a given lattice leads to the multiplicity of possible primitive unit cells. Conventional unit cells, on the other hand, are not necessarily minimum-size cells. They are chosen purely for convenience and are often used for illustration purposes. They are loosely defined.
Primitive unit cell
Primitive unit cells are defined as unit cells with the smallest volume for a given crystal. (A crystal is a lattice and a basis at every lattice point.) To have the smallest cell volume, a primitive unit cell must contain (1) only one lattice point and (2) the minimum amount of basis constituents (e.g., the minimum number of atoms in a basis). For the former requirement, counting the number of lattice points in a unit cell is such that, if a lattice point is shared by m adjacent unit cells around that lattice point, then the point is counted as 1/m. The latter requirement is necessary since there are crystals that can be described by more than one combination of a lattice and a basis. For example, a crystal, viewed as a lattice with a single kind of atom located at every lattice point (the simplest basis form), may also be viewed as a lattice with a basis of two atoms. In this case, a primitive unit cell is a unit cell having only one lattice point in the first way of describing the crystal in order to ensure the smallest unit cell volume.
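As a worked example of the 1/m counting rule (using the conventional face-centered cubic cell purely to illustrate the arithmetic; this is a conventional cell, not a primitive one): each of the 8 corner points is shared by 8 cells and each of the 6 face-center points is shared by 2 cells, so the cell contains 8 × (1/8) + 6 × (1/2) = 4 lattice points. A minimal Python check of the same arithmetic:

# 1/m counting rule for lattice points in a conventional fcc cell.
# Each entry is (number of points of that kind, number m of cells sharing each point).
fcc_sharing = [
    (8, 8),  # corner points, each shared by 8 adjacent cells
    (6, 2),  # face-center points, each shared by 2 adjacent cells
]
total = sum(count / m for count, m in fcc_sharing)
print(total)  # 4.0 lattice points per conventional fcc cell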
There can be more than one way to choose a primitive cell for a given crystal, and each choice will have a different primitive cell shape, but the primitive cell volume is the same for every choice, and each choice has the property that a one-to-one correspondence can be established between primitive unit cells and discrete lattice points over the associated lattice. All primitive unit cells with different shapes for a given crystal have the same volume by definition: for a given crystal, if n is the density of lattice points in a lattice ensuring the minimum amount of basis constituents and v is the volume of a chosen primitive cell, then nv = 1, giving v = 1/n, so every primitive cell has the same volume of 1/n.
Among all possible primitive cells for a given crystal, an obvious primitive cell may be the parallelepiped formed by a chosen set of primitive translation vectors. (Again, these vectors must make a lattice with the minimum amount of basis constituents.) That is, the set of all points of the form r = x1a1 + x2a2 + x3a3, where 0 ≤ xi < 1 and the ai are the chosen primitive vectors. This primitive cell does not always show the clear symmetry of a given crystal. In this case, a conventional unit cell easily displaying the crystal symmetry is often used. The conventional unit cell volume will be an integer multiple of the primitive unit cell volume.
Origin of concept
In two dimensions, any lattice can be specified by the length of its two primitive translation vectors and the angle between them. There are an infinite number of possible lattices one can describe in this way. Some way to categorize different types of lattices is desired. One way to do so is to recognize that some lattices have inherent symmetry. One can impose conditions on the length of the primitive translation vectors and on the angle between them to produce various symmetric lattices. These symmetries themselves are categorized into different types, such as point groups (which includes mirror symmetries, inversion symmetries and rotation symmetries) and translational symmetries. Thus, lattices can be categorized based on what point group or translational symmetry applies to them.
In two dimensions, the most basic point group corresponds to rotational invariance under 2π and π, or 1- and 2-fold rotational symmetry. This actually applies automatically to all 2D lattices, and is the most general point group. Lattices contained in this group (technically all lattices, but conventionally all lattices that don't fall into any of the other point groups) are called oblique lattices. From there, there are 4 further combinations of point groups with translational elements (or equivalently, 4 types of restriction on the lengths/angles of the primitive translation vectors) that correspond to the 4 remaining lattice categories: square, hexagonal, rectangular, and centered rectangular. Thus altogether there are 5 Bravais lattices in 2 dimensions.
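These restrictions can be phrased as a simple classification rule on the primitive vectors. The sketch below is one possible convention, assuming the centered rectangular lattice is given in terms of its rhombic primitive cell (equal-length vectors at an angle other than 60°, 90° or 120°); it is illustrative rather than a standard library routine:

import math

def classify_2d_lattice(len_a, len_b, theta_deg):
    # Classify a 2D Bravais lattice from primitive-vector lengths and the angle between them.
    equal_lengths = math.isclose(len_a, len_b)
    right_angle = math.isclose(theta_deg, 90.0)
    hex_angle = math.isclose(theta_deg, 60.0) or math.isclose(theta_deg, 120.0)
    if equal_lengths and right_angle:
        return "square"
    if equal_lengths and hex_angle:
        return "hexagonal"
    if right_angle:
        return "rectangular"
    if equal_lengths:
        return "centered rectangular"  # rhombic primitive cell
    return "oblique"

print(classify_2d_lattice(1.0, 1.0, 90.0))  # square
print(classify_2d_lattice(1.0, 2.0, 90.0))  # rectangular
print(classify_2d_lattice(1.0, 1.0, 70.0))  # centered rectangular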
Likewise, in 3 dimensions, there are 14 Bravais lattices: 1 general "wastebasket" category (triclinic) and 13 more categories. These 14 lattice types are classified by their point groups into 7 lattice systems (triclinic, monoclinic, orthorhombic, tetragonal, cubic, rhombohedral, and hexagonal).
In 2 dimensions
In two-dimensional space there are 5 Bravais lattices, grouped into four lattice systems, shown in the table below. Below each diagram is the Pearson symbol for that Bravais lattice.
Note: In the unit cell diagrams in the following table the lattice points are depicted using black circles and the unit cells are depicted using parallelograms (which may be squares or rectangles) outlined in black. Although each of the four corners of each parallelogram connects to a lattice point, only one of the four lattice points technically belongs to a given unit cell and each of the other three lattice points belongs to one of the adjacent unit cells. This can be seen by imagining moving the unit cell parallelogram slightly left and slightly down while leaving all the black circles of the lattice points fixed.
The unit cells are specified according to the relative lengths of the cell edges (a and b) and the angle between them (θ). The area of the unit cell can be calculated by evaluating the norm |a × b|, where a and b are the lattice vectors. The properties of the lattice systems are given below:
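For two-dimensional vectors, |a × b| reduces to the absolute value of a 2×2 determinant. A short illustrative Python helper (the names are ad hoc, not from a library):

def unit_cell_area(a, b):
    # Area of the 2D unit cell spanned by lattice vectors a and b: |a x b| = |ax*by - ay*bx|.
    return abs(a[0] * b[1] - a[1] * b[0])

print(unit_cell_area((2.0, 0.0), (1.0, 1.0)))  # 2.0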
In 3 dimensions
In three-dimensional space there are 14 Bravais lattices. These are obtained by combining one of the seven lattice systems with one of the centering types. The centering types identify the locations of the lattice points in the unit cell as follows:
Primitive (P): lattice points on the cell corners only (sometimes called simple)
Base-centered (S: A, B, or C): lattice points on the cell corners with one additional point at the center of each face of one pair of parallel faces of the cell (sometimes called end-centered)
Body-centered (I): lattice points on the cell corners, with one additional point at the center of the cell
Face-centered (F): lattice points on the cell corners, with one additional point at the center of each of the faces of the cell
Not all combinations of lattice systems and centering types are needed to describe all of the possible lattices, as it can be shown that several of these are in fact equivalent to each other. For example, the monoclinic I lattice can be described by a monoclinic C lattice by different choice of crystal axes. Similarly, all A- or B-centred lattices can be described either by a C- or P-centering. This reduces the number of combinations to 14 conventional Bravais lattices, shown in the table below. Below each diagram is the Pearson symbol for that Bravais lattice.
Note: In the unit cell diagrams in the following table all the lattice points on the cell boundary (corners and faces) are shown; however, not all of these lattice points technically belong to the given unit cell. This can be seen by imagining moving the unit cell slightly in the negative direction of each axis while keeping the lattice points fixed. Roughly speaking, this can be thought of as moving the unit cell slightly left, slightly down, and slightly out of the screen. This shows that only one of the eight corner lattice points (specifically the front, left, bottom one) belongs to the given unit cell (the other seven lattice points belong to adjacent unit cells). In addition, only one of the two lattice points shown on the top and bottom face in the Base-centered column belongs to the given unit cell. Finally, only three of the six lattice points on the faces in the Face-centered column belong to the given unit cell.
The unit cells are specified according to six lattice parameters which are the relative lengths of the cell edges (a, b, c) and the angles between them (α, β, γ), where α is the angle between b and c, β is the angle between a and c, and γ is the angle between a and b. The volume of the unit cell can be calculated by evaluating the triple product a · (b × c), where a, b, and c are the lattice vectors. The properties of the lattice systems are given below:
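A corresponding illustrative sketch for the three-dimensional case, evaluating |a · (b × c)| with numpy (again an ad hoc helper under the assumptions above, not a library routine):

import numpy as np

def unit_cell_volume(a, b, c):
    # Volume of the unit cell spanned by lattice vectors a, b, c: |a . (b x c)|.
    return abs(np.dot(a, np.cross(b, c)))

# Example: cubic conventional cell with lattice constant 3.0
a0 = 3.0
print(unit_cell_volume([a0, 0, 0], [0, a0, 0], [0, 0, a0]))  # 27.0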
Some basic information for the lattice systems and Bravais lattices in three dimensions is summarized in the diagram at the beginning of this page. The seven-sided polygon (heptagon) and the number 7 at the centre indicate the seven lattice systems. The inner heptagons indicate the lattice angles, lattice parameters, Bravais lattices and Schoenflies notations for the respective lattice systems.
In 4 dimensions
In four dimensions, there are 64 Bravais lattices. Of these, 23 are primitive and 41 are centered. Ten Bravais lattices split into enantiomorphic pairs.
| Physical sciences | Crystallography | Physics |
13862505 | https://en.wikipedia.org/wiki/Whippletree%20%28mechanism%29 | Whippletree (mechanism) | A whippletree, or whiffletree, is a mechanism to distribute force evenly through linkages. It is also referred to as an equalizer, leader bar, or double tree. It consists of a bar pivoted at or near the centre, with force applied from one direction to the pivot and from the other direction to the tips. Several whippletrees may be used in series to distribute the force further, for example to simulate pressure over an area when applying loads to test airplane wings. Whippletrees may be used either in compression or tension. They were also used for subtraction and addition calculations in mechanical computers. Tension whippletrees are used in hanging mobiles, such as those by artist Alexander Calder.
Draught animals
Whippletrees are used in tension to distribute forces from a point load to the traces of draught animals (the traces are the chains or straps on each side of the harness, on which the animal pulls). For these, the whippletree consists of a loose horizontal bar between the draught animal and its load. The centre of the bar is connected to the load, and the traces attach to its ends.
Whippletrees are used especially when pulling a dragged load such as a plough, harrow, log or canal boat or for pulling a vehicle (by the leaders in a team with more than one row of animals).
A swingletree, or singletree, is a special kind of whippletree used for a horse-drawn vehicle. The term swingletree is sometimes used for draught whippletrees.
A whippletree balances the pull from each side of the animal, preventing the load from tugging alternately on each side. It also keeps a point load from pulling the traces in onto the sides of the animal.
If several animals are used abreast, further whippletrees may be used behind the first. Thus, with two animals, each has its own whippletree, and a further one balances the loads from their two whippletrees—an arrangement sometimes known as a double-tree, or for the leaders in a larger team, leader-bars. With three or more animals abreast, even more whippletrees are needed; some may be made asymmetrical to balance odd numbers of animals. Multiple whippletrees balance the pulls from the different animals, ensuring that each takes an equal share of the work.
Other examples
Whippletrees are also used in modern agriculture—for example, to link several ganged agricultural implements such as harrows, mowers or rollers to a tractor. This combines several small loads into a single load at the tractor hitch (the reverse of the use for draught animals).
A series of whippletrees is used in compression in a standard windshield wiper to distribute the point force of the sprung wiper arm evenly along the wiper blade.
Some designs for large telescopes use whippletrees to support the optical elements. The tree provides distributed mechanical support, reducing localised mechanical deflections, which in turn reduces optical distortion. Unlike the applications described above, which are two-dimensional, the whippletrees in telescope mirror support cells are three-dimensional designs, since the tree must support multiple points over an area.
Linkage-type mechanical analog computers use whippletree linkages to add and subtract quantities represented by straight-line motions. The illustration here of whippletrees for a three-animal team is very similar to a group of linkage adders and subtracters: "load" is the equivalent of the output sum/difference of the individual inputs. Inside the computer, cylinders on the knob shafts have thin metal tapes wrapped around them to convert rotary to linear motion.
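As a rough numerical illustration of this addition principle (a simplified kinematic model, not a description of any particular machine): for an ideal bar pivoted at its centre, the displacement of the pivot is the average of the displacements of the two ends, so cascading bars and rescaling yields the sum of several inputs.

# Simplified whippletree "adder": the centre pivot of an ideal bar moves by the
# average of its two end displacements.
def whippletree(x1, x2):
    return (x1 + x2) / 2.0

# Two bars feeding a third sum four inputs, up to a known factor of 4.
def add_four(a, b, c, d):
    return 4.0 * whippletree(whippletree(a, b), whippletree(c, d))

print(add_four(1.0, 2.0, 3.0, 4.0))  # 10.0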
One widely used application was in the IBM Selectric typewriter (and the IBM 2741 derived from it), where the linkages summed binary mechanical inputs to rotate and tilt the type ball. This type of computing method was also used for naval gunnery, such as the MK 56 Gun Fire Control System and sonar fire-control systems.
| Technology | Mechanisms | null |
13865571 | https://en.wikipedia.org/wiki/Disease%20vector | Disease vector | In epidemiology, a disease vector is any living agent that carries and transmits an infectious pathogen such as a parasite or microbe, to another living organism. Agents regarded as vectors are mostly blood-sucking insects such as mosquitoes. The first major discovery of a disease vector came from Ronald Ross in 1897, who discovered the malaria pathogen when he dissected the stomach tissue of a mosquito.
Arthropods
Arthropods form a major group of pathogen vectors with mosquitoes, flies, sand flies, lice, fleas, ticks, and mites transmitting a huge number of pathogens. Many such vectors are haematophagous, which feed on blood at some or all stages of their lives. When the insects feed on blood, the pathogen enters the blood stream of the host. This can happen in different ways.
The Anopheles mosquito, a vector for malaria, filariasis, and various arthropod-borne-viruses (arboviruses), inserts its delicate mouthpart under the skin and feeds on its host's blood. The parasites the mosquito carries are usually located in its salivary glands (used by mosquitoes to anaesthetise the host). Therefore, the parasites are transmitted directly into the host's blood stream. Pool feeders such as the sand fly and black fly, vectors for pathogens causing leishmaniasis and onchocerciasis respectively, will chew a well in the host's skin, forming a small pool of blood from which they feed. Leishmania parasites then infect the host through the saliva of the sand fly. Onchocerca force their own way out of the insect's head into the pool of blood.
Triatomine bugs are responsible for the transmission of a trypanosome, Trypanosoma cruzi, which causes Chagas disease. The Triatomine bugs defecate during feeding and the excrement contains the parasites, which are accidentally smeared into the open wound by the host responding to pain and irritation from the bite.
There are several species of Thrips that act as vectors for over 20 viruses, especially Tospoviruses, and cause all sorts of plant diseases.
Plants and fungi
Some plants and fungi act as vectors for various pathogens. For example, the big-vein disease of lettuce was long thought to be caused by a member of the fungal division Chytridiomycota, namely Olpidium brassicae. Eventually, however, the disease was shown to be viral. Later it transpired that the virus was transmitted by the zoospores of the fungus and also survived in the resting spores. Since then, many other fungi in Chytridiomycota have been shown to vector plant viruses.
Many plant pests that seriously damage important crops depend on other plants, often weeds, to harbour or vector them; the distinction is not always clear. In the case of Puccinia graminis for example, Berberis and related genera act as alternate hosts in a cycle of infection of grain.
More directly, when they twine from one plant to another, parasitic plants such as Cuscuta and Cassytha have been shown to convey phytoplasmal and viral diseases between plants.
Mammals
Rabies is transmitted through exposure to the saliva or brain tissue of an infected animal. Any warm-blooded animal can carry rabies, but the most common vectors are dogs, skunks, raccoons, and bats.
Vector-borne zoonotic disease and human activity
Several articles published up to early 2014 warn that human activities are spreading vector-borne zoonotic diseases. Several of these, published in the medical journal The Lancet, discuss how rapid changes in land use, trade globalization, climate change and "social upheaval" are causing a resurgence in zoonotic disease across the world.
Examples of vector-borne zoonotic diseases include:
Lyme disease
Plague
West Nile virus
Many factors affect the incidence of vector-borne diseases. These factors include animals hosting the disease, vectors, and people.
Humans can also be vectors for some diseases, such as Tobacco mosaic virus, physically transmitting the virus with their hands from plant to plant.
Control and prevention
The World Health Organization (WHO) states that control and prevention of vector-borne diseases emphasize "integrated vector management" (IVM), an approach that looks at the links between health and environment, optimizing benefits to both.
In April 2014, WHO launched a campaign called "Small bite, big threat" to educate people about vector-borne illnesses. WHO issued reports indicating that vector-borne illnesses affect poor people, especially people living in areas that do not have adequate levels of sanitation, drinking water and housing. It is estimated that over 80% of the world's population resides in areas under threat of at least one vector borne disease.
| Biology and health sciences | Concepts | Health |
2339955 | https://en.wikipedia.org/wiki/Xenon%20tetrafluoride | Xenon tetrafluoride | Xenon tetrafluoride is a chemical compound with chemical formula . It was the first discovered binary compound of a noble gas. It is produced by the chemical reaction of xenon with fluorine:
Xe + 2 F2 → XeF4
This reaction is exothermic, releasing an energy of 251 kJ/mol.
Xenon tetrafluoride is a colorless crystalline solid that sublimes at 117 °C. Its structure was determined by both NMR spectroscopy and X-ray crystallography in 1963. The structure is square planar, as has been confirmed by neutron diffraction studies. According to VSEPR theory, in addition to four fluoride ligands, the xenon center has two lone pairs of electrons. These lone pairs are mutually trans.
Synthesis
The original synthesis of xenon tetrafluoride occurred through direct 1:5-molar-ratio combination of the elements in a nickel (Monel) vessel at 400 °C. The nickel does not catalyze the reaction, but rather protects the container surfaces against fluoride corrosion. Controlling the process against impurities is difficult, as xenon difluoride, tetrafluoride, and hexafluoride are all in chemical equilibrium, with the difluoride favored at low temperatures and little fluorine, and the hexafluoride favored at high temperatures and excess fluorine. Fractional sublimation (xenon tetrafluoride is particularly involatile) or other equilibria generally allow purification of the product mixture.
The elements combine more selectively when γ- or UV-irradiated in a nickel container or dissolved in anhydrous hydrogen fluoride with catalytic oxygen. That reaction is believed selective because dioxygen difluoride at standard conditions is too weak an oxidant to generate xenon(VI) species.
Alternatively, fluoroxenonium perfluorometallate salts pyrolyze to XeF4.
Reactions
Xenon tetrafluoride hydrolyzes at low temperatures to form elemental xenon, oxygen, hydrofluoric acid, and aqueous xenon trioxide:
6 XeF4 + 12 H2O → 4 Xe + 2 XeO3 + 24 HF + 3 O2
It is used as a precursor for synthesis of all tetravalent Xe compounds. Reaction with tetramethylammonium fluoride gives tetramethylammonium pentafluoroxenate, which contains the pentagonal XeF5− anion. The XeF5− anion is also formed by reaction with cesium fluoride:
CsF + XeF4 → CsXeF5
Reaction with bismuth pentafluoride (BiF5) forms the XeF3+ cation:
XeF4 + BiF5 → XeF3BiF6
The XeF3+ cation in the salt XeF3Sb2F11 has been characterized by NMR spectroscopy.
At 400 °C, XeF4 reacts with xenon to form XeF2:
XeF4 + Xe → 2 XeF2
The reaction of xenon tetrafluoride with platinum yields platinum tetrafluoride and xenon:
XeF4 + Pt → PtF4 + Xe
Applications
Xenon tetrafluoride has few applications. It has been used to degrade silicone rubber in order to analyze trace metal impurities in the rubber: XeF4 reacts with the silicone to form simple gaseous products, leaving a residue of metal impurities.
| Physical sciences | Noble gas compounds | Chemistry |
2340343 | https://en.wikipedia.org/wiki/Makemake | Makemake | Makemake (minor-planet designation: 136472 Makemake) is a dwarf planet and the largest of what is known as the classical population of Kuiper belt objects, with a diameter approximately that of Saturn's moon Iapetus, or 60% that of Pluto. It has one known satellite. Its extremely low average temperature, about , means its surface is covered with methane, ethane, and possibly nitrogen ices. Makemake shows signs of geothermal activity and thus may be capable of supporting active geology and harboring an active subsurface ocean.
Makemake was discovered on March 31, 2005 by a team led by Michael E. Brown, and announced on July 29, 2005. It was initially known as and later given the minor-planet number 136472. In July 2008, it was named after Makemake, a creator god in the Rapa Nui mythology of Easter Island, under the expectation by the International Astronomical Union (IAU) that it would prove to be a dwarf planet.
History
Discovery
Makemake was discovered on March 31, 2005, by a team at the Palomar Observatory, led by Michael E. Brown, and was announced to the public on July 29, 2005. The team had planned to delay announcing their discoveries of the bright objects Makemake and Eris until further observations and calculations were complete, but announced them both on July 29 when the discovery of another large object they had been tracking, Haumea, was controversially announced on July 27 by a different team in Spain.
The earliest known precovery observations of Makemake have been found in photographic plates of the Palomar Observatory's Digitized Sky Survey from January 29, 1955 to May 1, 1998.
Despite its relative brightness (a fifth as bright as Pluto), Makemake was not discovered until after many much fainter Kuiper belt objects. Most searches for minor planets are conducted relatively close to the ecliptic (the region of the sky that the Sun, Moon, and planets appear to lie in, as seen from Earth), due to the greater likelihood of finding objects there. It probably escaped detection during the earlier surveys due to its relatively high orbital inclination, and the fact that it was at its farthest distance from the ecliptic at the time of its discovery, in the northern constellation of Coma Berenices.
Makemake is the brightest trans-Neptunian object after Pluto; with an apparent magnitude of 16.2 in late 1930, it was theoretically bright enough to have been discovered by Clyde Tombaugh, whose search for trans-Neptunian objects was sensitive to objects up to magnitude 17. Indeed, in 1934 Tombaugh reported that there were no other planets out to a magnitude of 16.5 and an inclination of 17 degrees, or of greater inclination but within 50 degrees of either node.
And Makemake was there: At the time of Tombaugh's survey (1930–1943), Makemake varied from 5.5 to 13.2 degrees from the ecliptic, moving across Auriga, starting near the northwest corner of Taurus and cutting across a corner of Gemini. The starting position, however, was very close to the galactic anticenter, and Makemake would have been almost impossible to find against the dense background of stars. Tombaugh continued searching for thirteen years after his discovery of Pluto (and Makemake, though growing dimmer, was still magnitude 16.6 in early 1943, the last year of his search), but by then he was searching higher latitudes and did not find any more objects orbiting beyond Neptune.
Name and symbol
The provisional designation was given to Makemake when the discovery was made public. Before that, the discovery team used the codename "Easterbunny" for the object, because of its discovery shortly after Easter.
In July 2008, in accordance with IAU rules for classical Kuiper belt objects, was given the name of a creator deity. The name of Makemake, the creator of humanity and god of fertility in the myths of the Rapa Nui, the native people of Easter Island, was chosen in part to preserve the object's connection with Easter.
Planetary symbols are no longer much used in astronomy. A Makemake symbol is included in Unicode as U+1F77C: it is mostly used by astrologers, but has also been used by NASA. The symbol was designed by Denis Moskowitz and John T. Whelan; it is a traditional petroglyph of Makemake's face stylized to resemble an 'M'. The commercial Solar Fire astrology software uses an alternative symbol (), a crossed variant of a symbol () created by astrologer Henry Seltzer for his commercial software.
Orbit and classification
Makemake is currently almost as far from the Sun as it ever reaches on its orbit. It follows an orbit very similar to that of Haumea: highly inclined at 29° and with a moderate eccentricity of about 0.16. Still, Makemake's orbit is slightly farther from the Sun in terms of both the semi-major axis and perihelion. Its orbital period is 306 years, more than Pluto's 248 years and Haumea's 283 years. Both Makemake and Haumea are currently far from the ecliptic (at an angular distance of almost 29°). Makemake will reach its aphelion in 2033, whereas Haumea passed its aphelion in early 1992.
Makemake is a classical Kuiper belt object (KBO), which means its orbit lies far enough from Neptune to remain stable over the age of the Solar System. Unlike plutinos, which can cross Neptune's orbit due to their 2:3 resonance with the planet, the classical objects have perihelia further from the Sun, free from Neptune's perturbation. Such objects have relatively low eccentricities (e below 0.2) and orbit the Sun in much the same way the planets do. Makemake, however, is a member of the "dynamically hot" class of classical KBOs, meaning that it has a high inclination compared to others in its population. Makemake is, probably coincidentally, near the 13:7 resonance with Neptune.
Physical characteristics
Brightness, size, and rotation
Makemake is currently visually the second-brightest Kuiper belt object after Pluto, with an apparent magnitude of 17.0 at its March opposition. It will pass from its present constellation Coma Berenices to Boötes in November 2028, and is bright enough to be visible using a high-end amateur telescope.
Combining the detection in infrared by the Spitzer Space Telescope and Herschel Space Telescope with the similarities of Pluto's spectrum yielded an estimated diameter from 1,360 to 1,480 km. From the 2011 stellar occultation by Makemake, its dimensions had initially been measured at . However, the occultation data was later reanalyzed, leading to an estimate of without a pole-orientation constraint. Makemake was the fourth dwarf planet recognized, because it has a bright V-band absolute magnitude of 0.05. Makemake has a highly reflective surface with a geometrical albedo of .
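For context, absolute magnitude, geometric albedo and diameter are related by the standard photometric relation D = (1329 km / sqrt(p_V)) × 10^(-H/5). The short Python check below uses the article's H = 0.05 together with an assumed albedo of 0.8 (chosen only for illustration, since the measured value is not reproduced here); the result falls within the 1,360–1,480 km range quoted above.

import math

def diameter_km(abs_mag_v, geometric_albedo):
    # Standard relation between diameter, V-band absolute magnitude H and geometric albedo p_V.
    return 1329.0 / math.sqrt(geometric_albedo) * 10.0 ** (-abs_mag_v / 5.0)

print(round(diameter_km(0.05, 0.8)))  # about 1452 km (illustrative albedo value)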
The rotation period of Makemake is estimated at 22.83 hours. A rotation period of 7.77 hours published in 2009 later turned out to be an alias of the actual rotation period. The possibility of this had been mentioned in the 2009 study, and the data from that study agrees well with the 22.83-hour period. This rotation period is relatively long for a dwarf planet. Part of this may be due to tidal acceleration from Makemake's satellite. It has been suggested that a second large, undiscovered satellite might better explain the dwarf planet's unusually long rotation.
Makemake's lightcurve amplitude is small, only 0.03 mag. This was thought to be due to Makemake currently being viewed pole on from Earth; however, S/2015 (136472) 1's orbital plane (which is probably orbiting with little inclination relative to Makemake's equator due to tidal effects) is edge-on from Earth, implying that Makemake is being viewed equator-on.
Spectra and surface
Like Pluto, Makemake appears red in the visible spectrum, and significantly redder than the surface of Eris (see colour comparison of TNOs). The near-infrared spectrum is marked by the presence of the broad methane (CH4) absorption bands. Methane is observed also on Pluto and Eris, but its spectral signature is much weaker.
Spectral analysis of Makemake's surface revealed that methane must be present in the form of large grains at least one centimetre in size. Large amounts of ethane and tholins, as well as smaller amounts of ethylene, acetylene, and high-mass alkanes (like propane), may be present, most likely created by photolysis of methane by solar radiation. The tholins are probably responsible for the red color of the visible spectrum. Although evidence exists for the presence of nitrogen ice on its surface, at least mixed with other ices, there is nowhere near the same level of nitrogen as on Pluto and Triton, where it composes more than 98 percent of the crust. The relative lack of nitrogen ice suggests that its supply of nitrogen has somehow been depleted over the age of the Solar System.
The far-infrared (24–70 μm) and submillimeter (70–500 μm) photometry performed by Spitzer and Herschel telescopes revealed that the surface of Makemake is not homogeneous. Although the majority of it is covered by nitrogen and methane ices, where the albedo ranges from 78 to 90%, there are small patches of dark terrain whose albedo is only 2 to 12%, and that make up 3 to 7% of the surface. These studies were made before S/2015 (136472) 1 was discovered; thus, these small dark patches may have instead been the dark surface of the satellite rather than any actual surface features on Makemake.
However, some experiments have refuted these studies. Spectroscopic studies, collected from 2005 to 2008 using the William Herschel Telescope (La Palma, Spain) were analyzed together with other spectra in the literature, as of 2014. They show some degree of variation in the spectral slope, which would be associated with different abundance of the complex organic materials, byproducts of the irradiation of the ices present on the surface of Makemake. However, the relative ratio of the two dominant icy species, methane, and nitrogen, remains quite stable on the surface revealing a low degree of inhomogeneity in the ice component. These results were recently confirmed when the Telescopio Nazionale Galileo acquired new visible and near infra-red spectra for Makemake, between 2006 and 2013, that covered nearly 80% of its surface; this study found that the variations in the spectra were negligible, suggesting that Makemake's surface may indeed be homogenous. Based on optical observations conducted between 2006 and 2017, Hromakina et al. concluded that Makemake's lightcurve was likely due to heterogeneities across its surface, but that the variations (of the order of 3%) were too small to have been detected spectroscopically.
More recent research shows that Eris, Pluto and Makemake display signs of noticeable geothermal activity and could likely harbor active subsurface oceans, rebutting earlier speculation that such distant celestial objects are uninhabitable.
Atmosphere
Makemake was expected to have an atmosphere similar to that of Pluto but with a lower surface pressure. However, on 23 April 2011, Makemake passed in front of an 18th-magnitude star and abruptly blocked its light. The results showed that Makemake presently lacks a substantial atmosphere and placed an upper limit of 0.4–1.2 millipascals on the pressure at its surface.
The presence of methane and possibly nitrogen suggests that Makemake could have a transient atmosphere similar to that of Pluto near its perihelion. Nitrogen, if present, will be the dominant component of it. The existence of an atmosphere also provides a natural explanation for the nitrogen depletion: because the gravity of Makemake is weaker than that of Pluto, Eris and Triton, a large amount of nitrogen was probably lost via atmospheric escape; methane is lighter than nitrogen, but has significantly lower vapor pressure at temperatures prevalent at the surface of Makemake (32–36 K), which hinders its escape; the result of this process is a higher relative abundance of methane. However, studies of Pluto's atmosphere by New Horizons suggest that methane, not nitrogen, is the dominant escaping gas, suggesting that the reasons for Makemake's absence of nitrogen may be more complicated.
Satellite
Makemake has a single discovered moon, S/2015 (136472) 1, nicknamed MK2. It was seen 21,000 km (13,000 mi) from the dwarf planet, and its diameter is estimated at (for an assumed albedo of 4%).
Exploration
Makemake was observed from afar by the New Horizons spacecraft in October 2007 and January 2017, from distances of 52 AU and 70 AU, respectively. The spacecraft's outbound trajectory permitted observations of Makemake at high phase angles that are otherwise unobtainable from Earth, enabling the determination of the light scattering properties and phase curve behavior of Makemake's surface.
It has been calculated that a flyby mission to Makemake could take just over 16 years using a Jupiter gravity assist, based on a launch date of 24 August 2036. Makemake would be approximately 52 AU from the Sun when the spacecraft arrives.
| Physical sciences | Solar System | Astronomy |
2343883 | https://en.wikipedia.org/wiki/Depression%20%28geology%29 | Depression (geology) | In geology, a depression is a landform sunken or depressed below the surrounding area. Depressions form by various mechanisms.
Types
Erosion-related:
Blowout: a depression created by wind erosion typically in either a partially vegetated sand dune ecosystem or dry soils (such as a post-glacial loess environment).
Glacial valley: a depression carved by erosion by a glacier.
River valley: a depression carved by fluvial erosion by a river.
Area of subsidence caused by the collapse of an underlying structure, such as sinkholes in karst terrain.
Sink: an endorheic depression generally containing a persistent or intermittent (seasonal) lake, a salt flat (playa) or dry lake, or an ephemeral lake.
Panhole: a shallow depression or basin eroded into flat or gently sloping, cohesive rock.
Collapse-related:
Sinkhole: a depression formed as a result of the collapse of rocks lying above a hollow. This is common in karst regions.
Kettle: a shallow, sediment-filled body of water formed by melting glacial remnants in terminal moraines.
Thermokarst hollow: caused by volume loss of the ground as the result of permafrost thawing.
Impact-related:
Impact crater: a depression created by an impact, such as a meteorite crater.
Sedimentary-related:
Sedimentary basin: in sedimentology, an area thickly filled with sediment in which the weight of the sediment further depresses the floor of the basin.
Structural or tectonic-related:
Structural basin: a syncline-like depression; a region of tectonic downwarping as a result of isostasy (the Hawaiian Trough is an example) or subduction (such as the Chilean Central Valley).
Graben or rift valley: fallen and typically linear depressions or basins created by rifting in a region under tensional tectonic forces.
Pull-apart basin caused by offset in a strike-slip or transform fault (example: the Dead Sea area).
Oceanic trench: a deep linear depression on the ocean floor. Oceanic trenches are caused by subduction (when one tectonic plate is pushed underneath another) of oceanic crust beneath either the oceanic crust or continental crust.
A basin formed by an ice sheet: an area depressed by the weight of the ice sheet resulting in post-glacial rebound after the ice melts (the area adjacent to the ice sheet may be pulled down to create a peripheral depression.)
Volcanism-related:
Caldera: a volcanic depression resulting from collapse following a volcanic eruption.
Pit crater: a volcanic depression smaller than a caldera formed by a sinking, or caving in, of the ground surface lying over a void.
Maar: a depression resulting from phreatomagmatic eruption or diatreme explosion.
List of depressions
Aral–Caspian Depression
Baetic Depression
Bodélé Depression
Caspian Depression
Danakil Depression
Eider-Treene Depression
Georgia Depression
Giurgeu-Brașov Depression
Godzareh Depression
Huancabamba Depression
Kara Depression
Karashor Depression
Kuma–Manych Depression
Kuznetsk Depression
Mari Depression
Mourdi Depression
Qattara Depression
Regen Depression
Ronda Depression
Táchira Depression
Tunkin Depression
Turan Depression
Turpan Depression
Tuva Depression
Upemba Depression
Weser Depression
Wittlich Depression
Wümme Depression
| Physical sciences | Landforms: General | Earth science |
9853321 | https://en.wikipedia.org/wiki/Interstitial%20defect | Interstitial defect | In materials science, an interstitial defect is a type of point crystallographic defect where an atom of the same or of a different type occupies an interstitial site in the crystal structure. When the atom is of the same type as those already present they are known as a self-interstitial defect. Alternatively, small atoms in some crystals may occupy interstitial sites, such as hydrogen in palladium. Interstitials can be produced by bombarding a crystal with elementary particles having energy above the displacement threshold for that crystal, but they may also exist in small concentrations in thermodynamic equilibrium. The presence of interstitial defects can modify the physical and chemical properties of a material.
History
The idea of interstitial compounds originated in the late 1930s; they are often called Hägg phases, after Gunnar Hägg. Transition metals generally crystallise in either the hexagonal close packed or face centered cubic structures, both of which can be considered to be made up of layers of hexagonally close packed atoms. In both of these very similar lattices there are two sorts of interstice, or hole:
Two tetrahedral holes per metal atom, i.e. the hole is between four metal atoms
One octahedral hole per metal atom, i.e. the hole is between six metal atoms
It was suggested by early workers that:
the metal lattice was relatively unaffected by the interstitial atom
the electrical conductivity was comparable to that of the pure metal
there was a range of composition
the type of interstice occupied was determined by the size of the atom
These were not viewed as compounds, but rather as solutions, of say carbon, in the metal lattice, with a limiting upper “concentration” of the smaller atom that was determined by the number of interstices available.
Current
A more detailed knowledge of the structures of metals, and binary and ternary phases of metals and non metals shows that:
generally at low concentrations of the small atom, the phase can be described as a solution, and this approximates to the historical description of an interstitial compound above.
at higher concentrations of the small atom, phases with different lattice structures may be present, and these may have a range of stoichiometries.
One example is the solubility of carbon in iron. The form of pure iron stable between 910 °C and 1390 °C, γ-iron, forms an interstitial solid solution with carbon termed austenite, a constituent of steel.
Self-interstitials
Self-interstitial defects are interstitial defects which contain only atoms which are the same as those already present in the lattice.
The structure of interstitial defects has been experimentally determined in some metals and semiconductors.
Contrary to what one might intuitively expect, most self-interstitials in metals with a known structure have a 'split' structure, in which two atoms share the same lattice site. Typically the center of mass of the two atoms is at the lattice site, and they are displaced symmetrically from it along one of the principal lattice directions. For instance, in several common face-centered cubic (fcc) metals such as copper, nickel and platinum, the ground state structure of the self-interstitial is the split [100] interstitial structure, where two atoms are displaced in a positive and negative [100] direction from the lattice site. In body-centered cubic (bcc) iron the ground state interstitial structure is similarly a [110] split interstitial.
These split interstitials are often called dumbbell interstitials, because plotting the two atoms forming the interstitial with two large spheres and a thick line joining them makes the structure resemble a dumbbell weight-lifting device.
In other bcc metals than iron, the ground state structure is believed based on recent density-functional theory calculations to be the [111] crowdion interstitial, which can be understood as a long chain (typically some 10–20) of atoms along the [111] lattice direction, compressed compared to the perfect lattice such that the chain contains one extra atom.
In semiconductors the situation is more complex, since defects may be charged and different charge states may have different structures. For instance, in silicon, the interstitial may either have a split [110] structure or a tetrahedral truly interstitial one.
Carbon, notably in graphite and diamond, has a number of interesting self-interstitials. One recently discovered using local-density approximation calculations is the "spiro-interstitial" in graphite, named after spiropentane, as the interstitial carbon atom is situated between two basal planes and bonded in a geometry similar to spiropentane.
Impurity interstitials
Small impurity interstitial atoms are usually on true interstitial sites between the lattice atoms. Large impurity interstitials can also be in split interstitial configurations together with a lattice atom, similar to those of the self-interstitial atom.
Effects of interstitials
Interstitials modify the physical and chemical properties of materials.
Interstitial carbon atoms have a crucial role for the properties and processing of steels, in particular carbon steels.
Impurity interstitials can be used e.g. for storage of hydrogen in metals.
The crystal lattice can expand with the concentration of impurity interstitials.
The amorphization of semiconductors such as silicon during ion irradiation is often explained by the build up of a high concentration of interstitials leading eventually to the collapse of the lattice as it becomes unstable.
Creation of large amounts of interstitials in a solid can lead to a significant energy buildup, which on release can even lead to severe accidents in certain old types of nuclear reactors (Wigner effect). The high-energy states can be released by annealing.
At least in fcc lattices, interstitials have a large diaelastic softening effect on the material.
It has been proposed that interstitials are related to the onset of melting and the glass transition.
| Physical sciences | Alloys and ceramic compounds | Chemistry |
9133841 | https://en.wikipedia.org/wiki/Smooth-coated%20otter | Smooth-coated otter | The smooth-coated otter (Lutrogale perspicillata) is a freshwater otter species from regions of South and Southwest Asia, with the majority of its numbers found in Southeast Asia. It has been ranked as "vulnerable" on the IUCN Red List since 1996, as it is threatened by habitat loss, pollution of wetlands and poaching for the illegal wildlife trade. As the common name indicates, its fur is relatively smooth, and somewhat shorter in length than that of other otter species.
Characteristics
The smooth-coated otter has a short, sleek coat of dark-brown to reddish-brown fur along its back, with lighter grayish brown on its underside. It is distinguished from other otter species by a more "rounded" head, and by having a vaguely diamond-shaped, hairless nose. The tail is flattened, in contrast to the more rounded or cylindrical tails of other otters. The legs are short and strong, with large, webbed feet bearing strong and sharp claws for handling slippery fish. The smooth-coated otter is a relatively large otter species, weighing from and measuring around in head-body length with a long tail. Females have two pairs of teats with which they nurse small litters of several young.
Taxonomy
Lutra perspicillata was the scientific name proposed by Isidore Geoffroy Saint-Hilaire in 1826 for a "brown" otter collected in Sumatra. Lutrogale was proposed as the generic name by John Edward Gray in 1865 for otters with a more convex forehead and nose, using perspicillata as the type species. During the 20th century, further forms were described on the basis of early zoological specimens, including:
Lutrogale perspicillata sindica — proposed by Reginald Innes Pocock in 1940, when seven pale skins of smooth-coated otters were collected and documented in Sukkur and Khairpur Districts of Sindh Province, Pakistan.
Lutrogale perspicillata maxwelli — proposed by Robert William Hayman in 1957, and named after British naturalist and author Gavin Maxwell, after a dark-brown, adult male smooth-coated otter was collected on the banks of the Tigris River, Iraq.
The smooth-coated otter is the only living species in the monotypic genus Lutrogale. Three regional subspecies are currently recognised:
L. p. perspicillata — found across India, Nepal, southwestern Yunnan, most of mainland Indochina and Southeast Asia, as well as on Sumatra and Java, Indonesia.
L. p. sindica — found in Khyber Pakhtunkhwa, Punjab and Sindh Provinces, Pakistan.
L. p. maxwelli — found in Iraq, primarily near the Tigris River.
The smooth-coated otter, together with the Asian small-clawed otter and the African clawless otter, form a sister clade to the genus Lutra. The smooth-coated otter and the Asian small-clawed otter genetically diverged about 1.33 ± 0.78 million years ago. Hybridisation of smooth-coated otter males with Asian small-clawed otter females has occurred in Singapore. The resulting offspring and their descendants then bred back into the smooth-coated otter population, but maintained the genes of their small-clawed otter ancestors. Today, an urban population of at least 60 hybrid otters exists in Singapore.
Distribution and habitat
The smooth-coated otter is distributed in Pakistan, India, Nepal, Bhutan, Bangladesh, southern China, Myanmar, Thailand, Vietnam, Peninsular Malaysia, Singapore, and on Borneo, Sumatra and Java. An isolated population lives in the marshes of Iraq.
It has often been recorded in saltwater near the coast, especially on smaller islands, but requires a nearby source of freshwater.
It inhabits areas where fresh water is plentiful such as wetlands, seasonal swamps, rivers, lakes and rice paddies. Where it is the only occurring otter species, it lives in almost any suitable habitat. But where it is sympatric with other otter species, it avoids smaller streams and canals in favour of larger water bodies.
Smooth-coated otter groups studied in the Moyar River preferred rocky areas near fast flowing river segments with loose sand and little vegetation cover.
The population in the Mesopotamian Marshes was feared to have perished, but otter tracks were found in 2009, suggesting the population may have survived. Skins of smooth-coated otters were found during surveys between 2005 and 2012 in the vicinity of Hammar and Hawizeh Marshes. Tracks and scat found in Erbil Province were also thought to have been left by smooth-coated otters.
In Gujarat, smooth-coated otters were documented near lakes, canals and mangroves in the outskirts of Surat in 2015. In Singapore, smooth-coated otters have adapted well to urban environments, and have been observed to use urban structures like gaps under buildings as alternatives for holts. They also use staircases and ladders to get in and out of concrete canals with vertical or near‐vertical banks.
This population is well-protected and steadily increasing, with some families, such as the Bishan otter family, becoming a common sight and attracting media attention.
Behaviour and ecology
The smooth-coated otter lives in groups of up to 11 individuals. They rest on sandy riverbanks and establish their dens under tree roots or among boulders. Observations in Peninsular Malaysia indicate that they are active foremost during the day, with a short rest during midday. They mark their playground by urinating and sprainting on rocks or vegetation.
They communicate through vocalisations such as whistles, chirps, and wails.
Diet
Smooth-coated otters were observed to forage on river banks among tree trunks.
They feed mainly on fish including Trichogaster, climbing gourami and catfish. During the rice planting season, they also hunt rats in rice fields. Snakes, amphibians and insects constitute a small portion of their diet. Especially in areas where they share habitat with other otter species, they prefer larger fish, typically between in length.
In Kuala Selangor Nature Park, an otter group was observed hunting. They formed an undulating, slightly V-shaped line, pointing in the direction of movement and nearly as wide as the creek. The largest individuals occupied the middle section. In this formation, they undulated wildly through the creek, causing panic‑stricken fish to jump out of the water a few metres ahead. They suddenly dived and grasped the fish with their snouts. Then they moved ashore, tossed the fish up a little on the muddy part of the bank, and swallowed it head‑first in one piece.
Reproduction
Smooth-coated otters form small family groups of a mated pair with up to four offspring from previous seasons.
Copulation occurs in water and lasts less than one minute.
As long as the food supply is sufficient, they breed throughout the year, but where they depend on monsoon precipitation, they breed between October and February. The largest recorded wild-born litter of seven pups was observed in Singapore in November 2017.
Pups are born after a gestation period of 60 to 63 days, with a usual litter size of up to five pups. The mothers give birth to and raise their young in a burrow near water. They either construct such a burrow themselves, or they take over an abandoned one. At birth, the pups are blind and helpless, but their eyes open after 10 days. They are weaned at about three to five months and reach adult size at about one year of age, and sexual maturity at two or three years.
Threats
The smooth-coated otter is threatened by poaching, loss and destruction of wetlands, as these are converted for settlements, agriculture and hydroelectric projects; water courses are being polluted by pesticides such as chlorinated hydrocarbons and organophosphates. These factors lead to a reduced prey base. Otters are indiscriminately killed especially at aquaculture sites. Trapping of otters is prevalent in India, Nepal, and Bangladesh.
Along the Chambal River in India, smooth-coated otters are most vulnerable during winter when they rear young. During this season, they are disturbed by humans harvesting crops and removing wood along rocky stretches of the river.
Six juvenile smooth-coated otters were discovered in a bag left at Bangkok International Airport in January 2013. This was the first case of smooth-coated otters thought to have been destined for the illegal pet trade.
At least seven smooth-coated otters were offered for sale through websites by traders in Thailand and Malaysia between 2016 and 2017.
Conservation
The smooth-coated otter is a protected species in most range countries and is listed globally as a vulnerable species. It was listed in CITES Appendix II from 1977. Since August 2019, it has been included in CITES Appendix I, strengthening its protection with regard to international trade.
Cultural significance
In southern Bangladesh, smooth-coated otters are used for commercial fishing. They are bred in captivity and trained to chase fish into fishing nets. By 2011, this fishing technique was used by about 300 fishermen, with an additional 2,000 people indirectly dependent on the technique for their livelihood.
| Biology and health sciences | Mustelidae | Animals |
9134092 | https://en.wikipedia.org/wiki/Geodynamics | Geodynamics | Geodynamics is a subfield of geophysics dealing with dynamics of the Earth. It applies physics, chemistry and mathematics to the understanding of how mantle convection leads to plate tectonics and geologic phenomena such as seafloor spreading, mountain building, volcanoes, earthquakes, faulting. It also attempts to probe the internal activity by measuring magnetic fields, gravity, and seismic waves, as well as the mineralogy of rocks and their isotopic composition. Methods of geodynamics are also applied to exploration of other planets.
Overview
Geodynamics is generally concerned with processes that move materials throughout the Earth. In the Earth's interior, movement happens when rocks melt or deform and flow in response to a stress field. This deformation may be brittle, elastic, or plastic, depending on the magnitude of the stress and the material's physical properties, especially the stress relaxation time scale. Rocks are structurally and compositionally heterogeneous and are subjected to variable stresses, so it is common to see different types of deformation in close spatial and temporal proximity. When working with geological timescales and lengths, it is convenient to use the continuous medium approximation and equilibrium stress fields to consider the average response to average stress.
Experts in geodynamics commonly use data from geodetic GPS, InSAR, and seismology, along with numerical models, to study the evolution of the Earth's lithosphere, mantle and core.
Work performed by geodynamicists may include:
Modeling brittle and ductile deformation of geologic materials
Predicting patterns of continental accretion and breakup of continents and supercontinents
Observing surface deformation and relaxation due to ice sheets and post-glacial rebound, and making related conjectures about the viscosity of the mantle
Finding and understanding the driving mechanisms behind plate tectonics.
Deformation of rocks
Rocks and other geological materials experience strain in three distinct modes: elastic, plastic, and brittle, depending on the properties of the material and the magnitude of the stress field. Stress is defined as the average force per unit area exerted on each part of the rock. Pressure is the part of stress that changes the volume of a solid; shear stress changes the shape. If there is no shear, the fluid is in hydrostatic equilibrium. Since, over long periods, rocks readily deform under pressure, the Earth is in hydrostatic equilibrium to a good approximation. The pressure on rock depends only on the weight of the rock above, and this depends on gravity and the density of the rock. In a body like the Moon, the density is almost constant, so a pressure profile is readily calculated. In the Earth, the compression of rocks with depth is significant, and an equation of state is needed to calculate changes in density of rock even when it is of uniform composition.
Elastic
Elastic deformation is always reversible, which means that if the stress field associated with elastic deformation is removed, the material will return to its previous state. Materials only behave elastically when the relative arrangement of material components (e.g. atoms or crystals) along the axis being considered remains unchanged. This means that the magnitude of the stress cannot exceed the yield strength of a material, and the time scale of the stress cannot approach the relaxation time of the material. If stress exceeds the yield strength of a material, bonds begin to break (and reform), which can lead to ductile or brittle deformation.
Ductile
Ductile or plastic deformation happens when the temperature of a system is high enough so that a significant fraction of the material microstates (figure 1) are unbound, which means that a large fraction of the chemical bonds are in the process of being broken and reformed. During ductile deformation, this process of atomic rearrangement redistributes stress and strain towards equilibrium faster than they can accumulate. Examples include bending of the lithosphere under volcanic islands or sedimentary basins, and bending at oceanic trenches. Ductile deformation happens when transport processes such as diffusion and advection that rely on chemical bonds to be broken and reformed redistribute strain about as fast as it accumulates.
Brittle
When strain localizes faster than these relaxation processes can redistribute it, brittle deformation occurs. The mechanism for brittle deformation involves a positive feedback between the accumulation or propagation of defects especially those produced by strain in areas of high strain, and the localization of strain along these dislocations and fractures. In other words, any fracture, however small, tends to focus strain at its leading edge, which causes the fracture to extend.
In general, the mode of deformation is controlled not only by the amount of stress, but also by the distribution of strain and strain associated features. Whichever mode of deformation ultimately occurs is the result of a competition between processes that tend to localize strain, such as fracture propagation, and relaxational processes, such as annealing, that tend to delocalize strain.
Deformation structures
Structural geologists study the results of deformation, using observations of rock, especially the mode and geometry of deformation to reconstruct the stress field that affected the rock over time. Structural geology is an important complement to geodynamics because it provides the most direct source of data about the movements of the Earth. Different modes of deformation result in distinct geological structures, e.g. brittle fracture in rocks or ductile folding.
Thermodynamics
The physical characteristics of rocks that control the rate and mode of strain, such as yield strength or viscosity, depend on the thermodynamic state of the rock and composition. The most important thermodynamic variables in this case are temperature and pressure. Both of these increase with depth, so to a first approximation the mode of deformation can be understood in terms of depth. Within the upper lithosphere, brittle deformation is common because under low pressure rocks have relatively low brittle strength, while at the same time low temperature reduces the likelihood of ductile flow. After the brittle-ductile transition zone, ductile deformation becomes dominant. Elastic deformation happens when the time scale of stress is shorter than the relaxation time for the material. Seismic waves are a common example of this type of deformation. At temperatures high enough to melt rocks, the ductile shear strength approaches zero, which is why shear mode elastic deformation (S-Waves) will not propagate through melts.
Forces
The main motive force behind stress in the Earth is provided by thermal energy from radioisotope decay, friction, and residual heat. Cooling at the surface and heat production within the Earth create a metastable thermal gradient from the hot core to the relatively cool lithosphere. This thermal energy is converted into mechanical energy by thermal expansion. Deeper and hotter rocks often have higher thermal expansion and lower density relative to overlying rocks. Conversely, rock that is cooled at the surface can become less buoyant than the rock below it. Eventually this can lead to a Rayleigh-Taylor instability (Figure 2), or interpenetration of rock on different sides of the buoyancy contrast.
Negative thermal buoyancy of the oceanic plates is the primary cause of subduction and plate tectonics, while positive thermal buoyancy may lead to mantle plumes, which could explain intraplate volcanism. The relative importance of heat production vs. heat loss for buoyant convection throughout the whole Earth remains uncertain and understanding the details of buoyant convection is a key focus of geodynamics.
Methods
Geodynamics is a broad field which combines observations from many different types of geological study into a broad picture of the dynamics of Earth. Close to the surface of the Earth, data includes field observations, geodesy, radiometric dating, petrology, mineralogy, drilling boreholes and remote sensing techniques. However, beyond a few kilometers depth, most of these kinds of observations become impractical. Geologists studying the geodynamics of the mantle and core must rely entirely on remote sensing, especially seismology, and experimentally recreating the conditions found in the Earth in high pressure, high temperature experiments (see also the Adams–Williamson equation).
Numerical modeling
Because of the complexity of geological systems, computer modeling is used to test theoretical predictions about geodynamics using data from these sources.
There are two main approaches to geodynamic numerical modelling:
Modelling to reproduce a specific observation: This approach aims to answer what causes a specific state of a particular system.
Modelling to produce basic fluid dynamics: This approach aims to answer how a specific system works in general.
Basic fluid dynamics modelling can further be subdivided into instantaneous studies, which aim to reproduce the instantaneous flow in a system due to a given buoyancy distribution, and time-dependent studies, which either aim to reproduce a possible evolution of a given initial condition over time or a statistical (quasi) steady-state of a given system.
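As an illustration of what a simple time-dependent calculation of this kind can look like in practice, the following minimal Python sketch evolves a one-dimensional temperature profile by explicit finite-difference diffusion; it is not taken from any study cited here, and the diffusivity, grid and boundary temperatures are arbitrary assumptions chosen only to make the example run.

import numpy as np

# Toy 1D heat diffusion, dT/dt = kappa * d2T/dz2, solved with explicit finite differences.
# All parameter values are illustrative assumptions, not data from this article.
kappa = 1e-6              # thermal diffusivity of rock, m^2/s (typical order of magnitude)
depth = 100e3             # model depth, m
n = 101                   # number of grid points
dz = depth / (n - 1)      # grid spacing, m
dt = 0.2 * dz**2 / kappa  # time step satisfying the explicit stability limit
T = np.full(n, 1300.0)    # initial interior temperature, degrees C
T[0] = 0.0                # cold surface boundary

for step in range(2000):  # march the profile forward in time
    T[1:-1] += dt * kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    T[0], T[-1] = 0.0, 1300.0   # hold surface and base temperatures fixed

print("elapsed time: %.1f Myr" % (2000 * dt / 3.15e13))
print("temperature at 10 km depth: %.0f C" % T[10])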
| Physical sciences | Geophysics | Earth science |
4387132 | https://en.wikipedia.org/wiki/Gravity%20of%20Earth | Gravity of Earth | The gravity of Earth, denoted by g, is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation).
It is a vector quantity, whose direction coincides with a plumb bob and whose strength or magnitude is given by the norm ‖g‖.
In SI units, this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is 9.8 m/s2 (32 ft/s2). This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about 9.8 metres per second every second. This quantity is sometimes referred to informally as little g (in contrast, the gravitational constant G is referred to as big G).
The precise strength of Earth's gravity varies with location. The agreed-upon standard value is 9.80665 m/s2 by definition. This quantity is denoted variously as gn, ge (though this sometimes means the normal gravity at the equator), g0, or simply g (which is also used for the variable local value).
The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or F = ma (force = mass times acceleration). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects.
Variation in magnitude
A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid. There are consequently slight deviations in the magnitude of gravity across its surface.
Gravity on the Earth's surface varies by around 0.7%, from 9.7639 m/s2 on the Nevado Huascarán mountain in Peru to 9.8337 m/s2 at the surface of the Arctic Ocean. In large cities, it ranges from 9.7806 m/s2 in Kuala Lumpur, Mexico City, and Singapore to 9.825 m/s2 in Oslo and Helsinki.
Conventional value
In 1901, the third General Conference on Weights and Measures defined a standard gravitational acceleration for the surface of the Earth: gn = 9.80665 m/s2. It was based on measurements at the Pavillon de Breteuil near Paris in 1888, with a theoretical correction applied in order to convert to a latitude of 45° at sea level. This definition is thus not a value of any particular place or carefully worked out average, but an agreement for a value to use if a better actual local value is not known or not important. It is also used to define the units kilogram force and pound force.
Latitude
The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the outward centrifugal force produced by Earth's rotation is larger than at polar latitudes. This counteracts the Earth's gravity to a small degree – up to a maximum of 0.3% at the Equator – and reduces the apparent downward acceleration of falling objects.
The second major reason for the difference in gravity at different latitudes is that the Earth's equatorial bulge (itself also caused by centrifugal force from rotation) causes objects at the Equator to be further from the planet's center than objects at the poles. The force due to gravitational attraction between two masses (a piece of the Earth and the object being weighed) varies inversely with the square of the distance between them. The distribution of mass is also different below someone on the equator and below someone at a pole. The net result is that an object at the Equator experiences a weaker gravitational pull than an object on one of the poles.
In combination, the equatorial bulge and the effects of the surface centrifugal force due to rotation mean that sea-level gravity increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles, so an object will weigh approximately 0.5% more at the poles than at the Equator.
Altitude
Gravity decreases with altitude as one rises above the Earth's surface because greater altitude means greater distance from the Earth's centre. All other things being equal, an increase in altitude from sea level to 9,000 metres causes a weight decrease of about 0.29%. (An additional factor affecting apparent weight is the decrease in air density at altitude, which lessens an object's buoyancy. This would increase a person's apparent weight at an altitude of 9,000 metres by about 0.08%)
It is a common misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity. In fact, at an altitude of about 400 kilometres (250 mi), equivalent to a typical orbit of the ISS, gravity is still nearly 90% as strong as at the Earth's surface. Weightlessness actually occurs because orbiting objects are in free-fall.
The effect of ground elevation depends on the density of the ground (see Slab correction section). A person flying at above sea level over mountains will feel more gravity than someone at the same elevation but over the sea. However, a person standing on the Earth's surface feels less gravity when the elevation is higher.
The following formula approximates the Earth's gravity variation with altitude:
g_h = g_0 (R_e / (R_e + h))^2
where
g_h is the gravitational acceleration at height h above sea level,
R_e is the Earth's mean radius, and
g_0 is the standard gravitational acceleration.
The formula treats the Earth as a perfect sphere with a radially symmetric distribution of mass; a more accurate mathematical treatment is discussed below.
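As a quick check of this approximation, the short Python snippet below evaluates g_h = g_0 (R_e / (R_e + h))^2 at a few heights; the radius and standard-gravity values are the conventional figures quoted elsewhere in this article, and the specific heights are only illustrative.

def g_at_altitude(h, g0=9.80665, Re=6.371e6):
    """Free-air approximation: gravity (m/s^2) at height h metres above sea level."""
    return g0 * (Re / (Re + h)) ** 2

for h in (0.0, 9000.0, 400e3):   # sea level, high-mountain altitude, ISS-scale orbit
    print("h = %7.1f km   g = %.3f m/s^2" % (h / 1000.0, g_at_altitude(h)))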
Depth
An approximate value for gravity at a distance r from the center of the Earth can be obtained by assuming that the Earth's density is spherically symmetric. The gravity depends only on the mass inside the sphere of radius r. All the contributions from outside cancel out as a consequence of the inverse-square law of gravitation. Another consequence is that the gravity is the same as if all the mass were concentrated at the center. Thus, the gravitational acceleration at this radius is
g(r) = G M(r) / r^2
where G is the gravitational constant and M(r) is the total mass enclosed within radius r. If the Earth had a constant density ρ, the mass would be M(r) = (4/3) π ρ r^3 and the dependence of gravity on depth would be
g(r) = (4/3) π G ρ r
The gravity g′ at depth d is given by g′ = g (1 − d/R), where g is the acceleration due to gravity on the surface of the Earth, d is depth and R is the radius of the Earth.
If the density decreased linearly with increasing radius from a density ρ0 at the center to a density ρ1 at the surface, then ρ(r) = ρ0 − (ρ0 − ρ1) r/R, and the dependence would be
g(r) = (4/3) π G ρ0 r − π G (ρ0 − ρ1) r^2 / R
The actual depth dependence of density and gravity, inferred from seismic travel times (see Adams–Williamson equation), is shown in the graphs below.
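The two idealised profiles above can be compared numerically. The sketch below is a rough illustration only; the mean density, and the assumed centre and surface densities for the linear case, are round numbers rather than the seismically inferred values mentioned in the text.

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
R = 6.371e6        # radius of the Earth, m
rho_mean = 5515.0  # mean density of the Earth, kg/m^3

def g_uniform(r):
    """Gravity inside a uniform-density Earth: g(r) = (4/3) pi G rho r."""
    return 4.0 / 3.0 * math.pi * G * rho_mean * r

def g_linear(r, rho0=12000.0, rho1=2600.0):
    """Gravity when density falls linearly from rho0 at the centre to rho1 at the surface."""
    return (4.0 / 3.0 * math.pi * G * rho0 * r
            - math.pi * G * (rho0 - rho1) * r ** 2 / R)

for frac in (0.25, 0.5, 0.75, 1.0):
    r = frac * R
    print("r/R = %.2f   uniform: %5.2f   linear: %5.2f m/s^2"
          % (frac, g_uniform(r), g_linear(r)))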
Local topography and geology
Local differences in topography (such as the presence of mountains), geology (such as the density of rocks in the vicinity), and deeper tectonic structure cause local and regional differences in the Earth's gravitational field, known as gravitational anomalies. Some of these anomalies can be very extensive, resulting in bulges in sea level, and throwing pendulum clocks out of synchronisation.
The study of these anomalies forms the basis of gravitational geophysics. The fluctuations are measured with highly sensitive gravimeters, the effect of topography and other known factors is subtracted, and from the resulting data conclusions are drawn. Such techniques are now used by prospectors to find oil and mineral deposits. Denser rocks (often containing mineral ores) cause higher than normal local gravitational fields on the Earth's surface. Less dense sedimentary rocks cause the opposite.
There is a strong correlation between the gravity map of the Earth derived from NASA's GRACE mission and the positions of recent volcanic activity, ridge spreading and volcanoes: these regions have a stronger gravitational field than theoretical predictions.
Other factors
In air or water, objects experience a supporting buoyancy force which reduces the apparent strength of gravity (as measured by an object's weight). The magnitude of the effect depends on the air density (and hence air pressure) or the water density respectively; see Apparent weight for details.
The gravitational effects of the Moon and the Sun (also the cause of the tides) have a very small effect on the apparent strength of Earth's gravity, depending on their relative positions; typical variations are 2 μm/s2 (0.2 mGal) over the course of a day.
Direction
Gravity acceleration is a vector quantity, with direction in addition to magnitude. In a spherically symmetric Earth, gravity would point directly towards the sphere's centre. As the Earth's figure is slightly flatter, there are consequently significant deviations in the direction of gravity: essentially the difference between geodetic latitude and geocentric latitude. Smaller deviations, called vertical deflection, are caused by local mass anomalies, such as mountains.
Comparative values worldwide
Tools exist for calculating the strength of gravity at various cities around the world. The effect of latitude can be clearly seen with gravity in high-latitude cities: Anchorage (9.826 m/s2), Helsinki (9.825 m/s2), being about 0.5% greater than that in cities near the equator: Kuala Lumpur (9.776 m/s2). The effect of altitude can be seen in Mexico City (9.776 m/s2; altitude ), and by comparing Denver (9.798 m/s2; ) with Washington, D.C. (9.801 m/s2; ), both of which are near 39° N. Measured values can be obtained from Physical and Mathematical Tables by T.M. Yarwood and F. Castle, Macmillan, revised edition 1970.
Mathematical models
If the terrain is at sea level, we can estimate, for the Geodetic Reference System 1980, the acceleration g(φ) at latitude φ:
g(φ) = 9.780327 (1 + 0.0053024 sin^2 φ − 0.0000058 sin^2 2φ) m/s2
This is the International Gravity Formula 1967, the 1967 Geodetic Reference System Formula, Helmert's equation or Clairaut's formula.
An alternative formula for g as a function of latitude is the WGS (World Geodetic System) 84 Ellipsoidal Gravity Formula:
g(φ) = g_e (1 + k sin^2 φ) / sqrt(1 − e^2 sin^2 φ)
where
a, b are the equatorial and polar semi-axes, respectively;
e^2 = 1 − (b/a)^2 is the spheroid's eccentricity, squared;
g_e, g_p are the defined gravity at the equator and poles, respectively;
k = (b g_p − a g_e) / (a g_e) (formula constant);
and where the semi-axes of the earth are a = 6378137.0 m and b = 6356752.314 m.
The difference between the WGS-84 formula and Helmert's equation is less than 0.68 μm·s−2.
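For readers who want to evaluate the WGS-84 formula directly, here is a minimal Python sketch; the ellipsoid and gravity constants are the commonly quoted WGS-84 values and should be checked against an official source before any serious use.

import math

# Commonly quoted WGS-84 constants (verify against an authoritative source).
a, b = 6378137.0, 6356752.314            # equatorial and polar semi-axes, m
g_e, g_p = 9.7803253359, 9.8321849378    # gravity at the equator and the poles, m/s^2
e2 = 1.0 - (b / a) ** 2                  # square of the ellipsoid's eccentricity
k = (b * g_p - a * g_e) / (a * g_e)      # formula constant

def normal_gravity(lat_deg):
    """Normal gravity on the ellipsoid at geodetic latitude lat_deg, in m/s^2."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return g_e * (1.0 + k * s2) / math.sqrt(1.0 - e2 * s2)

for lat in (0, 45, 90):
    print("latitude %2d deg: %.5f m/s^2" % (lat, normal_gravity(lat)))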
Further reductions are applied to obtain gravity anomalies (see: Gravity anomaly#Computation).
Estimating g from the law of universal gravitation
From the law of universal gravitation, the force on a body acted upon by Earth's gravitational force is given by
F = G m1 m / r^2
where r is the distance between the centre of the Earth and the body (see below), and here we take m1 to be the mass of the Earth and m to be the mass of the body.
Additionally, Newton's second law, F = ma, where m is mass and a is acceleration, here tells us that
F = m g
Comparing the two formulas it is seen that:
g = G m1 / r^2
So, to find the acceleration due to gravity at sea level, substitute the values of the gravitational constant, G, the Earth's mass (in kilograms), m1, and the Earth's radius (in metres), r, to obtain the value of g:
g = (6.674 × 10^-11 m^3 kg^-1 s^-2) × (5.972 × 10^24 kg) / (6.371 × 10^6 m)^2 ≈ 9.82 m/s2
This formula only works because of the mathematical fact that the gravity of a uniform spherical body, as measured on or above its surface, is the same as if all its mass were concentrated at a point at its centre. This is what allows us to use the Earth's radius for r.
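The substitution can be reproduced in a few lines of Python; the mass and radius figures below are the commonly quoted rounded values, so the result is only approximate.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg (rounded)
r = 6.371e6     # mean radius of the Earth, m (rounded)

g = G * M / r ** 2
print("g = %.3f m/s^2" % g)   # approximately 9.82 m/s^2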
The value obtained agrees approximately with the measured value of g. The difference may be attributed to several factors, mentioned above under "Variation in magnitude":
The Earth is not homogeneous
The Earth is not a perfect sphere, and an average value must be used for its radius
This calculated value of g only includes true gravity. It does not include the reduction of constraint force that we perceive as a reduction of gravity due to the rotation of Earth, and some of gravity being counteracted by centrifugal force.
There are significant uncertainties in the values of r and m1 as used in this calculation, and the value of G is also rather difficult to measure precisely.
If G, g and r are known then a reverse calculation will give an estimate of the mass of the Earth. This method was used by Henry Cavendish.
Measurement
The measurement of Earth's gravity is called gravimetry.
Satellite measurements
| Physical sciences | Geophysics | Earth science |
4387406 | https://en.wikipedia.org/wiki/Equations%20for%20a%20falling%20body | Equations for a falling body | A set of equations describing the trajectories of objects subject to a constant gravitational force under normal Earth-bound conditions. Assuming constant acceleration g due to Earth's gravity, Newton's law of universal gravitation simplifies to F = mg, where F is the force exerted on a mass m by the Earth's gravitational field of strength g. Assuming constant g is reasonable for objects falling to Earth over the relatively short vertical distances of our everyday experience, but is not valid for greater distances involved in calculating more distant effects, such as spacecraft trajectories.
History
Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water.
The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. The effect of air resistance varies enormously depending on the size and geometry of the falling object—for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.)
The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures.
Overview
Near the surface of the Earth, the acceleration due to gravity g = 9.807 m/s2 (metres per second squared, which might be thought of as "metres per second, per second"; or 32.18 ft/s2 as "feet per second per second") approximately. A coherent set of units for g, d, t and v is essential. Assuming SI units, g is measured in metres per second squared, so d must be measured in metres, t in seconds and v in metres per second.
In all cases, the body is assumed to start from rest, and air resistance is neglected. Generally, in Earth's atmosphere, all results below will therefore be quite inaccurate after only 5 seconds of fall (at which time an object's velocity will be a little less than the vacuum value of 49 m/s (9.8 m/s2 × 5 s) due to air resistance). Air resistance induces a drag force on any body that falls through any atmosphere other than a perfect vacuum, and this drag force increases with velocity until it equals the gravitational force, leaving the object to fall at a constant terminal velocity.
Terminal velocity depends on atmospheric drag, the coefficient of drag for the object, the (instantaneous) velocity of the object, and the area presented to the airflow.
Apart from the last formula, these formulas also assume that g varies negligibly with height during the fall (that is, they assume constant acceleration). The last equation is more accurate where significant changes in fractional distance from the centre of the planet during the fall cause significant changes in g. This equation occurs in many applications of basic physics.
The following equations start from the general equations of linear motion:
and the equation for universal gravitation (r + d = distance of the object above the ground from the centre of mass of the planet):
Equations
Example
The first equation shows that, after one second, an object will have fallen a distance of 1/2 × 9.8 × 1² = 4.9 m. After two seconds it will have fallen 1/2 × 9.8 × 2² = 19.6 m; and so on. On the other hand, the penultimate equation becomes grossly inaccurate at great distances. If an object fell 10000 m to Earth, then the results of both equations differ by only 0.08%; however, if it fell from geosynchronous orbit, which is 42164 km, then the difference changes to almost 64%.
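The arithmetic in this example can be checked directly, assuming the constant-acceleration relations d = (1/2) g t^2 and v = g t used above; the following few lines of Python simply tabulate the first few seconds of fall.

g = 9.8  # m/s^2, as in the example above

for t in range(1, 6):
    d = 0.5 * g * t ** 2   # distance fallen from rest, metres, ignoring air resistance
    v = g * t              # speed reached, m/s
    print("t = %d s: fallen %6.1f m, speed %5.1f m/s" % (t, d, v))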
Based on wind resistance, for example, the terminal velocity of a skydiver in a belly-to-earth (i.e., face down) free-fall position is about 195 km/h (122 mph or 54 m/s). This velocity is the asymptotic limiting value of the acceleration process, because the effective forces on the body balance each other more and more closely as the terminal velocity is approached. In this example, a speed of 50% of terminal velocity is reached after only about 3 seconds, while it takes 8 seconds to reach 90%, 15 seconds to reach 99% and so on.
Higher speeds can be attained if the skydiver pulls in his or her limbs (see also freeflying). In this case, the terminal velocity increases to about 320 km/h (200 mph or 90 m/s), which is almost the terminal velocity of the peregrine falcon diving down on its prey. The same terminal velocity is reached for a typical .30-06 bullet dropping downwards—when it is returning to earth having been fired upwards, or dropped from a tower—according to a 1920 U.S. Army Ordnance study.
For astronomical bodies other than Earth, and for short distances of fall at other than "ground" level, g in the above equations may be replaced by G M / r^2, where G is the gravitational constant, M is the mass of the astronomical body, m is the mass of the falling body, and r is the radius from the falling object to the center of the astronomical body.
Removing the simplifying assumption of uniform gravitational acceleration provides more accurate results. We find from the formula for radial elliptic trajectories:
The time taken for an object to fall from a height r to a height x, measured from the centers of the two bodies, is given by:
t = sqrt( r^3 / (2μ) ) × [ arccos( sqrt(x/r) ) + sqrt( (x/r) (1 − x/r) ) ]
where μ = G(m1 + m2) is the sum of the standard gravitational parameters of the two bodies. This equation should be used whenever there is a significant difference in the gravitational acceleration during the fall.
Note that when x = r this equation gives t = 0, as expected; and when x = 0 it gives t = π sqrt( r^3 / (8μ) ), which is the time to collision.
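As a hedged numerical illustration of the expression above, the Python lines below evaluate the fall time from a geosynchronous-orbit distance down to the Earth's surface; the gravitational parameter is the commonly quoted value for Earth, and the result is illustrative rather than an authoritative figure.

import math

mu = 3.986e14   # Earth's standard gravitational parameter, m^3/s^2 (commonly quoted value)

def fall_time(r, x):
    """Time to fall radially from rest at distance r down to distance x (centre-to-centre)."""
    q = x / r
    return math.sqrt(r ** 3 / (2.0 * mu)) * (math.acos(math.sqrt(q))
                                             + math.sqrt(q * (1.0 - q)))

r_geo, r_surface = 42164e3, 6.371e6
print("fall from geosynchronous distance to the surface: %.2f hours"
      % (fall_time(r_geo, r_surface) / 3600.0))
print("sanity check, x = r: %.1f s" % fall_time(r_geo, r_geo))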
Acceleration relative to the rotating Earth
Centripetal force causes the acceleration measured on the rotating surface of the Earth to differ from the acceleration that is measured for a free-falling body: the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north–south axis of the Earth, corresponding to staying stationary in that frame of reference.
| Physical sciences | Classical mechanics | Physics |
385846 | https://en.wikipedia.org/wiki/Kaon | Kaon | In particle physics, a kaon, also called a K meson and denoted , is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark).
Kaons have proved to be a copious source of information on the nature of fundamental interactions since their discovery by George Rochester and Clifford Butler at the Department of Physics and Astronomy, University of Manchester in cosmic rays in 1947. They were essential in establishing the foundations of the Standard Model of particle physics, such as the quark model of hadrons and the theory of quark mixing (the latter was acknowledged by a Nobel Prize in Physics in 2008). Kaons have played a distinguished role in our understanding of fundamental conservation laws: CP violation, a phenomenon generating the observed matter–antimatter asymmetry of the universe, was discovered in the kaon system in 1964 (which was acknowledged by a Nobel Prize in 1980). Moreover, direct CP violation was discovered in the kaon decays in the early 2000s by the NA48 experiment at CERN and the KTeV experiment at Fermilab.
Basic properties
The four kaons are:
K−, negatively charged (containing a strange quark and an up antiquark), has a mass of about 493.7 MeV/c2 and a mean lifetime of about 1.24 × 10^-8 s.
K+ (antiparticle of above), positively charged (containing an up quark and a strange antiquark), must (by CPT invariance) have mass and lifetime equal to those of the K−. Experimentally, the measured mass difference and lifetime difference are both consistent with zero.
K0, neutrally charged (containing a down quark and a strange antiquark), has a mass of about 497.6 MeV/c2 and a nonzero mean squared charge radius.
The neutral antikaon (antiparticle of the above, containing a strange quark and a down antiquark) has the same mass.
As the quark model assignments show, the kaons form two doublets of isospin; that is, they belong to the fundamental representation of SU(2) called the 2. One doublet of strangeness +1 contains the K+ and the K0. The antiparticles form the other doublet (of strangeness −1).
| Physical sciences | Bosons | Physics |
386596 | https://en.wikipedia.org/wiki/Mesa | Mesa | A mesa is an isolated, flat-topped elevation, ridge or hill, which is bounded from all sides by steep escarpments and stands distinctly above a surrounding plain. Mesas characteristically consist of flat-lying soft sedimentary rocks capped by a more resistant layer or layers of harder rock, e.g. shales overlain by sandstones. The resistant layer acts as a caprock that forms the flat summit of a mesa. The caprock can consist of either sedimentary rocks such as sandstone and limestone; dissected lava flows; or a deeply eroded duricrust. Unlike plateau, whose usage does not imply horizontal layers of bedrock, e.g. Tibetan Plateau, the term mesa applies exclusively to the landforms built of flat-lying strata. Instead, flat-topped plateaus are specifically known as tablelands (Duszyński, Migoń and Strzelecki, 2019, Earth-Science Reviews; Neuendorf, Mehl and Jackson, 2011, Glossary of Geology, 5th ed.).
Names, definition and etymology
As noted by geologist Kirk Bryan in 1922, mesas "...stand distinctly above the surrounding country, as a table stands above the floor upon which it rests". It is from this appearance that the term mesa was adopted from the Spanish word mesa, meaning "table".
A mesa is similar to, but has a more extensive summit area than a butte. There is no agreed size limit that separates mesas from either buttes or plateaus. For example, the flat-topped mountains which are known as mesas in the Cockburn Range of North Western Australia have areas as large as . In contrast, flat topped hills with areas as small as in the Elbe Sandstone Mountains, Germany, are described as mesas.
Less strictly, a very broad, flat-topped, usually isolated hill or mountain of moderate height, bounded on at least one side by a steep cliff or slope and representing an erosion remnant, has also been called a mesa.
In the English-language geomorphic and geologic literature, other terms for mesa have also been used. For example, in the Roraima region of Venezuela, the traditional name tepui, from the local Pemón language, and the term table mountains have been used to describe local flat-topped mountains (Doerr, 1999, Zeitschrift für Geomorphologie). Similar landforms in Australia are known as tablehills, table-top hills, tent hills, or jump ups (jump-ups). The German term Tafelberg has also been used in the English scientific literature in the past.
Formation
Mesas form by weathering and erosion of horizontally layered rocks that have been uplifted by tectonic activity. Variations in the ability of different types of rock to resist weathering and erosion cause the weaker types of rocks to be eroded away, leaving the more resistant types of rocks topographically higher than their surroundings. This process is called differential erosion. The most resistant rock types include sandstone, conglomerate, quartzite, basalt, chert, limestone, lava flows and sills. Lava flows and sills, in particular, are very resistant to weathering and erosion, and often form the flat top, or caprock, of a mesa. The less resistant rock layers are mainly made up of shale, a softer rock that weathers and erodes more easily.
The differences in strength of various rock layers are what give mesas their distinctive shape. Less resistant rocks are eroded away on the surface into valleys, where they collect water drainage from the surrounding area, while the more resistant layers are left standing out. A large area of very resistant rock, such as a sill, may shield the layers below it from erosion while the softer rock surrounding it is eroded into valleys, thus forming a caprock.
Differences in rock type also reflect on the sides of a mesa, as instead of smooth slopes, the sides are broken into a staircase pattern called "cliff-and-bench topography". The more resistant layers form the cliffs, or stairsteps, while the less resistant layers form gentle slopes, or benches, between the cliffs. Cliffs retreat and are eventually cut off from the main cliff, or plateau, by basal sapping. When the cliff edge does not retreat uniformly but instead is indented by headward eroding streams, a section can be cut off from the main cliff, forming a mesa.
Basal sapping occurs as water flowing around the rock layers of the mesa erodes the underlying soft shale layers, either as surface runoff from the mesa top or from groundwater moving through permeable overlying layers, which leads to slumping and flowage of the shale. As the underlying shale erodes away, it can no longer support the overlying cliff layers, which collapse and retreat. When the caprock has caved away to the point where only little remains, it is known as a butte.
Examples and locations
Australia
Cockburn Range, Western Australia
Mount Conner, Northern Territory
Czechia
Děčínský Sněžník, Ústí nad Labem Region
France
Mont Aiguille, Auvergne-Rhône-Alpes
Germany
Königstein, Saxony
Lilienstein, Saxony
Papststein, Saxony
Pfaffenstein, Saxony
Quirl, Saxony
India
Several near Owk mandal, Andhra Pradesh
Iraq
Amadiya, Kurdistan Region
Ireland
Kings Mountain, County Sligo
Knocknarea, County Sligo
Knocknashee, County Sligo
Israel
Masada, Southern District
Har Qatum
Italy
Monte Santo, Sardinia
Poland
Szczeliniec Wielki, Lower Silesian Voivodeship
United Kingdom
England
Castle Folds, Cumbria
Cross Fell, Cumbria
Goldsborough Carr, County Durham
Higger Tor, South Yorkshire
Ingleborough, North Yorkshire
Pen-y-ghent, North Yorkshire
Shacklesborough, County Durham
Scotland
Healabhal Mhòr, Isle of Skye
United States
Many but not all American mesas lie within the Basin and Range Province.
Arizona
Anderson Mesa
Black Mesa
Black Mesa
Black Mesa
Black Mountain
Cummings Mesa
First Mesa
Horseshoe Mesa
Indian Mesa
Second Mesa
Arkansas
Mount Magazine
California
Redonda Mesa
Colorado
Battlement Mesa
Grand Mesa - largest flat-topped mountain in the world.
Green Mountain
Log Hill Mesa
North Table Mountain
Raton Mesa
Nevada
Mormon Mesa
Pahute Mesa
Oklahoma
Black Mesa
Mesa de Maya
Texas
Floating Mesa
Llano Estacado
Utah
Checkerboard Mesa
Crazy Quilt Mesa
Hurricane Mesa
Sams Mesa
Smith Mesa
South Caineville Mesa
Thompson Mesa
Wildcat Mesa
Wingate Mesa
Wisconsin
Gibraltar Rock
Grandad Bluff
Mile Bluff
Quincy Bluff
Rattlesnake Mound
On Mars
A transitional zone on Mars, known as fretted terrain, lies between highly cratered highlands and less cratered lowlands. The younger lowland exhibits steep-walled mesas and knobs. The mesas and knobs are separated by flat-lying lowlands. They are thought to form from ice-facilitated mass-wasting processes fed from ground or atmospheric sources. The mesas and knobs decrease in size with increasing distance from the highland escarpment, and their relief varies depending on their distance from the escarpment.
| Physical sciences | Landforms | null |
386748 | https://en.wikipedia.org/wiki/Calcium%20hydroxide | Calcium hydroxide | Calcium hydroxide (traditionally called slaked lime) is an inorganic compound with the chemical formula Ca(OH)2. It is a colorless crystal or white powder and is produced when quicklime (calcium oxide) is mixed with water. Annually, approximately 125 million tons of calcium hydroxide are produced worldwide.
Calcium hydroxide has many names including hydrated lime, caustic lime, builders' lime, slaked lime, cal, and pickling lime. Calcium hydroxide is used in many applications, including food preparation, where it has been identified as E number E526. Limewater, also called milk of lime, is the common name for a saturated solution of calcium hydroxide.
Solubility
Calcium hydroxide is modestly soluble in water, as is seen for many dihydroxides. Its solubility decreases with increasing temperature, from 1.89 g/L at 0 °C to 0.66 g/L at 100 °C. Its solubility product Ksp is about 5.02 × 10^-6 at 25 °C; its dissociation in water is large enough that its solutions are basic, according to the following dissolution reaction:
Ca(OH)2 → Ca2+ + 2 OH−
The solubility is affected by the common-ion effect. Its solubility drastically decreases upon addition of hydroxide or calcium sources.
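As a rough worked example of how basic a saturated solution is, the Python lines below estimate the pH of limewater from the solubility product quoted above, assuming ideal behaviour (unit activity coefficients), complete dissociation and Kw = 1.0 × 10^-14 at 25 °C.

import math

Ksp = 5.02e-6   # solubility product of Ca(OH)2 at 25 C, as quoted above

# For Ca(OH)2 -> Ca2+ + 2 OH-, a molar solubility s gives Ksp = s * (2s)^2 = 4 s^3.
s = (Ksp / 4.0) ** (1.0 / 3.0)   # molar solubility, mol/L
oh = 2.0 * s                     # hydroxide concentration, mol/L
pOH = -math.log10(oh)
pH = 14.0 - pOH                  # assumes Kw = 1e-14 at 25 C
print("solubility ~ %.4f mol/L, [OH-] ~ %.4f mol/L, pH ~ %.1f" % (s, oh, pH))

The result, close to pH 12.3, is consistent with the value of about 12.4 quoted for limewater later in this article.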
Reactions
When heated to 512 °C, the partial pressure of water in equilibrium with calcium hydroxide reaches 101 kPa (normal atmospheric pressure), and calcium hydroxide decomposes into calcium oxide and water:
Ca(OH)2 → CaO + H2O
When carbon dioxide is passed through limewater, the solution takes on a milky appearance due to precipitation of insoluble calcium carbonate:
Ca(OH)2 + CO2 → CaCO3 + H2O
If excess CO2 is added, the following reaction takes place:
CaCO3 + H2O + CO2 → Ca(HCO3)2
The milkiness disappears since calcium bicarbonate is water-soluble.
Calcium hydroxide reacts with aluminium. This reaction is the basis of aerated concrete. It does not corrode iron and steel, owing to passivation of their surface.
Calcium hydroxide reacts with hydrochloric acid to give calcium hydroxychloride and then calcium chloride.
In a process called sulfation, sulphur dioxide reacts with limewater:
Ca(OH)2 + SO2 → CaSO3 + H2O
Limewater is used in a process known as lime softening to reduce water hardness. It is also used as a neutralizing agent in municipal waste water treatment.
Structure and preparation
Calcium hydroxide adopts a polymeric structure, as do all metal hydroxides. The structure is identical to that of Mg(OH)2 (the brucite structure); i.e., the cadmium iodide motif. Strong hydrogen bonds exist between the layers.
Calcium hydroxide is produced commercially by treating (slaking) quicklime with water:
CaO + H2O → Ca(OH)2
Alongside the production of quicklime from limestone by calcination, this is one of the oldest known chemical reactions; evidence of prehistoric production dates back to at least 7000 BCE.
Uses
Calcium hydroxide is commonly used to prepare lime mortar.
One significant application of calcium hydroxide is as a flocculant, in water and sewage treatment. It forms a fluffy charged solid that aids in the removal of smaller particles from water, resulting in a clearer product. This application is enabled by the low cost and low toxicity of calcium hydroxide. It is also used in fresh-water treatment for raising the pH of the water so that pipes will not corrode where the base water is acidic, because it is self-regulating and does not raise the pH too much.
Another large application is in the paper industry, where it is an intermediate in the reaction in the production of sodium hydroxide. This conversion is part of the causticizing step in the Kraft process for making pulp. In the causticizing operation, burned lime is added to green liquor, which is a solution primarily of sodium carbonate and sodium sulfate produced by dissolving smelt, which is the molten form of these chemicals from the recovery furnace.
In orchard crops, calcium hydroxide is used as a fungicide. Applications of 'lime water' prevent the development of cankers caused by the fungal pathogen Neonectria galligena. The trees are sprayed when they are dormant in winter to prevent toxic burns from the highly reactive calcium hydroxide. This use is authorised in the European Union and the United Kingdom under Basic Substance regulations.
Calcium hydroxide is used in dentistry, primarily in the specialty of endodontics.
Food industry
Because of its low toxicity and the mildness of its basic properties, slaked lime is widely used in the food industry,
In USDA certified food production in plants and livestock
To clarify raw juice from sugarcane or sugar beets in the sugar industry (see carbonatation)
To process water for alcoholic beverages and soft drinks
To increase the rate of Maillard reactions (pretzels)
Pickle cucumbers and other foods
To make Chinese century eggs
In maize preparation: removes the cellulose hull of maize kernels (see nixtamalization)
To clear a brine of carbonates of calcium and magnesium in the manufacture of salt for food and pharmaceutical uses
In fortifying (Ca supplement) fruit drinks, such as orange juice, and infant formula
As a substitute for baking soda in making papadam
In the removal of carbon dioxide from controlled atmosphere produce storage rooms
In the preparation of mushroom growing substrates
Native American uses
In Nahuatl, the language of the Aztecs, the word for calcium hydroxide is nextli. In a process called nixtamalization, maize is cooked with nextli to become nixtamal, also known as hominy. Nixtamalization significantly increases the bioavailability of niacin (vitamin B3), and is also considered tastier and easier to digest. Nixtamal is often ground into a flour, known as masa, which is used to make tortillas and tamales.
Limewater is used in the preparation of maize for corn tortillas and other culinary purposes using a process known as nixtamalization. Nixtamalization makes the niacin nutritionally available and prevents pellagra. Traditionally lime water was used in Taiwan and China to preserve persimmon and to remove astringency.
In chewing coca leaves, calcium hydroxide is usually chewed alongside to keep the alkaloid stimulants chemically available for absorption by the body. Similarly, Native Americans traditionally chewed tobacco leaves with calcium hydroxide derived from burnt mollusc shells to enhance the effects. It has also been used by some indigenous South American tribes as an ingredient in yopo, a psychedelic snuff prepared from the beans of some Anadenanthera species.
Asian uses
Calcium hydroxide is typically added to a bundle of areca nut and betel leaf called "paan" to keep the alkaloid stimulants chemically available to enter the bloodstream via sublingual absorption.
It is used in making naswar (also known as nass or niswar), a type of dipping tobacco made from fresh tobacco leaves, calcium hydroxide (chuna/choona or soon), and wood ash. It is consumed most in the Pathan diaspora, Afghanistan, Pakistan, India and Bangladesh. Villagers also use calcium hydroxide to paint their mud houses in Afghanistan, Pakistan and India.
Hobby uses
In buon fresco painting, limewater is used as the colour solvent to apply on fresh plaster. Historically, it is known as the paint whitewash.
Limewater is widely used by marine aquarists as a primary supplement of calcium and alkalinity for reef aquariums. Corals of order Scleractinia build their endoskeletons from aragonite (a polymorph of calcium carbonate). When used for this purpose, limewater is usually referred to as Kalkwasser. It is also used in tanning and making parchment. The lime is used as a dehairing agent based on its alkaline properties.
Personal care and adornment
Treating one's hair with limewater causes it to stiffen and bleach, with the added benefit of killing any lice or mites living there. Diodorus Siculus described the Celts as follows:
"Their aspect is terrifying... They are very tall in stature, with rippling muscles under clear white skin. Their hair is blond, but not only naturally so: they bleach it, to this day, artificially, washing it in lime and combing it back from their foreheads. They look like wood-demons, their hair thick and shaggy like a horse's mane. Some of them are clean-shaven, but others – especially those of high rank, shave their cheeks but leave a moustache that covers the whole mouth...".
Calcium hydroxide is also applied in a leather process called liming.
Interstellar medium
The ion CaOH+ has been detected in the atmosphere of S-type stars.
Limewater
Limewater is a saturated aqueous solution of calcium hydroxide. Calcium hydroxide is sparingly soluble at room temperature in water (1.5 g/L at 25 °C). "Pure" (i.e. less than or fully saturated) limewater is clear and colorless, with a slight earthy smell and an astringent/bitter taste. It is basic in nature with a pH of 12.4. Limewater is named after limestone, not the lime fruit. Limewater may be prepared by mixing calcium hydroxide (Ca(OH)2) with water and removing excess undissolved solute (e.g. by filtration). When excess calcium hydroxide is added (or when environmental conditions are altered, e.g. when its temperature is raised sufficiently), there results a milky solution due to the homogeneous suspension of excess calcium hydroxide. This liquid has been known traditionally as milk of lime.
Health risks
Unprotected exposure to Ca(OH)2, as with any strong base, can cause skin burns, but it is not acutely toxic.
| Physical sciences | Hydroxy anion | Chemistry |
387038 | https://en.wikipedia.org/wiki/Interdictor | Interdictor | An interdictor is a type of attack aircraft or tactical bomber that operates far behind enemy lines, with the express intent of air interdiction of the enemy's military targets, most notably those involved in logistics.
Interdiction
Interdiction prevents or delays enemy forces and supplies from reaching the battlefront; the term has generally fallen from use. The strike fighter is a closely related concept, but puts more emphasis on air-to-air combat capabilities as a multirole combat aircraft. Larger versions of the interdictor concept are generally referred to as "penetrators".
Operation
In the post-war era, the RAF introduced interdictor variants of their English Electric Canberra jet bomber as aircraft were released from the strategic bombing role by the new V bombers. Desiring a more modern aircraft for this role, the RAF began development of the BAC TSR-2 (from "Tactical Strike and Reconnaissance, Mach 2"), but this program was later cancelled. The US began development of a similar aircraft around the same time, which emerged as the General Dynamics F-111. The failure of the TSR-2, and a desire by other European nations for a similar design, albeit one operating over shorter ranges in the European theatre, led to the Multi Role Combat Aircraft (MRCA) program, which was realised as the Panavia Tornado Interdictor/Strike (IDS). The Soviet Sukhoi Su-24 emerged in the early 1970s.
In order to safely traverse a heavily defended front line, they flew at very low altitudes (in some cases having to pull up to clear power lines) to use terrain masking to protect them from enemy radar-guided weapons. Flying at low altitude also demands much greater fuel use, and thus interdictor aircraft were generally fairly large.
List of interdictor aircraft
- did not enter service
| Technology | Military aviation | null |
387063 | https://en.wikipedia.org/wiki/Dwarf%20elliptical%20galaxy | Dwarf elliptical galaxy | Dwarf elliptical galaxies (dEs) are elliptical galaxies that are smaller than ordinary elliptical galaxies. They are quite common in galaxy groups and clusters, and are usually companions to other galaxies.
Examples
"Dwarf elliptical" galaxies should not be confused with the rare "compact elliptical" galaxy class, of which M32, a satellite of the Andromeda Galaxy, is the prototype.
In 1944 Walter Baade confirmed dwarf ellipticals NGC 147 and NGC 185 as members of the Local Group by resolving them into individual stars, which was possible thanks to their relatively small distance. In the 1950s, dEs were also discovered in the nearby Fornax and Virgo clusters.
Relation to other elliptical galaxy types
Dwarf elliptical galaxies have blue absolute magnitudes fainter than those of ordinary elliptical galaxies.
The surface brightness profiles of ordinary elliptical galaxies were formerly approximated using de Vaucouleurs' model, while dEs were approximated with an exponentially declining surface brightness profile. However, both types are well fitted by a more general function, known as Sérsic's model, and there is a continuity of Sérsic index (which quantifies the shape of the surface brightness profile) as a function of galaxy luminosity. This is interpreted as showing that dwarf elliptical and ordinary elliptical galaxies belong to a single sequence.
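To make the comparison concrete, here is a minimal Python sketch of the Sérsic profile, I(R) = I_e exp(−b_n [(R/R_e)^(1/n) − 1]), in which n = 1 gives the exponential profile and n = 4 gives de Vaucouleurs' model; the simple approximation b_n ≈ 2n − 1/3 and the sample radii are assumptions made only for illustration.

import math

def sersic(R, Re=1.0, Ie=1.0, n=1.0):
    """Sersic surface brightness at radius R (b_n uses the rough 2n - 1/3 approximation)."""
    b_n = 2.0 * n - 1.0 / 3.0
    return Ie * math.exp(-b_n * ((R / Re) ** (1.0 / n) - 1.0))

for n, label in ((1.0, "exponential (dE-like)"), (4.0, "de Vaucouleurs (ordinary elliptical)")):
    values = ["%.3f" % sersic(R, n=n) for R in (0.5, 1.0, 2.0)]
    print(label, values)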
An even-fainter type of elliptical-like galaxies, called dwarf spheroidal galaxies, may be a genuinely distinct class.
Origins
Dwarf ellipticals may be primordial objects. Within the currently favoured cosmological Lambda-CDM model, small objects (consisting of dark matter and gas) were the first to form. Because of their mutual gravitational attraction, some of these will coalesce and merge, forming more massive objects. Further mergers lead to ever more massive objects.
The process of coalescence could lead to the present-day galaxies, and has been called "hierarchical merging". If this hypothesis is correct, dwarf galaxies may be the building blocks of today's large spiral galaxies, which in turn are thought to merge to form giant ellipticals.
An alternative suggestion is that dEs could be the remnants of low-mass spiral galaxies that obtained a rounder shape through the action of repeated gravitational interactions with ordinary galaxies within a cluster. This process of changing a galaxy's morphology by interactions, and the removal of much of its stellar disk, has been called "galaxy harassment". Claimed evidence for this latter hypothesis includes the stellar disks and weak spiral arms seen in some dEs. Under this alternative hypothesis, the anaemic spiral arms and disk are a modified version of the original stellar disk of the now transformed spiral galaxy.
At the same time, the galaxy harassment scenario cannot be the full picture. The highly isolated dwarf elliptical galaxy CG 611 possesses the same physical attributes as dE galaxies in clusters – such as coherent rotation and faint spiral arms – attributes that were previously assumed to provide evidence that dE galaxies were once spiral galaxies transformed by immersion within a cluster of galaxies. CG 611 has a gas disk which counter-rotates relative to its stellar disk, clearly revealing that this dE galaxy's disk is growing via accretion events. If CG 611 were to fall into a galaxy cluster, ram-pressure stripping by the cluster's halo of hot X-ray gas would strip away CG 611's gas disk and leave a gas-poor dE galaxy that immediately resembles the other dEs in the cluster. That is, no removal of stars nor re-shaping of the galaxy within the dense galaxy cluster environment would be required, undermining the idea that dE galaxies were once spiral galaxies.
| Physical sciences | Galaxy classification | Astronomy |
387369 | https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene%20Thermal%20Maximum | Paleocene–Eocene Thermal Maximum | The Paleocene–Eocene thermal maximum (PETM), alternatively "Eocene thermal maximum 1 (ETM1)" and formerly known as the "Initial Eocene" or "Late Paleocene thermal maximum", was a geologically brief time interval characterized by a 5–8 °C global average temperature rise and a massive input of carbon into the ocean and atmosphere. The onset of the event is now formally codified as the precise time boundary between the Paleocene and Eocene geological epochs. The exact age and duration of the PETM remain uncertain, but it occurred around 55.8 million years ago (Ma) and lasted about 200 thousand years (ka).
The PETM arguably represents our best past analogue for understanding how global warming and the carbon cycle operate in a greenhouse world. The time interval is marked by a prominent negative excursion in carbon stable isotope (δ13C) records from around the globe; more specifically, a large decrease in the 13C/12C ratio of marine and terrestrial carbonates and organic carbon has been found and correlated across hundreds of locations. The magnitude and timing of the PETM δ13C excursion, which attest to the massive past carbon release to our ocean and atmosphere, and the source of this carbon remain topics of considerable current geoscience research.
What has become clear over the last few decades is that stratigraphic sections across the PETM reveal numerous changes beyond warming and carbon emission. Consistent with an epoch boundary, fossil records of many organisms show major turnovers. In the marine realm, a mass extinction of benthic foraminifera, a global expansion of subtropical dinoflagellates, and an appearance of excursion taxa, including within the planktic foraminifera and calcareous nannofossils, all occurred during the beginning stages of the PETM. On land, many modern mammal orders (including primates) suddenly appear in Europe and in North America.
Setting
The configuration of oceans and continents was somewhat different during the early Paleogene relative to the present day. The Panama Isthmus did not yet connect North America and South America, and this allowed direct low-latitude circulation between the Pacific and Atlantic Oceans. The Drake Passage, which now separates South America and Antarctica, was closed, and this perhaps prevented thermal isolation of Antarctica. The Arctic was also more restricted. Although various proxies for past atmospheric CO2 concentrations across the Cenozoic do not agree in absolute terms, all suggest that CO2 levels in the early Paleogene, before and after the PETM, were much higher than at present. In any case, significant terrestrial ice sheets and sea ice did not exist during the late Paleocene through early Eocene.
Earth surface temperatures gradually increased by about 6 °C from the late Paleocene through the early Eocene. Superimposed on this long-term, gradual warming were at least three (and probably more) "hyperthermals". These can be defined as geologically brief (<200,000 year) events characterized by rapid global warming, major changes in the environment, and massive carbon addition. Though not the first within the Cenozoic, the PETM was the most extreme hyperthermal, and stands out as a major change in the lithologic, biotic and geochemical composition of sediment in hundreds of records across Earth. Other hyperthermals clearly occurred at approximately 53.7 Ma (now called ETM-2 and also referred to as H-1, or the Elmo event) and at about 53.6 Ma (H-2), 53.3 (I-1), 53.2 (I-2) and 52.8 Ma (informally called K, X or ETM-3). The number, nomenclature, absolute ages, and relative global impact of the Eocene hyperthermals remain a source of current research. Whether they only occurred during the long-term warming, and whether they are causally related to apparently similar events in older intervals of the geological record (e.g. the Toarcian turnover of the Jurassic) are open issues.
Global warming
A study in 2020 estimated the global mean surface temperature (GMST), with 66% confidence, for the latest Paleocene (c. 57 Ma), the PETM (56 Ma), and the Early Eocene Climatic Optimum (EECO) (53.3 to 49.1 Ma). Estimates of the amount of average global temperature rise at the start of the PETM range from approximately 3 to 6 °C to between 5 and 8 °C. This warming was superimposed on "long-term" early Paleogene warming, and is based on several lines of evidence. There is a prominent (>1‰) negative excursion in the δ18O of foraminifera shells, both those made in surface and deep ocean water. Because there was little or no polar ice in the early Paleogene, the shift in δ18O very probably signifies a rise in ocean temperature. The temperature rise is also supported by the spread of warmth-loving taxa to higher latitudes, changes in plant leaf shape and size, the Mg/Ca ratios of foraminifera, and the ratios of certain organic compounds, such as TEXH86.
Proxy data from Esplugafereda in northeastern Spain shows a rapid +8 °C temperature rise, in accordance with existing regional records of marine and terrestrial environments. Southern California had a mean annual temperature of about 17 °C ± 4.4 °C. In Antarctica, at least part of the year saw minimum temperatures of 15 °C.
TEXH86 values indicate that average sea surface temperatures (SSTs) in the tropics during the PETM rose high enough to cause heat stress even in organisms resistant to extreme thermal stress, such as dinoflagellates, of which a significant number of species went extinct. Oxygen isotope ratios from Tanzania suggest that tropical SSTs may have been even higher, exceeding 40 °C. Ocean Drilling Program Site 1209 from the tropical western Pacific shows an increase in SST from 34 °C before the PETM to ~40 °C. In the eastern Tethys, SSTs rose by 3 to 5 °C. Low latitude Indian Ocean Mg/Ca records show seawater at all depths warmed by about 4-5 °C. In the Pacific Ocean, tropical SSTs increased by about 4-5 °C. TEXL86 values from deposits in New Zealand, then located between 50°S and 60°S in the southwestern Pacific, indicate a substantial rise in SST relative to the average at the boundary between the Selandian and Thanetian. The extreme warmth of the southwestern Pacific extended into the Australo-Antarctic Gulf. Sediment core samples from the East Tasman Plateau, then located at a palaeolatitude of ~65 °S, show an increase in SSTs from ~26 °C to ~33 °C during the PETM. In the North Sea, SSTs jumped by 10 °C, reaching highs of ~33 °C, while in the West Siberian Sea, SSTs climbed to ~27 °C.
Certainly, the central Arctic Ocean was ice-free before, during, and after the PETM. This can be ascertained from the composition of sediment cores recovered during the Arctic Coring Expedition (ACEX) at 87°N on Lomonosov Ridge. Moreover, temperatures increased during the PETM, as indicated by the brief presence of subtropical dinoflagellates (Apectodinium spp.) and a marked increase in TEX86. The latter record is intriguing, though, because it suggests a rise of about 6 °C (11 °F) over the course of the PETM. Assuming the TEX86 record reflects summer temperatures, it still implies much warmer temperatures at the North Pole compared to the present day, but no significant latitudinal amplification relative to surrounding time.
The above considerations are important because, in many global warming simulations, high-latitude temperatures increase much more than the global average through an ice–albedo feedback. It may be the case, however, that during the PETM, this feedback was largely absent because of limited polar ice, so temperatures on the Equator and at the poles increased similarly. Notably, greater warming in polar regions compared to other regions has not been documented. This implies the absence of an ice–albedo feedback, suggesting that no sea or land ice was present in the late Paleocene.
Precise limits on the global temperature rise during the PETM and whether this varied significantly with latitude remain open issues. Oxygen isotope and Mg/Ca of carbonate shells precipitated in surface waters of the ocean are commonly used measurements for reconstructing past temperature; however, both paleotemperature proxies can be compromised at low latitude locations, because re-crystallization of carbonate on the seafloor renders lower values than when formed. On the other hand, these and other temperature proxies (e.g., TEX86) are impacted at high latitudes because of seasonality; that is, the "temperature recorder" is biased toward summer, and therefore higher values, when the production of carbonate and organic carbon occurred.
Carbon cycle disturbance
Clear evidence for massive addition of 13C-depleted carbon at the onset of the PETM comes from two observations. First, a prominent negative excursion in the carbon isotope composition (δ13C) of carbon-bearing phases characterizes the PETM in numerous (>130) widespread locations from a range of environments. Second, carbonate dissolution marks the PETM in sections from the deep sea.
The total mass of carbon injected into the ocean and atmosphere during the PETM remains the source of debate. In theory, it can be estimated from the magnitude of the negative carbon isotope excursion (CIE), the amount of carbonate dissolution on the seafloor, or ideally both. However, the shift in δ13C across the PETM depends on the location and the carbon-bearing phase analyzed. In some records of bulk carbonate, it is about 2‰ (per mil); in some records of terrestrial carbonate or organic matter it exceeds 6‰. Carbonate dissolution also varies throughout different ocean basins. It was extreme in parts of the north and central Atlantic Ocean, but far less pronounced in the Pacific Ocean. With available information, estimates of the carbon addition range from about 2,000 to 7,000 gigatons.
Timing of carbon addition and warming
The timing of the PETM δ13C excursion is of considerable interest. This is because the total duration of the CIE, from the rapid drop in δ13C through the near recovery to initial conditions, relates to key parameters of our global carbon cycle, and because the onset provides insight into the source of the 13C-depleted carbon.
The total duration of the CIE can be estimated in several ways. The iconic sediment interval for examining and dating the PETM is a core recovered in 1987 by the Ocean Drilling Program at Hole 690B at Maud Rise in the South Atlantic Ocean. At this location, the PETM CIE, from start to end, spans about 2 m. Long-term age constraints, through biostratigraphy and magnetostratigraphy, suggest an average Paleogene sedimentation rate of about 1.23 cm/1,000 yrs. Assuming a constant sedimentation rate, the entire event, from onset through termination, was therefore estimated at about 200,000 years. Subsequently, it was noted that the CIE spanned 10 or 11 subtle cycles in various sediment properties, such as Fe content. Assuming these cycles represent precession, a similar but slightly longer duration was calculated by Rohl et al. 2000. If a massive amount of 13C-depleted carbon is rapidly injected into the modern ocean or atmosphere and projected into the future, a ~200,000 year CIE results because of slow flushing through quasi steady-state inputs (weathering and volcanism) and outputs (carbonate and organic) of carbon. A different study, based on a revised orbital chronology and data from sediment cores in the South Atlantic and the Southern Ocean, calculated a slightly shorter duration of about 170,000 years.
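As a rough check on the figures quoted above, the duration implied by a constant sedimentation rate is simply the thickness of the CIE interval divided by that rate. The sketch below only reproduces that arithmetic with the numbers quoted in the text; the cycle-counting and orbital-chronology refinements are not modelled.

```python
# Back-of-the-envelope duration estimate for the PETM CIE at ODP Hole 690B,
# assuming a constant sedimentation rate (values as quoted in the text above).
cie_thickness_cm = 200.0        # the CIE spans about 2 m of section
sed_rate_cm_per_kyr = 1.23      # average Paleogene sedimentation rate

duration_kyr = cie_thickness_cm / sed_rate_cm_per_kyr
print(f"Implied CIE duration: ~{duration_kyr:.0f} kyr")
# ~163 kyr, of the same order as the ~170,000-200,000 year estimates discussed above.
```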
A ~200,000 year duration for the CIE is estimated from models of global carbon cycling.
Age constraints at several deep-sea sites have been independently examined using 3He contents, assuming the flux of this cosmogenic nuclide is roughly constant over short time periods. This approach also suggests a rapid onset for the PETM CIE (<20,000 years). However, the 3He records support a faster recovery to near initial conditions (<100,000 years) than predicted by flushing via weathering inputs and carbonate and organic outputs.
There is other evidence to suggest that warming predated the excursion by some 3,000 years.
Some authors have suggested that the magnitude of the CIE may be underestimated due to local processes in many sites causing a large proportion of allochthonous sediments to accumulate in their sedimentary rocks, contaminating and offsetting isotopic values derived from them. Organic matter degradation by microbes has also been implicated as a source of skewing of carbon isotopic ratios in bulk organic matter.
Effects
Precipitation
The climate would also have become much wetter, with the increase in evaporation rates peaking in the tropics. Deuterium isotopes reveal that much more of this moisture was transported polewards than normal. Warm weather would have predominated as far north as the Polar basin. Finds of fossils of Azolla floating ferns in polar regions indicate subtropical temperatures at the poles. Central China during the PETM hosted dense subtropical forests as a result of the significant increase in rates of precipitation in the region, with average temperatures between 21 °C and 24 °C and mean annual precipitation ranging from 1,396 to 1,997 mm. Similarly, Central Asia became wetter as proto-monsoonal rainfall penetrated farther inland. Very high precipitation is also evidenced in the Cambay Shale Formation of India by the deposition of thick lignitic seams as a consequence of increased soil erosion and organic matter burial. Precipitation rates in the North Sea likewise soared during the PETM. In Cap d'Ailly, in present-day Normandy, a transient dry spell occurred just before the negative CIE, after which much moister conditions predominated, with the local environment transitioning from a closed marsh to an open, eutrophic swamp with frequent algal blooms. Precipitation patterns became highly unstable along the New Jersey Shelf. In the Rocky Mountain Interior, however, precipitation declined locally as the interior of North America became more seasonally arid. Along the central California coast, conditions also became drier overall, although precipitation did increase in the summer months. The drying of western North America is explained by the northward shift of low-level jets and atmospheric rivers. East African sites display evidence of aridity punctuated by seasonal episodes of potent precipitation, revealing the global climate during the PETM not to be universally humid. The proto-Mediterranean coastlines of the western Tethys became drier. Evidence from Forada in northeastern Italy suggests that arid and humid climatic intervals alternated over the course of the PETM concomitantly with precessional cycles in mid-latitudes, and that overall, net precipitation over the central-western Tethys Ocean decreased.
Ocean
The amount of freshwater in the Arctic Ocean increased, in part due to Northern Hemisphere rainfall patterns, fueled by poleward storm track migrations under global warming conditions. The flux of freshwater entering the oceans increased drastically during the PETM, and continued for a time after the PETM's termination.
Anoxia
The PETM generated the only oceanic anoxic event (OAE) of the Cenozoic. Oxygen depletion was achieved through a combination of elevated seawater temperatures, water column stratification, and oxidation of methane released from undersea clathrates. In parts of the oceans, especially the North Atlantic Ocean, bioturbation was absent. This may be due to bottom-water anoxia or to changes in ocean circulation patterns altering the temperatures of the bottom water. However, many ocean basins remained bioturbated through the PETM. Iodine to calcium ratios suggest oxygen minimum zones in the oceans expanded vertically and possibly also laterally. Water column anoxia and euxinia were most prevalent in restricted oceanic basins, such as the Arctic and Tethys Oceans. Euxinia struck the epicontinental North Sea Basin as well, as shown by increases in sedimentary uranium, molybdenum, sulphur, and pyrite concentrations, along with the presence of sulphur-bound isorenieratane. The Gulf Coastal Plain was also affected by euxinia. The Atlantic Coastal Plain, well oxygenated during the Late Palaeocene, became highly dysoxic during the PETM. The tropical surface oceans, in contrast, remained oxygenated over the course of the hyperthermal event.
It is possible that during the PETM's early stages, anoxia helped to slow down warming through carbon drawdown via organic matter burial. A pronounced negative lithium isotope excursion in both marine carbonates and local weathering inputs suggests that weathering and erosion rates increased during the PETM, generating an increase in organic carbon burial, which acted as a negative feedback on the PETM's severe global warming.
Sea level
Along with the global lack of ice, the sea level would have risen due to thermal expansion. Evidence for this can be found in the shifting palynomorph assemblages of the Arctic Ocean, which reflect a relative decrease in terrestrial organic material compared to marine organic matter. A significant marine transgression took place in the Indian Subcontinent. In the Tarim Sea, sea levels rose by 20-50 metres.
Currents
At the start of the PETM, the ocean circulation patterns changed radically in the course of under 5,000 years. Global-scale current directions reversed due to a shift in overturning from the Southern Hemisphere to Northern Hemisphere. This "backwards" flow persisted for 40,000 years. Such a change would transport warm water to the deep oceans, enhancing further warming. The major biotic turnover among benthic foraminifera has been cited as evidence of a significant change in deep water circulation.
Acidification
Ocean acidification occurred during the PETM, causing the calcite compensation depth to shoal. The lysocline marks the depth at which carbonate starts to dissolve (above the lysocline, carbonate is oversaturated): today, this is at about 4 km, comparable to the median depth of the oceans. This depth depends on (among other things) temperature and the amount of CO2 dissolved in the ocean. Adding CO2 initially raises the lysocline, resulting in the dissolution of deep water carbonates. This deep-water acidification can be observed in ocean cores, which show (where bioturbation has not destroyed the signal) an abrupt change from grey carbonate ooze to red clays (followed by a gradual grading back to grey). It is far more pronounced in North Atlantic cores than elsewhere, suggesting that acidification was more concentrated there, related to a greater rise in the level of the lysocline. Corrosive waters may have then spilled over into other regions of the world ocean from the North Atlantic. Model simulations show acidic water accumulation in the deep North Atlantic at the onset of the event. Acidification of deep waters, and its later spreading from the North Atlantic, can explain spatial variations in carbonate dissolution. In parts of the southeast Atlantic, the lysocline rose by 2 km in just a few thousand years. Evidence from the tropical Pacific Ocean suggests a minimum lysocline shoaling of around 500 m at the time of this hyperthermal. Acidification may have increased the efficiency of transport of photic zone water into the ocean depths, thus partially acting as a negative feedback that retarded the rate of atmospheric carbon dioxide buildup. Also, diminished biocalcification inhibited the removal of alkalinity from the deep ocean, causing an overshoot of calcium carbonate deposition once net calcium carbonate production resumed, helping restore the ocean to its state before the PETM. As a consequence of coccolithophorid blooms enabled by enhanced runoff, carbonate was removed from seawater as the Earth recovered from the negative carbon isotope excursion, thus acting to ameliorate ocean acidification.
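A simple way to see why adding CO2 shoals the lysocline is through the first-order relation between the carbonate ion concentration, dissolved inorganic carbon (DIC) and total alkalinity (ALK): approximately [CO3^2-] ≈ ALK − DIC. Invading CO2 raises DIC while leaving alkalinity unchanged, so the carbonate ion concentration, and with it carbonate saturation, falls. The sketch below is illustrative only; the round numerical values are assumptions, not figures from this article, and a real calculation would use a full carbonate-system model.

```python
# First-order carbonate-chemistry sketch: added CO2 raises DIC at constant
# alkalinity, lowering [CO3^2-] ~ ALK - DIC and hence carbonate saturation,
# which shoals the lysocline and the calcite compensation depth.
ALK = 2300e-6   # total alkalinity, mol/kg (round, assumed value)
DIC = 2100e-6   # dissolved inorganic carbon, mol/kg (round, assumed value)

def carbonate_ion(alk, dic):
    """Very rough approximation: [CO3^2-] ~ ALK - DIC."""
    return alk - dic

before = carbonate_ion(ALK, DIC)
after = carbonate_ion(ALK, DIC + 100e-6)   # invade an extra 100 umol/kg of CO2
print(f"[CO3^2-] before: {before * 1e6:.0f} umol/kg, after: {after * 1e6:.0f} umol/kg")
```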
Life
Stoichiometric magnetite (Fe3O4) particles were obtained from PETM-age marine sediments. A study from 2008 found elongate prism and spearhead crystal morphologies, considered unlike any magnetite crystals previously reported and potentially of biogenic origin. These biogenic magnetite crystals show unique gigantism and are probably of aquatic origin. The study suggests that the development of thick suboxic zones with high iron bioavailability, the result of dramatic changes in weathering and sedimentation rates, drove diversification of magnetite-forming organisms, likely including eukaryotes. Biogenic magnetites in animals have a crucial role in geomagnetic field navigation.
Ocean
The PETM is accompanied by significant changes in the diversity of calcareous nannofossils and benthic and planktonic foraminifera. A mass extinction of 35–50% of benthic foraminifera (especially in deeper waters) occurred over the course of ~1,000 years, with the group suffering more during the PETM than during the dinosaur-killing K–T extinction. At the onset of the PETM, benthic foraminiferal diversity dropped by 30% in the Pacific Ocean, while at Zumaia in what is now Spain, 55% of benthic foraminifera went extinct over the course of the PETM, though this decline was not seen at all sites; Himalayan platform carbonates show no major change in assemblages of large benthic foraminifera at the onset of the PETM, with their decline coming about towards the end of the event. A decrease in diversity and migration away from the oppressively hot tropics indicate that planktonic foraminifera were adversely affected as well. The Lilliput effect is observed in shallow water foraminifera, possibly as a response to decreased surficial water density or diminished nutrient availability. Populations of planktonic foraminifera bearing photosymbionts increased. Extinction rates among calcareous nannoplankton increased, but so did origination rates. On the Kerguelen Plateau, nannoplankton productivity sharply declined at the onset of the negative excursion but was elevated in its aftermath. The nannoplankton genus Fasciculithus went extinct, most likely as a result of increased surface water oligotrophy; the genera Sphenolithus, Zygrhablithus, and Octolithus suffered badly too.
Samples from the tropical Atlantic show that, overall, dinocyst abundance diminished sharply. In contrast, thermophilic dinoflagellates bloomed, particularly Apectodinium. This acme in Apectodinium abundance is used as a biostratigraphic marker defining the PETM. The fitness of Apectodinium homomorphum stayed constant over the PETM while that of others declined.
Radiolarians grew in size over the PETM.
Colonial corals, sensitive to rising temperatures, declined during the PETM, being replaced by larger benthic foraminifera. Aragonitic corals were greatly hampered in their ability to grow by the acidification of the ocean and eutrophication in surficial waters. Overall, coral framework-building capacity was greatly diminished.
The deep-sea extinctions are difficult to explain, because many species of benthic foraminifera in the deep-sea are cosmopolitan, and can find refugia against local extinction. General hypotheses such as a temperature-related reduction in oxygen availability, or increased corrosion due to carbonate undersaturated deep waters, are insufficient as explanations. Acidification may also have played a role in the extinction of the calcifying foraminifera, and the higher temperatures would have increased metabolic rates, thus demanding a higher food supply. Such a higher food supply might not have materialized because warming and increased ocean stratification might have led to declining productivity, along with increased remineralization of organic matter in the water column before it reached the benthic foraminifera on the sea floor. The only factor global in extent was an increase in temperature. Regional extinctions in the North Atlantic can be attributed to increased deep-sea anoxia, which could be due to the slowdown of overturning ocean currents, or the release and rapid oxidation of large amounts of methane.
In shallower waters, it is undeniable that increased CO2 levels result in a decreased oceanic pH, which has a profound negative effect on corals. Experiments suggest it is also very harmful to calcifying plankton. However, the strong acids used to simulate the natural increase in acidity which would result from elevated CO2 concentrations may have given misleading results, and the most recent evidence is that coccolithophores (E. huxleyi at least) become more, not less, calcified and abundant in acidic waters. No change in the distribution of calcareous nannoplankton such as the coccolithophores can be attributed to acidification during the PETM. Nor was the abundance of calcareous nannoplankton controlled by changes in acidity, with local variations in nutrient availability and temperature playing much greater roles; diversity changes in calcareous nannoplankton in the Southern Ocean and at the Equator were most affected by temperature changes, whereas in much of the rest of the open ocean, changes in nutrient availability were their dominant drivers. Acidification did lead to an abundance of heavily calcified algae and weakly calcified forams. The calcareous nannofossil species Neochiastozygus junctus thrived; its success is attributable to enhanced surficial productivity caused by enhanced nutrient runoff. Eutrophication at the onset of the PETM precipitated a decline among K-strategist large foraminifera, though they rebounded during the post-PETM oligotrophy coevally with the demise of low-latitude corals.
A study published in May 2021 concluded that fish thrived in at least some tropical areas during the PETM, based on discovered fish fossils including Mene maculata at Ras Gharib, Egypt.
Land
Humid conditions caused modern Asian mammals to migrate northward, tracking the shifting climatic belts. Uncertainty remains regarding the timing and tempo of this migration. Terrestrial animals suffered mass mortality due to toxigenic cyanobacterial blooms triggered by the extreme heat.
The increase in mammalian abundance is intriguing. Increased global temperatures may have promoted dwarfing – which may have encouraged speciation. Major dwarfing occurred early in the PETM, with further dwarfing taking place during the middle of the hyperthermal. The dwarfing of various mammal lineages led to further dwarfing in other mammals whose reduction in body size was not directly induced by the PETM. Many major mammalian clades – including hyaenodontids, artiodactyls, perissodactyls, and primates – appeared and spread around the globe 13,000 to 22,000 years after the initiation of the PETM. It is possible that the Indian Subcontinent acted as a diversity hub from which mammalian lineages radiated into Africa and the continents of the Northern Hemisphere. Multiple Eurasian mammal orders invaded North America, but because niche space was not saturated, these had little effect on overall community structure.
The diversity of insect herbivory, as measured by the amount and diversity of damage to plants caused by insects, increased during the PETM in correlation with global warming. The ant genus Gesomyrmex radiated across Eurasia during the PETM. As with mammals, soil-dwelling invertebrates are observed to have dwarfed during the PETM.
A profound change in terrestrial vegetation across the globe is associated with the PETM. Across all regions, floras from the latest Palaeocene are highly distinct from those of the PETM and the Early Eocene. The Arctic became dominated by palms and broadleaf forests. The Gulf coast of central Texas was covered in tropical rainforests and tropical seasonal forests.
Geologic effects
Sediment deposition changed significantly at many outcrops and in many drill cores spanning this time interval. During the PETM, sediments are enriched with kaolinite from a detrital source due to denudation (driven initially by processes such as volcanism, earthquakes, and plate tectonics). Increased precipitation and enhanced erosion of older kaolinite-rich soils and sediments may have been responsible for this. Increased weathering from the enhanced runoff formed thick paleosols enriched with carbonate nodules (Microcodium-like), suggesting a semi-arid climate. Unlike during lesser, more gradual hyperthermals, glauconite authigenesis was inhibited.
The sedimentological effects of the PETM lagged behind the carbon isotope shifts. In the Tremp-Graus Basin of northern Spain, fluvial systems grew and rates of deposition of alluvial sediments increased with a lag time of around 3,800 years after the PETM.
At some marine locations (mostly deep-marine), sedimentation rates must have decreased across the PETM, presumably because of carbonate dissolution on the seafloor; at other locations (mostly shallow-marine), sedimentation rates must have increased across the PETM, presumably because of enhanced delivery of riverine material during the event.
Possible causes
Discriminating between different possible causes of the PETM is difficult. Temperatures were rising globally at a steady pace, and a mechanism must be invoked to produce an instantaneous spike, which may have been accentuated or catalyzed by positive feedbacks (or the activation of "tipping points"). The biggest aid in disentangling these factors comes from a consideration of the carbon isotope mass balance. We know the entire exogenic carbon cycle (i.e. the carbon contained within the oceans and atmosphere, which can change on short timescales) underwent a −0.2 % to −0.3 % perturbation in δ13C, and by considering the isotopic signatures of other carbon reserves, one can consider what mass of a given reserve would be necessary to produce this effect. The assumption underpinning this approach is that the mass of exogenic carbon was the same in the Paleogene as it is today – something which is very difficult to confirm.
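The mass-balance reasoning sketched above can be made explicit with a two-component mixing calculation: if a mass M of carbon with isotopic signature δ_source is added to an exogenic reservoir of mass M0 and signature δ0, the resulting excursion is Δδ = (M0·δ0 + M·δ_source)/(M0 + M) − δ0, which can be solved for M. The sketch below is illustrative only: the reservoir size, initial signature and candidate source signatures are round, assumed values, not figures asserted by this article.

```python
# Illustrative two-component carbon-isotope mass balance for the PETM CIE.
# M0_PgC, d0_permil and the source signatures are ASSUMED round values.
def mass_required(cie_permil, d_source_permil, M0_PgC=40000.0, d0_permil=0.0):
    """Mass of added carbon (Pg C) needed to shift the exogenic reservoir by
    cie_permil, from (M0*d0 + M*d_src) / (M0 + M) = d0 + cie, solved for M."""
    return M0_PgC * cie_permil / (d_source_permil - d0_permil - cie_permil)

cie = -3.0   # assumed excursion magnitude, per mil
for name, d_src in [("biogenic methane", -60.0),
                    ("organic carbon", -25.0),
                    ("volcanic CO2", -5.0)]:
    print(f"{name:16s}: ~{mass_required(cie, d_src):8.0f} Pg C")
# The lighter the source, the less mass is needed to produce the same excursion.
```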
Eruption of large kimberlite field
Although the cause of the initial warming has been attributed to a massive injection of carbon (CO2 and/or CH4) into the atmosphere, the source of the carbon has yet to be found. The emplacement of a large cluster of kimberlite pipes at ~56 Ma in the Lac de Gras region of northern Canada may have provided the carbon that triggered early warming in the form of exsolved magmatic CO2. Calculations indicate that the estimated 900–1,100 Pg of carbon required for the initial approximately 3 °C of ocean water warming associated with the Paleocene-Eocene thermal maximum could have been released during the emplacement of a large kimberlite cluster. The transfer of warm surface ocean water to intermediate depths led to thermal dissociation of seafloor methane hydrates, providing the isotopically depleted carbon that produced the carbon isotopic excursion. The coeval ages of two other kimberlite clusters in the Lac de Gras field and two other early Cenozoic hyperthermals indicate that degassing during kimberlite emplacement is a plausible source of the CO2 responsible for these sudden global warming events.
Volcanic activity
North Atlantic Igneous Province
One of the leading candidates for the cause of the observed carbon cycle disturbances and global warming is volcanic activity associated with the North Atlantic Igneous Province (NAIP), which is believed to have released more than 10,000 gigatons of carbon during the PETM based on the relatively isotopically heavy values of the initial carbon addition. Mercury anomalies during the PETM point to massive volcanism during the event. On top of that, increases in ∆199Hg show intense volcanism was concurrent with the beginning of the PETM. Osmium isotopic anomalies in Arctic Ocean sediments dating to the PETM have been interpreted as evidence of a volcanic cause of this hyperthermal.
Intrusions of hot magma into carbon-rich sediments may have triggered the degassing of isotopically light methane in sufficient volumes to cause global warming and the observed isotope anomaly. This hypothesis is documented by the presence of extensive intrusive sill complexes and thousands of kilometer-sized hydrothermal vent complexes in sedimentary basins on the mid-Norwegian margin and west of Shetland. This hydrothermal venting occurred at shallow depths, enhancing its ability to vent gases into the atmosphere and influence the global climate. Volcanic eruptions of a large magnitude can impact global climate, reducing the amount of solar radiation reaching the Earth's surface, lowering temperatures in the troposphere, and changing atmospheric circulation patterns. Large-scale volcanic activity may last only a few days, but the massive outpouring of gases and ash can influence climate patterns for years. Sulfuric gases convert to sulfate aerosols, sub-micron droplets containing about 75 percent sulfuric acid. Following eruptions, these aerosol particles can linger as long as three to four years in the stratosphere. Furthermore, phases of volcanic activity could have triggered the release of methane clathrates and other potential feedback loops. NAIP volcanism influenced the climatic changes of the time not only through the addition of greenhouse gases but also by changing the bathymetry of the North Atlantic. The connection between the North Sea and the North Atlantic through the Faroe-Shetland Basin was severely restricted, as was its connection to it by way of the English Channel.
Later phases of NAIP volcanic activity may have caused the other hyperthermal events of the Early Eocene as well, such as ETM2.
Other volcanic activity
It has also been suggested that volcanic activity around the Caribbean may have disrupted the circulation of oceanic currents, amplifying the magnitude of climate change.
Orbital forcing
The presence of later (smaller) warming events of a global scale, such as the Elmo horizon (aka ETM2), has led to the hypothesis that the events repeat on a regular basis, driven by maxima in the 400,000 and 100,000 year eccentricity cycles in the Earth's orbit. Cores from Howard's Tract, Maryland indicate the PETM occurred as a result of an extreme in axial precession during an orbital eccentricity maximum. The current warming period is expected to last another 50,000 years due to a minimum in the eccentricity of the Earth's orbit. Orbital increase in insolation (and thus temperature) would force the system over a threshold and unleash positive feedbacks. The orbital forcing hypothesis has been challenged by a study finding the PETM to have coincided with a minimum in the ~400 kyr eccentricity cycle, inconsistent with a proposed orbital trigger for the hyperthermal.
Comet impact
One theory holds that a 12C-rich comet struck the earth and initiated the warming event. A cometary impact coincident with the P/E boundary can also help explain some enigmatic features associated with this event, such as the iridium anomaly at Zumaia, the abrupt appearance of a localized kaolinitic clay layer with abundant magnetic nanoparticles, and especially the nearly simultaneous onset of the carbon isotope excursion and the thermal maximum.
A key feature and testable prediction of a comet impact is that it should produce virtually instantaneous environmental effects in the atmosphere and surface ocean with later repercussions in the deeper ocean. Even allowing for feedback processes, this would require at least 100 gigatons of extraterrestrial carbon. Such a catastrophic impact should have left its mark on the globe. A clay layer of 5-20m thickness on the coastal shelf of New Jersey contained unusual amounts of magnetite, but it was found to have formed 9-18 kyr too late for these magnetic particles to have been a result of a comet's impact, and the particles had a crystal structure which was a signature of magnetotactic bacteria rather than an extraterrestrial origin. However, recent analyses have shown that isolated particles of non-biogenic origin make up the majority of the magnetic particles in the clay sample.
A 2016 report in Science describes the discovery of impact ejecta from three marine P-E boundary sections from the Atlantic margin of the eastern U.S., indicating that an extraterrestrial impact occurred during the carbon isotope excursion at the P-E boundary. The silicate glass spherules found were identified as microtektites and microkrystites.
Burning of peat
The combustion of prodigious quantities of peat was once postulated, because there was probably a greater mass of carbon stored as living terrestrial biomass during the Paleocene than there is today, plants having grown more vigorously during the period of the PETM. This theory was refuted because, in order to produce the observed excursion, over 90 percent of the Earth's biomass would have to have been combusted. However, the Paleocene is also recognized as a time of significant peat accumulation worldwide. A comprehensive search failed to find evidence for the combustion of fossil organic matter, in the form of soot or similar particulate carbon.
Enhanced respiration
Respiration rates of organic matter increase when temperatures rise. One feedback mechanism proposed to explain the rapid rise in carbon dioxide levels is a sudden, speedy rise in terrestrial respiration rates concordant with global temperature rise initiated by any of the other causes of warming. Mathematical modelling supports increased organic matter oxidation as a viable explanation for observed isotopic excursions in carbon during the PETM's onset.
Terrestrial methane release
Release of methane from wetlands was a contributor to the PETM warming. Evidence for this comes from a decrease in hopanoids from mire sediments, likely reflecting increased wetland methanogenesis deeper within the mires.
Methane clathrate release
Methane hydrate dissolution has been invoked as a highly plausible causal mechanism for the carbon isotope excursion and warming observed at the PETM. The most obvious feedback mechanism that could amplify the initial perturbation is that of methane clathrates. Under certain temperature and pressure conditions, methane – which is produced continually by microbes decomposing organic matter in sea bottom sediments – is stable in a complex with water, which forms ice-like cages trapping the methane in solid form. As temperature rises, the pressure required to keep this clathrate configuration stable increases, so shallow clathrates dissociate, releasing methane gas to make its way into the atmosphere. Since biogenic clathrates have a δ13C signature of −60 ‰ (inorganic clathrates are still a rather large −40 ‰), relatively small masses can produce large excursions. Further, methane is a potent greenhouse gas when it is released into the atmosphere, so it causes warming, and as the ocean transports this warmth to the bottom sediments, it destabilizes more clathrates.
In order for the clathrate hypothesis to be applicable to the PETM, the oceans must show signs of having been warmer slightly before the carbon isotope excursion, because it would take some time for the methane to become mixed into the system and for the 13C-depleted carbon to be returned to the deep ocean sedimentary record. Up until the 2000s, the evidence suggested that the two peaks were in fact simultaneous, weakening the support for the methane theory. In 2002, a short gap between the initial warming and the excursion was detected. In 2007, chemical markers of surface temperature (TEX86) also indicated that warming occurred around 3,000 years before the carbon isotope excursion, although this did not seem to hold true for all cores. However, research in 2005 found no evidence of this time gap in the deeper (non-surface) waters. Moreover, the small apparent change in TEX86 that precedes the anomaly can easily (and more plausibly) be ascribed to local variability (especially on the Atlantic coastal plain, e.g. Sluijs, et al., 2007), as the TEX86 paleo-thermometer is prone to significant biological effects. The δ18O of benthic or planktonic forams does not show any pre-warming in any of these localities, and in an ice-free world, it is generally a much more reliable indicator of past ocean temperatures. Analysis of these records reveals another interesting fact: planktonic (floating) forams record the shift to lighter isotope values earlier than benthic (bottom dwelling) forams. The lighter (lower δ13C) methanogenic carbon can only be incorporated into foraminifer shells after it has been oxidised. A gradual release of the gas would allow it to be oxidised in the deep ocean, which would make benthic foraminifera show lighter values earlier. The fact that the planktonic foraminifera are the first to show the signal suggests that the methane was released so rapidly that its oxidation used up all the oxygen at depth in the water column, allowing some methane to reach the atmosphere unoxidised, where atmospheric oxygen would react with it. This observation also allows us to constrain the duration of methane release to under around 10,000 years.
However, there are several major problems with the methane hydrate dissociation hypothesis. The most parsimonious interpretation of surface-water foraminifera showing the excursion before their benthic counterparts (as in the Thomas et al. paper) is that the perturbation occurred from the top down, not the bottom up. If the anomalous carbon (in whatever form: CH4 or CO2) entered the atmospheric carbon reservoir first, and then diffused into the surface ocean waters, which mix with the deeper ocean waters over much longer time-scales, we would expect to observe the planktonics shifting toward lighter δ13C values before the benthics.
An additional critique of the methane clathrate release hypothesis is that the warming effects of large-scale methane release would not be sustainable for more than a millennium. Thus, exponents of this line of criticism suggest that methane clathrate release could not have been the main driver of the PETM, which lasted for 50,000 to 200,000 years.
There has been some debate about whether there was a large enough amount of methane hydrate to be a major carbon source; a 2011 paper proposed that this was the case. The present-day global methane hydrate reserve was once considered to be between 2,000 and 10,000 Gt C (billions of tons of carbon), but is now estimated at between 1,500 and 2,000 Gt C. However, because the global ocean bottom temperatures were ~6 °C higher than today, which implies a much smaller volume of sediment hosting gas hydrate than today, the global amount of hydrate before the PETM has been thought to be much less than present-day estimates. One study, however, suggests that because seawater oxygen content was lower, sufficient methane clathrate deposits could have been present to make them a viable mechanism for explaining the isotopic changes. In a 2006 study, scientists regarded the source of carbon for the PETM to be a mystery. A 2011 study, using numerical simulations, suggests that enhanced organic carbon sedimentation and methanogenesis could have compensated for the smaller volume of hydrate stability. A 2016 study based on reconstructions of atmospheric CO2 content during the PETM's carbon isotope excursion (CIE), using triple oxygen isotope analysis, suggests a massive release of seabed methane into the atmosphere as the driver of climatic changes. The authors also state that a massive release of methane hydrates through thermal dissociation of methane hydrate deposits has been the most convincing hypothesis for explaining the CIE ever since it was first identified. In 2019, a study suggested that there was a global warming of around 2 degrees several millennia before the PETM, and that this warming had eventually destabilized methane hydrates and caused the increased carbon emission during the PETM, as evidenced by the large increase in barium ocean concentrations (since PETM-era hydrate deposits would also have been rich in barium, and would have released it upon their meltdown). In 2022, a foraminiferal records study reinforced this conclusion, suggesting that the release of CO2 before the PETM was comparable to the current anthropogenic emissions in its rate and scope, to the point that there was enough time for a recovery to background levels of warming and ocean acidification in the centuries to millennia between the so-called pre-onset excursion (POE) and the main event (carbon isotope excursion, or CIE). A 2021 paper further indicated that while the PETM began with a significant intensification of volcanic activity, and while lower-intensity volcanic activity sustained elevated carbon dioxide levels, "at least one other carbon reservoir released significant greenhouse gases in response to initial warming".
It was estimated in 2001 that it would take around 2,300 years for an increased temperature to diffuse warmth into the sea bed to a depth sufficient to cause a release of clathrates, although the exact time-frame is highly dependent on a number of poorly constrained assumptions. Ocean warming due to flooding and pressure changes due to a sea-level drop may have caused clathrates to become unstable and release methane. This can take place over as short a period as a few thousand years. The reverse process, that of fixing methane in clathrates, occurs over a longer scale of tens of thousands of years.
Ocean circulation
The large scale patterns of ocean circulation are important when considering how heat was transported through the oceans. Our understanding of these patterns is still in a preliminary stage. Models show that there are possible mechanisms to quickly transport heat to the shallow, clathrate-containing ocean shelves, given the right bathymetric profile, but the models cannot yet match the distribution of data we observe. "Warming accompanying a south-to-north switch in deepwater formation would produce sufficient warming to destabilize seafloor gas hydrates over most of the world ocean to a water depth of at least 1900 m." This destabilization could have resulted in the release of more than 2000 gigatons of methane gas from the clathrate zone of the ocean floor. The timing of changes in ocean circulation with respect to the shift in carbon isotope ratios has been argued to support the proposition that warmer deep water caused methane hydrate release. However, a different study found no evidence of a change in deep water formation, instead suggesting that deepened subtropical subduction rather than subtropical deep water formation occurred during the PETM.
Arctic freshwater input into the North Pacific could serve as a catalyst for methane hydrate destabilization, an event suggested as a precursor to the onset of the PETM.
Recovery
Climate proxies, such as ocean sediment depositional rates, indicate a duration of ~83 ka, with ~33 ka in the early rapid phase and ~50 ka in a subsequent gradual phase.
The most likely method of recovery involves an increase in biological productivity, transporting carbon to the deep ocean. This would be assisted by higher global temperatures and CO2 levels, as well as an increased nutrient supply (which would result from higher continental weathering due to higher temperatures and rainfall; volcanoes may have provided further nutrients). Evidence for higher biological productivity comes in the form of bio-concentrated barium. However, this proxy may instead reflect the addition of barium dissolved in methane. Diversifications suggest that productivity increased in near-shore environments, which would have been warm and fertilized by run-off, outweighing the reduction in productivity in the deep oceans. Large deposits of the aquatic fern Azolla on the Arctic Ocean floor in the middle Eocene (the "Azolla Event") may have been a contributory factor in the early stages of the end of the PETM by sequestering carbon in buried decayed Azolla. Another pulse of NAIP volcanic activity may have also played a role in terminating the hyperthermal via a volcanic winter.
Comparison with today's climate change
Since at least 1997, the PETM has been investigated in geoscience as an analogue to understand the effects of global warming and of massive carbon inputs to the ocean and atmosphere, including ocean acidification. A main difference is that during the PETM, the planet was ice-free, as the Drake Passage had not yet opened and the Central American Seaway had not yet closed. Although the PETM is now commonly held to be a "case study" for global warming and massive carbon emission, the cause, details, and overall significance of the event remain uncertain.
Rate of carbon addition
Carbon emissions during the PETM were more gradual relative to present-day anthropogenic emissions. Model simulations of peak carbon addition to the ocean–atmosphere system during the PETM give a probable range of 0.3–1.7 petagrams of carbon per year (Pg C/yr), which is much slower than the currently observed rate of carbon emissions. One petagram of carbon is equivalent to a gigaton of carbon (GtC); the current rate of carbon injection into the atmosphere is over 10 GtC/yr, a rate much greater than the carbon injection rate that occurred during the PETM. It has been suggested that today's methane emission regime from the ocean floor is potentially similar to that during the PETM. Because the modern rate of carbon release exceeds the PETM's, it is speculated that a PETM-like scenario is the best-case consequence of anthropogenic global warming, with a mass extinction of a magnitude similar to the Cretaceous-Palaeogene extinction event being a worst-case scenario.
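For a rough sense of the contrast in rates described above, the quoted figures can be compared directly. The onset duration used for the cumulative total below is an assumed illustrative value, not one asserted here.

```python
# Rough comparison of carbon-release rates (figures quoted in the text above).
petm_rate_low, petm_rate_high = 0.3, 1.7   # Pg C/yr, modelled PETM peak range
modern_rate = 10.0                          # Pg C/yr, present-day ("over 10 GtC/yr")

print(f"Modern emissions are roughly {modern_rate / petm_rate_high:.0f}x to "
      f"{modern_rate / petm_rate_low:.0f}x the peak PETM rate.")

# Cumulative release over an ASSUMED 5,000-year onset, for scale only:
onset_years = 5000
print(f"PETM release at peak rates over {onset_years} yr: "
      f"{petm_rate_low * onset_years:.0f}-{petm_rate_high * onset_years:.0f} Pg C")
```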
Similarity of temperatures
Professor of Earth and planetary sciences James Zachos notes that IPCC projections for 2300 in the 'business-as-usual' scenario could "potentially bring global temperature to a level the planet has not seen in 50 million years" – during the early Eocene. Some have described the PETM as arguably the best ancient analog of modern climate change. Scientists have investigated effects of climate change on chemistry of the oceans by exploring oceanic changes during the PETM.
Tipping points
A study found that the PETM shows that substantial climate-shifting tipping points in the Earth system exist, which "can trigger release of additional carbon reservoirs and drive Earth's climate into a hotter state".
Climate sensitivity
Whether climate sensitivity was lower or higher during the PETM than today remains under debate. A 2022 study found that the Eurasian Epicontinental Sea acted as a major carbon sink during the PETM due to its high biological productivity and helped to slow and mitigate the warming, and that the existence of many large epicontinental seas at that time made the Earth's climate less sensitive to forcing by greenhouse gases relative to today, when far fewer epicontinental seas exist. Other research, however, suggests that climate sensitivity was higher during the PETM than today, meaning that sensitivity to greenhouse gas release increases with their concentration in the atmosphere.
| Physical sciences | Events | Earth science |
387457 | https://en.wikipedia.org/wiki/Thermohaline%20circulation | Thermohaline circulation | Thermohaline circulation (THC) is a part of the large-scale ocean circulation that is driven by global density gradients created by surface heat and freshwater fluxes. The adjective thermohaline derives from thermo-, referring to temperature, and -haline, referring to salt content, factors which together determine the density of sea water. Wind-driven surface currents (such as the Gulf Stream) travel polewards from the equatorial Atlantic Ocean, cooling en route, and eventually sinking at high latitudes (forming North Atlantic Deep Water). This dense water then flows into the ocean basins. While the bulk of it upwells in the Southern Ocean, the oldest waters (with a transit time of about 1000 years) upwell in the North Pacific. Extensive mixing therefore takes place between the ocean basins, reducing differences between them and making the Earth's oceans a global system. The water in these circuits transports both energy (in the form of heat) and mass (dissolved solids and gases) around the globe. As such, the state of the circulation has a large impact on the climate of the Earth.
The thermohaline circulation is sometimes called the ocean conveyor belt, the great ocean conveyor, or the global conveyor belt, coined by climate scientist Wallace Smith Broecker. It is also referred to as the meridional overturning circulation, or MOC. This name is used because not every circulation pattern caused by temperature and salinity gradients is necessarily part of a single global circulation. Further, it is difficult to separate the parts of the circulation driven by temperature and salinity alone from those driven by other factors, such as the wind and tidal forces.
This global circulation has two major limbs - Atlantic meridional overturning circulation (AMOC), centered in the north Atlantic Ocean, and Southern Ocean overturning circulation or Southern Ocean meridional circulation (SMOC), around Antarctica. Because 90% of the human population lives in the Northern Hemisphere, the AMOC has been far better studied, but both are very important for the global climate. Both of them also appear to be slowing down due to climate change, as the melting of the ice sheets dilutes salty flows such as the Antarctic bottom water. Either one could outright collapse to a much weaker state, which would be an example of tipping points in the climate system. The hemisphere which experiences the collapse of its circulation would experience less precipitation and become drier, while the other hemisphere would become wetter. Marine ecosystems are also likely to receive fewer nutrients and experience greater ocean deoxygenation. In the Northern Hemisphere, AMOC's collapse would also substantially lower the temperatures in many European countries, while the east coast of North America would experience accelerated sea level rise. The collapse of either circulation is generally believed to be more than a century away and may only occur under high warming, but there is a lot of uncertainty about these projections.
History of research
It has long been known that wind can drive ocean currents, but only at the surface. In the 19th century, some oceanographers suggested that the convection of heat could drive deeper currents. In 1908, Johan Sandström performed a series of experiments at the Bornö Marine Research Station which proved that currents driven by thermal energy transfer can exist, but require that "heating occurs at a greater depth than cooling". Normally, the opposite occurs, because ocean water is heated from above by the Sun and becomes less dense, so the surface layer floats above the cooler, denser layers, resulting in ocean stratification. However, wind and tides cause mixing between these water layers, with diapycnal mixing caused by tidal currents being one example. This mixing is what enables the convection between ocean layers, and thus, deep water currents.
In the 1920s, Sandström's framework was expanded by accounting for the role of salinity in ocean layer formation. Salinity is important because, like temperature, it affects water density. Water becomes less dense as its temperature increases and the distance between its molecules expands, but more dense as the salinity increases, since there is a larger mass of salts dissolved within that water. Further, while fresh water is at its most dense at 4 °C, seawater only gets denser as it cools, up until it reaches the freezing point. That freezing point is also lower than that of fresh water due to salinity, and can be below −2 °C, depending on salinity and pressure.
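The competing effects of temperature and salinity on density described above are often captured, to first order, by a linear equation of state. The sketch below is a minimal illustration with round, assumed coefficients roughly representative of seawater; it is not a full equation of state such as TEOS-10.

```python
# Linear equation of state for seawater density (assumed, illustrative coefficients):
#   rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0 = 1027.0         # kg/m^3, reference density (assumed)
T0, S0 = 10.0, 35.0   # reference temperature (degC) and salinity (g/kg)
ALPHA = 1.7e-4        # thermal expansion coefficient, 1/degC (assumed order of magnitude)
BETA = 7.6e-4         # haline contraction coefficient, kg/g (assumed)

def density(T, S):
    """First-order seawater density: warmer -> lighter, saltier -> denser."""
    return RHO0 * (1.0 - ALPHA * (T - T0) + BETA * (S - S0))

# Cold, salty subpolar surface water versus warm tropical surface water:
print(density(T=2.0, S=35.0))    # denser water, prone to sinking
print(density(T=25.0, S=34.5))   # lighter water, stays at the surface
```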
Structure
These density differences caused by temperature and salinity ultimately separate ocean water into distinct water masses, such as the North Atlantic Deep Water (NADW) and Antarctic Bottom Water (AABW). These two waters are the main drivers of the circulation, a picture established in 1960 by Henry Stommel and Arnold B. Arons. They have chemical, temperature and isotopic ratio signatures (such as 231Pa / 230Th ratios) which can be traced, their flow rate calculated, and their age determined. NADW is formed because the North Atlantic is a rare place in the ocean where precipitation, which adds fresh water to the ocean and so reduces its salinity, is outweighed by evaporation, in part due to high windiness. When water evaporates, it leaves salt behind, and so the surface waters of the North Atlantic are particularly salty. The North Atlantic is also an already cool region, and evaporative cooling reduces the water temperature even further. Thus, this water sinks downward in the Norwegian Sea, fills the Arctic Ocean Basin and spills southwards through the Greenland–Scotland Ridge – crevasses in the submarine sills that connect Greenland, Iceland and Great Britain. It cannot flow towards the Pacific Ocean due to the narrow shallows of the Bering Strait, but it does slowly flow into the deep abyssal plains of the south Atlantic.
In the Southern Ocean, strong katabatic winds blowing from the Antarctic continent onto the ice shelves will blow the newly formed sea ice away, opening polynyas in locations such as the Weddell and Ross Seas, off the Adélie Coast and by Cape Darnley. The ocean, no longer protected by sea ice, undergoes strong cooling (see polynya). Meanwhile, sea ice starts reforming, so the surface waters also get saltier, hence very dense. In fact, the formation of sea ice contributes to an increase in surface seawater salinity; saltier brine is left behind as the sea ice forms around it (pure water preferentially being frozen). Increasing salinity lowers the freezing point of seawater, so cold liquid brine is formed in inclusions within a honeycomb of ice. The brine progressively melts the ice just beneath it, eventually dripping out of the ice matrix and sinking. This process is known as brine rejection. The resulting Antarctic bottom water sinks and flows north and east. It is denser than the NADW, and so flows beneath it. AABW formed in the Weddell Sea will mainly fill the Atlantic and Indian Basins, whereas the AABW formed in the Ross Sea will flow towards the Pacific Ocean. In the Indian Ocean, a vertical exchange between a lower layer of cold and salty water from the Atlantic and the warmer and fresher upper ocean water from the tropical Pacific occurs, in what is known as overturning. In the Pacific Ocean, the rest of the cold and salty water from the Atlantic undergoes haline forcing, and becomes warmer and fresher more quickly.
The out-flow of cold and salty water at depth makes the sea level of the Atlantic slightly lower than that of the Pacific, and the salinity of the Atlantic higher than that of the Pacific. This generates a large but slow flow of warmer and fresher upper ocean water from the tropical Pacific to the Indian Ocean through the Indonesian Archipelago to replace the cold and salty Antarctic Bottom Water. This is also known as 'haline forcing' (net high latitude freshwater gain and low latitude evaporation). This warmer, fresher water from the Pacific flows up through the South Atlantic to Greenland, where it undergoes evaporative cooling and sinks to the ocean floor, providing a continuous thermohaline circulation.
Upwelling
As the deep waters sink into the ocean basins, they displace the older deep-water masses, which gradually become less dense due to continued ocean mixing. Thus, some water is rising, in what is known as upwelling. Its speeds are very slow even compared to the movement of the bottom water masses. It is therefore difficult to measure where upwelling occurs using current speeds, given all the other wind-driven processes going on in the surface ocean. Deep waters have their own chemical signature, formed from the breakdown of particulate matter falling into them over the course of their long journey at depth. A number of scientists have tried to use these tracers to infer where the upwelling occurs. Wallace Broecker, using box models, has asserted that the bulk of deep upwelling occurs in the North Pacific, using as evidence the high values of silicon found in these waters. Other investigators have not found such clear evidence.
Computer models of ocean circulation increasingly place most of the deep upwelling in the Southern Ocean, associated with the strong winds in the open latitudes between South America and Antarctica.
Direct estimates of the strength of the thermohaline circulation have also been made at 26.5°N in the North Atlantic, by the UK-US RAPID programme. It combines direct estimates of ocean transport using current meters and subsea cable measurements with estimates of the geostrophic current from temperature and salinity measurements to provide continuous, full-depth, basin-wide estimates of the meridional overturning circulation. However, it has only been operating since 2004, which is too short when the timescale of the circulation is measured in centuries.
Effects on global climate
The thermohaline circulation plays an important role in supplying heat to the polar regions, and thus in regulating the amount of sea ice in these regions, although poleward heat transport outside the tropics is considerably larger in the atmosphere than in the ocean. Changes in the thermohaline circulation are thought to have significant impacts on the Earth's radiation budget.
Large influxes of low-density meltwater from Lake Agassiz and deglaciation in North America are thought to have led to a shifting of deep water formation and subsidence in the extreme North Atlantic and caused the climate period in Europe known as the Younger Dryas.
Slowdown or collapse of AMOC
Slowdown or collapse of SMOC
| Physical sciences | Oceanography | null |
387512 | https://en.wikipedia.org/wiki/Dye%20laser | Dye laser | A dye laser is a laser that uses an organic dye as the lasing medium, usually as a liquid solution. Compared to gases and most solid state lasing media, a dye can usually be used for a much wider range of wavelengths, often spanning 50 to 100 nanometers or more. The wide bandwidth makes them particularly suitable for tunable lasers and pulsed lasers. The dye rhodamine 6G, for example, can be tuned from 635 nm (orangish-red) to 560 nm (greenish-yellow), and produce pulses as short as 16 femtoseconds. Moreover, the dye can be replaced by another type in order to generate an even broader range of wavelengths with the same laser, from the near-infrared to the near-ultraviolet, although this usually requires replacing other optical components in the laser as well, such as dielectric mirrors or pump lasers.
Dye lasers were independently discovered by P. P. Sorokin and F. P. Schäfer (and colleagues) in 1966.
In addition to the usual liquid state, dye lasers are also available as solid state dye lasers (SSDL). These SSDL lasers use dye-doped organic matrices as gain medium.
Construction
A dye laser uses a gain medium consisting of an organic dye, which is a carbon-based, soluble stain that is often fluorescent, such as the dye in a highlighter pen. The dye is mixed with a compatible solvent, allowing the molecules to diffuse evenly throughout the liquid. The dye solution may be circulated through a dye cell, or streamed through open air using a dye jet. A high energy source of light is needed to 'pump' the liquid beyond its lasing threshold. A fast discharge flashtube or an external laser is usually used for this purpose. Mirrors are also needed to oscillate the light produced by the dye's fluorescence, which is amplified with each pass through the liquid. The output mirror is normally around 80% reflective, while all other mirrors are usually more than 99.9% reflective. The dye solution is usually circulated at high speeds, to help avoid triplet absorption and to decrease degradation of the dye. A prism or diffraction grating is usually mounted in the beam path, to allow tuning of the beam.
Because the liquid medium of a dye laser can fit any shape, there are a multitude of different configurations that can be used. A Fabry–Pérot laser cavity is usually used for flashtube pumped lasers, which consists of two mirrors, which may be flat or curved, mounted parallel to each other with the laser medium in between. The dye cell is often a thin tube approximately equal in length to the flashtube, with both windows and an inlet/outlet for the liquid on each end. The dye cell is usually side-pumped, with one or more flashtubes running parallel to the dye cell in a reflector cavity. The reflector cavity is often water cooled, to prevent thermal shock in the dye caused by the large amounts of near-infrared radiation which the flashtube produces. Axial pumped lasers have a hollow, annular-shaped flashtube that surrounds the dye cell, which has lower inductance for a shorter flash, and improved transfer efficiency. Coaxial pumped lasers have an annular dye cell that surrounds the flashtube, for even better transfer efficiency, but have a lower gain due to diffraction losses. Flash pumped lasers can be used only for pulsed output applications.
A ring laser design is often chosen for continuous operation, although a Fabry–Pérot design is sometimes used. In a ring laser, the mirrors of the laser are positioned to allow the beam to travel in a circular path. The dye cell, or cuvette, is usually very small. Sometimes a dye jet is used to help avoid reflection losses. The dye is usually pumped with an external laser, such as a nitrogen, excimer, or frequency doubled Nd:YAG laser. The liquid is circulated at very high speeds, to prevent triplet absorption from cutting off the beam. Unlike Fabry–Pérot cavities, a ring laser does not generate standing waves which cause spatial hole burning, a phenomenon where energy becomes trapped in unused portions of the medium between the crests of the wave. This leads to a better gain from the lasing medium.
Operation
The dyes used in these lasers contain rather large organic molecules which fluoresce. Most dyes have a very short time between the absorption and emission of light, referred to as the fluorescence lifetime, which is often on the order of a few nanoseconds. (In comparison, most solid-state lasers have a fluorescence lifetime ranging from hundreds of microseconds to a few milliseconds.) Under standard laser-pumping conditions, the molecules emit their energy before a population inversion can properly build up, so dyes require rather specialized means of pumping. Liquid dyes have an extremely high lasing threshold. In addition, the large molecules are subject to complex excited state transitions during which the spin can be "flipped", quickly changing from the useful, fast-emitting "singlet" state to the slower "triplet" state.
The incoming light excites the dye molecules into the state of being ready to emit stimulated radiation; the singlet state. In this state, the molecules emit light via fluorescence, and the dye is transparent to the lasing wavelength. Within a microsecond or less, the molecules will change to their triplet state. In the triplet state, light is emitted via phosphorescence, and the molecules absorb the lasing wavelength, making the dye partially opaque. Flashlamp-pumped lasers need a flash with an extremely short duration, to deliver the large amounts of energy necessary to bring the dye past threshold before triplet absorption overcomes singlet emission. Dye lasers with an external pump-laser can direct enough energy of the proper wavelength into the dye with a relatively small amount of input energy, but the dye must be circulated at high speeds to keep the triplet molecules out of the beam path. Due to their high absorption, the pumping energy may often be concentrated into a rather small volume of liquid.
Since organic dyes tend to decompose under the influence of light, the dye solution is normally circulated from a large reservoir. The dye solution can be flowing through a cuvette, i.e., a glass container, or be as a dye jet, i.e., as a sheet-like stream in open air from a specially-shaped nozzle. With a dye jet, one avoids reflection losses from the glass surfaces and contamination of the walls of the cuvette. These advantages come at the cost of a more-complicated alignment.
Liquid dyes have very high gain as laser media. The beam needs to make only a few passes through the liquid to reach full design power, hence the relatively high transmittance of the output coupler. The high gain also leads to high losses, because reflections from the dye-cell walls or flashlamp reflector cause parasitic oscillations, dramatically reducing the amount of energy available to the beam. Pump cavities are often coated, anodized, or otherwise made of a material that will not reflect at the lasing wavelength while reflecting at the pump wavelength.
A benefit of organic dyes is their high fluorescence efficiency. The greatest losses in many lasers and other fluorescence devices are not from the transfer efficiency (absorbed versus reflected/transmitted energy) or quantum yield (emitted number of photons per absorbed number), but from the losses when high-energy photons are absorbed and reemitted as photons of longer wavelengths. Because the energy of a photon is determined by its wavelength, the emitted photons will be of lower energy; a phenomenon called the Stokes shift. The absorption centers of many dyes are very close to the emission centers. Sometimes the two are close enough that the absorption profile slightly overlaps the emission profile. As a result, most dyes exhibit very small Stokes shifts and consequently allow for lower energy losses than many other laser types due to this phenomenon. The wide absorption profiles make them particularly suited to broadband pumping, such as from a flashtube. It also allows a wide range of pump lasers to be used for any certain dye and, conversely, many different dyes can be used with a single pump laser.
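As a rough illustration of the Stokes-shift loss described above, the fractional energy lost per photon is simply one minus the ratio of the emission photon energy to the pump photon energy. The wavelengths in this sketch are illustrative choices (a frequency-doubled Nd:YAG pump and a rhodamine-like emission line), not figures taken from the text.

```python
# Fractional pump energy lost to the Stokes shift, per photon.
h = 6.626e-34  # Planck constant, J*s
c = 3.0e8      # speed of light, m/s

def photon_energy(wavelength_nm):
    """Photon energy in joules for a wavelength given in nanometres."""
    return h * c / (wavelength_nm * 1e-9)

pump_nm, emission_nm = 532.0, 590.0  # illustrative pump and emission wavelengths
loss = 1.0 - photon_energy(emission_nm) / photon_energy(pump_nm)
print(f"Energy lost to the Stokes shift: {loss:.1%}")  # roughly 10% here
```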
CW dye lasers
Continuous-wave (CW) dye lasers often use a dye jet. CW dye-lasers can have a linear or a ring cavity, and provided the foundation for the development of femtosecond lasers.
Narrow linewidth dye lasers
Dye lasers' emission is inherently broad. However, tunable narrow linewidth emission has been central to the success of the dye laser. In order to produce narrow bandwidth tuning these lasers use many types of cavities and resonators which include gratings, prisms, multiple-prism grating arrangements, and etalons.
The first narrow linewidth dye laser, introduced by Hänsch, used a Galilean telescope as beam expander to illuminate the diffraction grating. Next were the grazing-incidence grating designs and the multiple-prism grating configurations. The various resonators and oscillator designs developed for dye lasers have been successfully adapted to other laser types such as the diode laser. The physics of narrow-linewidth multiple-prism grating lasers was explained by Duarte and Piper.
Chemicals used
Some of the laser dyes are rhodamine (orange, 540–680 nm), fluorescein (green, 530–560 nm), coumarin (blue 490–620 nm), stilbene (violet 410–480 nm), umbelliferone (blue, 450–470 nm), tetracene, malachite green, and others. While some dyes are actually used in food coloring, most dyes are very toxic, and often carcinogenic. Many dyes, such as rhodamine 6G (in its chloride form), can be very corrosive to all metals except stainless steel. Although dyes have very broad fluorescence spectra, the dye's absorption and emission will tend to center on a certain wavelength and taper off to each side, forming a tunability curve, with the absorption center being of a shorter wavelength than the emission center. Rhodamine 6G, for example, has its highest output around 590 nm, and the conversion efficiency lowers as the laser is tuned to either side of this wavelength.
A wide variety of solvents can be used, although most dyes will dissolve better in some solvents than in others. Some of the solvents used are water, glycol, ethanol, methanol, hexane, cyclohexane, cyclodextrin, and many others. Solvents can be highly toxic, and can sometimes be absorbed directly through the skin, or through inhaled vapors. Many solvents are also extremely flammable. The various solvents can also have an effect on the specific color of the dye solution, the lifetime of the singlet state, either enhancing or quenching the triplet state, and, thus, on the lasing bandwidth and power obtainable with a particular laser-pumping source.
Adamantane is added to some dyes to prolong their life.
Cycloheptatriene and cyclooctatetraene (COT) can be added as triplet quenchers for rhodamine G, increasing the laser output power. An output power of 1.4 kilowatts at 585 nm was achieved using rhodamine 6G with COT in a methanol-water solution.
Excitation lasers
Flashlamps and several types of lasers can be used to optically pump dye lasers. A partial list of excitation lasers includes:
Copper vapor lasers
Diode lasers
Excimer lasers
Nd:YAG lasers (mainly second and third harmonics)
Nitrogen lasers
Ruby lasers
Argon ion lasers in the CW regime
Krypton ion lasers in the CW regime
Ultra-short optical pulses
R. L. Fork, B. I. Greene, and C. V. Shank demonstrated, in 1981, the generation of ultra-short laser pulses using a ring-dye laser (or a dye laser exploiting colliding-pulse mode-locking). This kind of laser is capable of generating laser pulses of ~0.1 ps duration.
The introduction of grating techniques and intra-cavity prismatic pulse compressors eventually resulted in the routine emission of femtosecond dye laser pulses.
Applications
Dye lasers are very versatile. In addition to their recognized wavelength agility these lasers can offer very large pulsed energies or very high average powers. Flashlamp-pumped dye lasers have been shown to yield hundreds of Joules per pulse and copper-laser-pumped dye lasers are known to yield average powers in the kilowatt regime.
Dye lasers are used in many applications including:
astronomy (as laser guide stars),
atomic vapor laser isotope separation
manufacturing
medicine
spectroscopy
In laser medicine these lasers are applied in several areas, including dermatology where they are used to make skin tone more even. The wide range of wavelengths possible allows very close matching to the absorption lines of certain tissues, such as melanin or hemoglobin, while the narrow bandwidth obtainable helps reduce the possibility of damage to the surrounding tissue. They are used to treat port-wine stains and other blood vessel disorders, scars and kidney stones. They can be matched to a variety of inks for tattoo removal, as well as a number of other applications.
In spectroscopy, dye lasers can be used to study the absorption and emission spectra of various materials. Their tunability, (from the near-infrared to the near-ultraviolet), narrow bandwidth, and high intensity allows a much greater diversity than other light sources. The variety of pulse widths, from ultra-short, femtosecond pulses to continuous-wave operation, makes them suitable for a wide range of applications, from the study of fluorescent lifetimes and semiconductor properties to lunar laser ranging experiments.
Tunable lasers are used in swept-frequency metrology to enable measurement of absolute distances with very high accuracy. A two axis interferometer is set up and by sweeping the frequency, the frequency of the light returning from the fixed arm is slightly different from the frequency returning from the distance measuring arm. This produces a beat frequency which can be detected and used to determine the absolute difference between the lengths of the two arms.
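The beat-frequency relationship described above can be sketched numerically. The snippet assumes an ideally linear frequency sweep and ignores dispersion; the sweep rate and beat frequency used are illustrative values, not measurements from the text.

```python
# Swept-frequency (FMCW-style) distance measurement: arm-length difference
# from the measured beat note, assuming a perfectly linear sweep.
c = 3.0e8  # speed of light, m/s

def arm_length_difference(beat_hz, sweep_rate_hz_per_s):
    """One-way arm-length difference inferred from the beat frequency.

    Light in the longer arm is delayed by tau = 2*d/c (round trip), so during
    a sweep its instantaneous frequency lags by f_beat = sweep_rate * tau.
    """
    tau = beat_hz / sweep_rate_hz_per_s
    return c * tau / 2.0

# Example: a 100 THz/s sweep producing a 1 MHz beat note
print(arm_length_difference(beat_hz=1.0e6, sweep_rate_hz_per_s=1.0e14))  # 1.5 m
```

Because the beat frequency scales linearly with the arm-length difference, a faster sweep or a more precise beat-frequency measurement both improve the distance resolution.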
| Technology | Lasers | null |
387750 | https://en.wikipedia.org/wiki/Separation%20of%20variables | Separation of variables | In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.
Ordinary differential equations (ODE)
A differential equation for the unknown function y(x) will be separable if it can be written in the form y′ = g(x)h(y), where g and h are given functions. This is perhaps more transparent when written in Leibniz notation as dy/dx = g(x)h(y).
So now, as long as h(y) ≠ 0, we can rearrange terms to obtain dy/h(y) = g(x) dx,
where the two variables x and y have been separated. Note dx (and dy) can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.
Alternative notation
Those who dislike Leibniz's notation may prefer to write this as
but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to x, we have
or equivalently,
because of the substitution rule for integrals.
If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.
(Note that we do not need to use two constants of integration, one on each side, because a single combined constant is equivalent.)
Example
Population growth is often modeled by the "logistic" differential equation dP/dt = rP(1 − P/K),
where P is the population with respect to time t, r is the rate of growth, and K is the carrying capacity of the environment.
Separation of variables now leads to
which is readily integrated using partial fractions on the left side yielding
where A is the constant of integration. A can be determined in terms of the initial population by evaluating the solution at t = 0.
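As a cross-check, a computer algebra system can carry out the same separation automatically. The snippet below is only an illustration; the symbols P, t, r and K follow the statement of the logistic equation above.

```python
# Symbolic check of the logistic example with SymPy (illustrative only).
import sympy as sp

t = sp.symbols('t')
r, K = sp.symbols('r K', positive=True)
P = sp.Function('P')

logistic = sp.Eq(P(t).diff(t), r * P(t) * (1 - P(t) / K))
print(sp.dsolve(logistic, P(t)))  # a closed form equivalent to P(t) = K / (1 + A*exp(-r*t))
```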
Generalization of separable ODEs to the nth order
Much like one can speak of a separable first-order ODE, one can speak of a separable second-order, third-order or nth-order ODE. Consider the separable first-order ODE:
The derivative can alternatively be written the following way to underscore that it is an operator working on the unknown function, y:
Thus, when one separates variables for first-order equations, one in fact moves the dx denominator of the operator to the side with the x variable, and the d(y) is left on the side with the y variable. The second-derivative operator, by analogy, breaks down as follows:
The third-, fourth- and nth-derivative operators break down in the same way. Thus, much like a first-order separable ODE is reducible to the form
a separable second-order ODE is reducible to the form
and an nth-order separable ODE is reducible to
Example
Consider a simple nonlinear second-order differential equation involving only y'' and y'. Because it is an equation only of y'' and y', it is reducible to the general form described above and is, therefore, separable. Since it is a second-order separable equation, collect all x terms on one side and all y' terms on the other. Now, integrate the right side with respect to x and the left with respect to y'. This gives an expression for y', which simplifies to a simple integral problem that yields the final answer.
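The displayed equations of this example were lost in extraction, so the sketch below works through one representative equation of the type described, y'' = (y')², which involves only y'' and y'; the choice of equation is an assumption made for illustration, not necessarily the one used in the original example.

```python
# An illustrative second-order separable equation, y'' = (y')**2, solved with SymPy.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2), y(x).diff(x) ** 2)
print(sp.dsolve(ode, y(x)))  # prints a general solution equivalent to y(x) = C1 - log(x + C2)
```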
Partial differential equations
The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, wave equation, Laplace equation, Helmholtz equation and biharmonic equation.
The analytical method of separation of variables for solving partial differential equations has also been generalized into a computational method of decomposition in invariant structures that can be used to solve systems of partial differential equations.
Example: homogeneous case
Consider the one-dimensional heat equation. The equation is
The variable u denotes temperature. The boundary condition is homogeneous, that is, u is required to vanish at both ends of the spatial interval for all times t.
Let us attempt to find a solution which is not identically zero satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x, t is separated, that is:
Substituting u back into the heat equation and using the product rule,
Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:
and
−λ here is the eigenvalue for both differential operators, and T(t) and X(x) are corresponding eigenfunctions.
We will now show that solutions for X(x) for values of λ ≤ 0 cannot occur:
Suppose that λ < 0. Then there exist real numbers B, C such that
From the boundary conditions we get
and therefore B = 0 = C which implies u is identically 0.
Suppose that λ = 0. Then there exist real numbers B, C such that
From the boundary conditions we conclude in the same manner as in 1 that u is identically 0.
Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that
and
From the boundary conditions we get C = 0 and that for some positive integer n,
This solves the heat equation in the special case that the dependence of u has the separated form assumed above.
In general, a sum of solutions which satisfy the boundary conditions also satisfies the heat equation and the boundary conditions. Hence a complete solution can be given as
where Dn are coefficients determined by initial condition.
Given the initial condition
we can get
This is the sine series expansion of f(x), which is amenable to Fourier analysis. Multiplying both sides by the corresponding sine eigenfunction and integrating over the spatial interval results in
This method requires that the eigenfunctions X, here , are orthogonal and complete. In general this is guaranteed by Sturm–Liouville theory.
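The separated solution above can also be evaluated numerically: it is a sine series whose coefficients come from the initial condition and whose modes decay at rates set by their eigenvalues. The domain length, diffusivity, initial profile and the convention that u vanishes at both ends are illustrative assumptions, since the displayed equations were not preserved in the text.

```python
# Numerical sketch of the separated solution of the heat equation:
# u(x, t) = sum_n D_n * sin(n*pi*x/L) * exp(-alpha*(n*pi/L)**2 * t)
import numpy as np

L, alpha, N = 1.0, 0.01, 50          # illustrative domain length, diffusivity, number of modes
x = np.linspace(0.0, L, 201)
f = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # an arbitrary initial temperature profile

# Fourier sine coefficients D_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx
n = np.arange(1, N + 1)
D = np.array([2.0 / L * np.trapz(f * np.sin(k * np.pi * x / L), x) for k in n])

def u(t):
    """Temperature profile at time t from the truncated sine series."""
    modes = np.sin(np.outer(n, np.pi * x / L))         # shape (N, len(x))
    decay = np.exp(-alpha * (n * np.pi / L) ** 2 * t)  # each mode decays at its own rate
    return (D * decay) @ modes

print(u(0.0).max(), u(5.0).max())  # the profile smooths out and decays over time
```

High-frequency modes decay fastest, so the profile smooths first and then decays overall, which matches the qualitative behaviour the analytic series predicts.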
Example: nonhomogeneous case
Suppose the equation is nonhomogeneous,
with the same homogeneous boundary condition as before.
Expand h(x,t), u(x,t) and f(x) into
where hn(t) and bn can be calculated by integration, while un(t) is to be determined.
Substituting these expansions back into the equation and considering the orthogonality of the sine functions, we get
which are a sequence of linear differential equations that can be readily solved with, for instance, Laplace transform, or Integrating factor. Finally, we can get
If the boundary condition is nonhomogeneous, then the expansions above are no longer valid. One has to find a function v that satisfies the boundary condition only, and subtract it from u. The function u−v then satisfies the homogeneous boundary condition, and can be solved with the above method.
Example: mixed derivatives
For some equations involving mixed derivatives, the equation does not separate as easily as the heat equation did in the first example above, but nonetheless separation of variables may still be applied. Consider the two-dimensional biharmonic equation
Proceeding in the usual manner, we look for solutions of the form
and we obtain the equation
Writing this equation in the form
Taking the derivative of this expression with respect to gives which means or and likewise, taking derivative with respect to leads to and thus or , hence either F(x) or G(y) must be a constant, say −λ. This further implies that either or are constant. Returning to the equation for X and Y, we have two cases
and
which can each be solved by considering the separate cases for and noting that .
Curvilinear coordinates
In orthogonal curvilinear coordinates, separation of variables can still be used, but in some details different from that in Cartesian coordinates. For instance, regularity or periodic condition may determine the eigenvalues in place of boundary conditions. See spherical harmonics for example.
Applicability
Partial differential equations
For many PDEs, such as the wave equation, Helmholtz equation and Schrödinger equation, the applicability of separation of variables is a result of the spectral theorem. In some cases, separation of variables may not be possible. Separation of variables may be possible in some coordinate systems but not others, and which coordinate systems allow for separation depends on the symmetry properties of the equation. Below is an outline of an argument demonstrating the applicability of the method to certain linear equations, although the precise method may differ in individual cases (for instance in the biharmonic equation above).
Consider an initial boundary value problem for a function on in two variables:
where is a differential operator with respect to and is a differential operator with respect to with boundary data:
for
for
where is a known function.
We look for solutions of the form . Dividing the PDE through by gives
The right hand side depends only on and the left hand side only on so both must be equal to a constant , which gives two ordinary differential equations
which we can recognize as eigenvalue problems for the operators for and . If is a compact, self-adjoint operator on the space along with the relevant boundary conditions, then by the Spectral theorem there exists a basis for consisting of eigenfunctions for . Let the spectrum of be and let be an eigenfunction with eigenvalue . Then for any function which at each time is square-integrable with respect to , we can write this function as a linear combination of the . In particular, we know the solution can be written as
For some functions . In the separation of variables, these functions are given by solutions to
Hence, the spectral theorem ensures that the separation of variables will (when it is possible) find all the solutions.
For many differential operators, such as , we can show that they are self-adjoint by integration by parts. While these operators may not be compact, their inverses (when they exist) may be, as in the case of the wave equation, and these inverses have the same eigenfunctions and eigenvalues as the original operator (with the possible exception of zero).
Matrices
The matrix form of the separation of variables is the Kronecker sum.
As an example we consider the 2D discrete Laplacian on a regular grid:
where and are 1D discrete Laplacians in the x- and y-directions, correspondingly, and are the identities of appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
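A small sketch of the Kronecker-sum construction, using SciPy's sparse matrices; the grid sizes and the Dirichlet second-difference stencil are illustrative choices rather than anything prescribed by the text.

```python
# 2D discrete Laplacian assembled as a Kronecker sum of 1D Laplacians.
import numpy as np
import scipy.sparse as sparse

def laplacian_1d(n):
    """Standard 1D second-difference matrix with Dirichlet boundaries."""
    main = 2.0 * np.ones(n)
    off = -np.ones(n - 1)
    return sparse.diags([off, main, off], offsets=[-1, 0, 1])

nx, ny = 4, 3
Lx, Ly = laplacian_1d(nx), laplacian_1d(ny)

# Kronecker sum: L2 = kron(I_ny, Lx) + kron(Ly, I_nx)
L2 = sparse.kron(sparse.identity(ny), Lx) + sparse.kron(Ly, sparse.identity(nx))
print(L2.shape)  # (12, 12): one row and column per grid point
```

The eigenvalues of a Kronecker sum are all pairwise sums of the 1D eigenvalues, which is the matrix analogue of the separated solutions discussed above.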
Software
Some mathematical programs are able to do separation of variables: Xcas among others.
| Mathematics | Differential equations | null |
387797 | https://en.wikipedia.org/wiki/Asphalt%20concrete | Asphalt concrete | Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac or bitumen macadam in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, and the core of embankment dams. Asphalt mixtures have been used in pavement construction since the nineteenth century. It consists of mineral aggregate bound together with bitumen (a substance also independently known as asphalt, pitch, or tar), laid in layers, and compacted.
The American English terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
History
Natural asphalt (Ancient Greek: ἄσφαλτος (ásphaltos)) has been known of and used since antiquity, in Mesopotamia, Phoenicia, Egypt, Babylon, Greece, Carthage, and Rome, to waterproof temple baths, reservoirs, aqueducts, tunnels, and moats, as a masonry mortar, to cork vessels, and to surface roads. The Procession Street of the Babylonian king Nabopolassar, c. 625 BC, leading north from his palace through the city's wall, was described as being constructed from burnt brick and asphalt. Cobbles covered and bonded with natural asphalt were used from 1824, in France, as a means of constructing roads. In 1829 natural Seyssel asphalt mixed with 7% aggregate, creating an asphalt-mastic surface, was used for a footpath at Pont Morand, Lyons, France; the technique spread to Paris in 1835, London, England, in 1836, and Philadelphia, USA, in 1838. A two-mile stretch of gravel road running out of Nottingham, and Huntingdon High Street, were experimentally covered in natural asphalt during the 1840s. The first macadam road surfaced with asphalt was constructed in 1852, between Paris and Perpignan, France, using Swiss Val de Travers rock asphalt (limestone aggregate covered with natural asphalt). In 1869, Threadneedle Street, in London, England, was resurfaced with Swiss Val de Travers rock asphalt. A process for surfacing a packed sand road by applying heated natural asphalt mixed with sand, in a ratio of 1:5, rolling it, and hardening it through the application of natural asphalt mixed with a petroleum oil was invented by the Belgian-American chemist Edward De Smedt at Columbia University in 1870; he obtained a pair of U.S. patents for the material and the method of hardening. Edgar Purnell Hooley, a civil engineer, surveyor, and member of an English county highway board, created a process and engine to combine a synthetic, refined petroleum tar and resin with macadam aggregates (gravel, portland cement, crushed rocks, and blast furnace slag) in a steam-heated mixer at 212 °F and, through a heated reservoir, conduits, and meshes, to produce a machine and material that could be applied to form a road surface, filing a UK patent for his improvement in 1902. Hooley founded a UK company to market the technology, and the term tar macadam, shortened to tarmac, was coined after the name of his company, Tar Macadam (Purnell Hooley's Patent) Syndicate Limited, derived from the combination of tar and macadam gravel composite mixtures.
Mixture formulations
Mixing of asphalt and aggregate is accomplished in one of several ways:
Hot-mix asphalt concrete (commonly abbreviated as HMA) This is produced by heating the asphalt binder to decrease its viscosity and drying the aggregate to remove moisture from it prior to mixing. Mixing is generally performed with the aggregate at about for virgin asphalt and for polymer modified asphalt, and the asphalt cement at . Paving and compaction must be performed while the asphalt is sufficiently hot. In many locales paving is restricted to summer months because in winter the base will cool the asphalt too quickly before it can be packed to the required density. HMA is the form of asphalt concrete most commonly used on high traffic pavements such as those on major highways, racetracks and airfields. It is also used as an environmental liner for landfills, reservoirs, and fish hatchery ponds.
Warm-mix asphalt concrete (commonly abbreviated as WMA) This is produced by adding either zeolites, waxes, asphalt emulsions or sometimes water to the asphalt binder prior to mixing. This allows significantly lower mixing and laying temperatures and results in lower consumption of fossil fuels, thus releasing less carbon dioxide, aerosols and vapors. This improves working conditions, and lowers laying-temperature, which leads to more rapid availability of the surface for use, which is important for construction sites with critical time schedules. The usage of these additives in hot-mixed asphalt (above) may afford easier compaction and allow cold-weather paving or longer hauls. Use of warm mix is rapidly expanding. A survey of US asphalt producers found that nearly 25% of asphalt produced in 2012 was warm mix, a 416% increase since 2009. Cleaner road pavements can be potentially developed by combining WMA and material recycling. Warm Mix Asphalt (WMA) technology has environmental, production, and economic benefits.
Cold-mix asphalt concrete This is produced by emulsifying the asphalt in water with an emulsifying agent before mixing with the aggregate. While in its emulsified state, the asphalt is less viscous and the mixture is easy to work and compact. The emulsion will break after enough water evaporates and the cold mix will, ideally, take on the properties of an HMA pavement. Cold mix is commonly used as a patching material and on lesser-trafficked service roads.
Cut-back asphalt concrete Is a form of cold mix asphalt produced by dissolving the binder in kerosene or another lighter fraction of petroleum before mixing with the aggregate. While in its dissolved state, the asphalt is less viscous and the mix is easy to work and compact. After the mix is laid down the lighter fraction evaporates. Because of concerns with pollution from the volatile organic compounds in the lighter fraction, cut-back asphalt has been largely replaced by asphalt emulsion.
Mastic asphalt concrete, or sheet asphalt This is produced by heating hard grade blown bitumen (i.e., partly oxidised) in a green cooker (mixer) until it has become a viscous liquid after which the aggregate mix is then added. The bitumen aggregate mixture is cooked (matured) for around 6–8 hours and once it is ready, the mastic asphalt mixer is transported to the work site where experienced layers empty the mixer and either machine or hand lay the mastic asphalt contents on to the road. Mastic asphalt concrete is generally laid to a thickness of around for footpath and road applications and around for flooring or roof applications.
High-modulus asphalt concrete, sometimes referred to by the French-language acronym EMÉ (enrobé à module élevé) This uses a very hard bituminous formulation (penetration 10/20), sometimes modified, in proportions close to 6% by weight of the aggregates, as well as a high proportion of mineral powder (between 8–10%) to create an asphalt concrete layer with a high modulus of elasticity (of the order of 13,000 MPa). This makes it possible to reduce the thickness of the base layer by up to 25% (depending on the temperature) in relation to conventional bitumen, while offering very high fatigue strength. High-modulus asphalt layers are used both in reinforcement operations and in the construction of new reinforcements for medium and heavy traffic. In base layers, they tend to exhibit a greater capacity for absorbing tensions and, in general, better fatigue resistance.
In addition to the asphalt and aggregate, additives, such as polymers, and antistripping agents may be added to improve the properties of the final product.
Areas paved with asphalt concrete—especially airport aprons—have been called "the tarmac" at times, despite not being constructed using the tarmacadam process.
A variety of specialty asphalt concrete mixtures have been developed to meet specific needs, such as stone-matrix asphalt, which is designed to ensure a strong wearing surface, or porous asphalt pavements, which are permeable and allow water to drain through the pavement for controlling storm water.
Roadway performance characteristics
Different types of asphalt concrete have different performance characteristics in roads in terms of surface durability, tire wear, braking efficiency and roadway noise. In principle, the determination of appropriate asphalt performance characteristics must take into account the volume of traffic in each vehicle category, and the performance requirements of the friction course. In general, the viscosity of asphalt allows it to conveniently form a convex surface, and a central apex to streets and roads to drain water to the edges. This is not, however, in itself an advantage over concrete, which has various grades of viscosity and can be formed into a convex road surface. Rather, it is the economy of asphalt concrete that renders it more frequently used. Concrete is found on interstate highways where maintenance is highly crucial.
Asphalt concrete generates less roadway noise than a Portland cement concrete surface, and is typically less noisy than chip seal surfaces. Because tire noise is generated through the conversion of kinetic energy to sound waves, more noise is produced as the speed of a vehicle increases. The notion that highway design might take into account acoustical engineering considerations, including the selection of the type of surface paving, arose in the early 1970s.
With regard to structural performance, the asphalt behaviour depends on a variety of factors including the material, loading and environmental condition. Furthermore, the performance of pavement varies over time. Therefore, the long-term behaviour of asphalt pavement is different from its short-term performance. The LTPP is a research program by the FHWA, which is specifically focusing on long-term pavement behaviour.
Degradation and restoration
Asphalt deterioration can include crocodile cracking, potholes, upheaval, raveling, bleeding, rutting, shoving, stripping, and grade depressions. In cold climates, frost heaves can crack asphalt even in one winter. Filling the cracks with bitumen is a temporary fix, but only proper compaction and drainage can slow this process.
Factors that cause asphalt concrete to deteriorate over time mostly fall into one of three categories: construction quality, environmental considerations, and traffic loads. Often, damage results from combinations of factors in all three categories.
Construction quality is critical to pavement performance. This includes the construction of utility trenches and appurtenances that are placed in the pavement after construction. Lack of compaction in the surface of the asphalt, especially on the longitudinal joint, can reduce the life of a pavement by 30 to 40%. Service trenches in pavements after construction have been said to reduce the life of the pavement by 50%, mainly due to the lack of compaction in the trench, and also because of water intrusion through improperly sealed joints.
Environmental factors include heat and cold, the presence of water in the subbase or subgrade soil underlying the pavement, and frost heaves.
High temperatures soften the asphalt binder, allowing heavy tire loads to deform the pavement into ruts. Paradoxically, high heat and strong sunlight also cause the asphalt to oxidize, becoming stiffer and less resilient, leading to crack formation. Cold temperatures can cause cracks as the asphalt contracts. Cold asphalt is also less resilient and more vulnerable to cracking.
Water trapped under the pavement softens the subbase and subgrade, making the road more vulnerable to traffic loads. Water under the road freezes and expands in cold weather, causing and enlarging cracks. In spring thaw, the ground thaws from the top down, so water is trapped between the pavement above and the still-frozen soil underneath. This layer of saturated soil provides little support for the road above, leading to the formation of potholes. This is more of a problem for silty or clay soils than sandy or gravelly soils. Some jurisdictions pass frost laws to reduce the allowable weight of trucks during the spring thaw season and protect their roads.
The damage a vehicle causes is roughly proportional to the axle load raised to the fourth power, so doubling the weight an axle carries actually causes 16 times as much damage. Wheels cause the road to flex slightly, resulting in fatigue cracking, which often leads to crocodile cracking. Vehicle speed also plays a role. Slowly moving vehicles stress the road over a longer period of time, increasing ruts, cracking, and corrugations in the asphalt pavement.
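The fourth-power relationship mentioned above is easy to illustrate numerically; the reference axle load used below is an illustrative choice, not a value given in the text.

```python
# Relative pavement damage under the fourth-power law described above.
def relative_damage(axle_load, reference_load=8.0):
    """Damage relative to a reference axle load (same units, e.g. tonnes)."""
    return (axle_load / reference_load) ** 4

print(relative_damage(16.0))  # doubling the load -> 16x the damage
print(relative_damage(4.0))   # halving the load  -> 1/16 of the damage
```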
Other causes of damage include heat damage from vehicle fires, or solvent action from chemical spills.
Prevention and repair of degradation
The life of a road can be prolonged through good design, construction and maintenance practices. During design, engineers measure the traffic on a road, paying special attention to the number and types of trucks. They also evaluate the subsoil to see how much load it can withstand. The pavement and subbase thicknesses are designed to withstand the wheel loads. Sometimes, geogrids are used to reinforce the subbase and further strengthen the roads. Drainage, including ditches, storm drains and underdrains are used to remove water from the roadbed, preventing it from weakening the subbase and subsoil.
Sealcoating asphalt is a maintenance measure that helps keep water and petroleum products out of the pavement.
Maintaining and cleaning ditches and storm drains will extend the life of the road at low cost. Sealing small cracks with bituminous crack sealer prevents water from enlarging cracks through frost weathering, or percolating down to the subbase and softening it.
For somewhat more distressed roads, a chip seal or similar surface treatment may be applied. As the number, width and length of cracks increases, more intensive repairs are needed. In order of generally increasing expense, these include thin asphalt overlays, multicourse overlays, grinding off the top course and overlaying, in-place recycling, or full-depth reconstruction of the roadway.
It is far less expensive to keep a road in good condition than it is to repair it once it has deteriorated. This is why some agencies place the priority on preventive maintenance of roads in good condition, rather than reconstructing roads in poor condition. Poor roads are upgraded as resources and budget allow. In terms of lifetime cost and long term pavement conditions, this will result in better system performance. Agencies that concentrate on restoring their bad roads often find that by the time they have repaired them all, the roads that were in good condition have deteriorated.
Some agencies use a pavement management system to help prioritize maintenance and repairs.
Recycling
Asphalt concrete is a recyclable material that can be reclaimed and reused both on-site and in asphalt plants. The most common recycled component in asphalt concrete is reclaimed asphalt pavement (RAP). RAP is recycled at a greater rate than any other material in the United States. Many roofing shingles also contain asphalt, and asphalt concrete mixes may contain reclaimed asphalt shingles (RAS). Research has demonstrated that RAP and RAS can replace the need for up to 100% of the virgin aggregate and asphalt binder in a mix, but this percentage is typically lower due to regulatory requirements and performance concerns. In 2019, new asphalt pavement mixtures produced in the United States contained, on average, 21.1% RAP and 0.2% RAS.
Recycling methods
Recycled asphalt components may be reclaimed and transported to an asphalt plant for processing and use in new pavements, or the entire recycling process may be conducted in-place. While in-place recycling typically occurs on roadways and is specific to RAP, recycling in asphalt plants may utilize RAP, RAS, or both. In 2019, an estimated 97.0 million tons of RAP and 1.1 million tons of RAS were accepted by asphalt plants in the United States.
RAP is typically received by plants after being milled on-site, but pavements may also be ripped out in larger sections and crushed in the plant. RAP millings are typically stockpiled at plants before being incorporated into new asphalt mixes. Prior to mixing, stockpiled millings may be dried and any that have agglomerated in storage may have to be crushed.
RAS may be received by asphalt plants as post-manufacturer waste directly from shingle factories, or they may be received as post-consumer waste at the end of their service life. Processing of RAS includes grinding the shingles and sieving the grinds to remove oversized particles. The grinds may also be screened with a magnetic sieve to remove nails and other metal debris. The ground RAS is then dried, and the asphalt cement binder can be extracted. For further information on RAS processing, performance, and associated health and safety concerns, see Asphalt Shingles.
In-place recycling methods allow roadways to be rehabilitated by reclaiming the existing pavement, remixing, and repaving on-site. In-place recycling techniques include rubblizing, hot in-place recycling, cold in-place recycling, and full-depth reclamation. For further information on in-place methods, see Road Surface.
Performance
During its service life, the asphalt cement binder, which makes up about 5–6% of a typical asphalt concrete mix, naturally hardens and becomes stiffer. This aging process primarily occurs due to oxidation, evaporation, exudation, and physical hardening. For this reason, asphalt mixes containing RAP and RAS are prone to exhibiting lower workability and increased susceptibility to fatigue cracking. These issues are avoidable if the recycled components are apportioned correctly in the mix. Practicing proper storage and handling, such as by keeping RAP stockpiles out of damp areas or direct sunlight, is also important in avoiding quality issues. The binder aging process may also produce some beneficial attributes, such as by contributing to higher levels of rutting resistance in asphalts containing RAP and RAS.
One approach to balancing the performance aspects of RAP and RAS is to combine the recycled components with virgin aggregate and virgin asphalt binder. This approach can be effective when the recycled content in the mix is relatively low, and has a tendency to work more effectively with soft virgin binders. A 2020 study found that the addition of 5% RAS to a mix with a soft, low-grade virgin binder significantly increased the mix's rutting resistance while maintaining adequate fatigue cracking resistance.
In mixes with higher recycled content, the addition of virgin binder becomes less effective, and rejuvenators may be used. Rejuvenators are additives that restore the physical and chemical properties of the aged binder. When conventional mixing methods are used in asphalt plants, the upper limit for RAP content before rejuvenators become necessary has been estimated at 50%. Research has demonstrated that the use of rejuvenators at optimal doses can allow for mixes with 100% recycled components to meet the performance requirements of conventional asphalt concrete.
Other recycled materials in asphalt concrete
Beyond RAP and RAS, a range of waste materials can be re-used in place of virgin aggregate, or as rejuvenators. Crumb rubber, generated from recycled tires, has been demonstrated to improve the fatigue resistance and flexural strength of asphalt mixes that contain RAP. In California, legislative mandates require the Department of Transportation to incorporate crumb rubber into asphalt paving materials. Other recycled materials that are actively included in asphalt concrete mixes across the United States include steel slag, blast furnace slag, and cellulose fibers.
Further research has been conducted to discover new forms of waste that may be recycled into asphalt mixes. A 2020 study conducted in Melbourne, Australia presented a range of strategies for incorporating waste materials into asphalt concrete. The strategies presented in the study include the use of plastics, particularly high-density polyethylene, in asphalt binders, and the use of glass, brick, ceramic, and marble quarry waste in place of traditional aggregate.
Rejuvenators may also be produced from recycled materials, including waste engine oil, waste vegetable oil, and waste vegetable grease.
Recently, discarded face masks have been incorporated into stone mastic.
| Technology | Building materials | null |
387924 | https://en.wikipedia.org/wiki/T-square | T-square | A T-square is a technical drawing instrument used by draftsmen primarily as a guide for drawing horizontal lines on a drafting table. The instrument is named after its resemblance to the letter T, with a long shaft called the "blade" and a short shaft called the "stock" or "head". T-squares are available in a range of sizes, with common lengths being , , , and .
In addition to drawing horizontal lines, a T-square can be used with a set square to draw vertical or diagonal lines. The T-square usually has a transparent edge made of plastic which should be free of nicks and cracks in order to provide smooth, straight lines.
T-squares are also used in various industries, such as construction. For example, drywall T-squares are typically made of aluminum and have a tongue, allowing them to be used for measuring and cutting drywall. In woodworking, higher-end table saws often have T-square fences attached to a rail on the front side of the table, providing improved accuracy and precision when cutting wood.
| Technology | Artist's and drafting tools | null |
389564 | https://en.wikipedia.org/wiki/Quantitative%20research | Quantitative research | Quantitative research is a research strategy that focuses on quantifying the collection and analysis of data. It is formed from a deductive approach where emphasis is placed on the testing of theory, shaped by empiricist and positivist philosophies.
Associated with the natural, applied, formal, and social sciences, this research strategy promotes the objective empirical investigation of observable phenomena to test and understand relationships. This is done through a range of quantifying methods and techniques, reflecting its broad utilization as a research strategy across differing academic disciplines.
There are several situations where quantitative research may not be the most appropriate or effective method to use:
1. When exploring in-depth or complex topics.
2. When studying subjective experiences and personal opinions.
3. When conducting exploratory research.
4. When studying sensitive or controversial topics.
The objective of quantitative research is to develop and employ mathematical models, theories, and hypotheses pertaining to phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and mathematical expression of quantitative relationships.
Quantitative data is any data that is in numerical form such as statistics, percentages, etc. The researcher analyses the data with the help of statistics and hopes the numbers will yield an unbiased result that can be generalized to some larger population. Qualitative research, on the other hand, inquires deeply into specific experiences, with the intention of describing and exploring meaning through text, narrative, or visual-based data, by developing themes exclusive to that set of participants.
Quantitative research is widely used in psychology, economics, demography, sociology, marketing, community health, health & human development, gender studies, and political science; and less frequently in anthropology and history. Research in mathematical sciences, such as physics, is also "quantitative" by definition, though this use of the term differs in context. In the social sciences, the term relates to empirical methods originating in both philosophical positivism and the history of statistics, in contrast with qualitative research methods.
Qualitative research produces information only on the particular cases studied, and any more general conclusions are only hypotheses. Quantitative methods can be used to verify which of such hypotheses are true. A comprehensive analysis of 1274 articles published in the top two American sociology journals between 1935 and 2005 found that roughly two-thirds of these articles used quantitative method.
Overview
Quantitative research is generally closely affiliated with ideas from 'the scientific method', which can include:
The generation of models, theories and hypotheses
The development of instruments and methods for measurement
Experimental control and manipulation of variables
Collection of empirical data
Modeling and analysis of data
Quantitative research is often contrasted with qualitative research, which purports to be focused more on discovering underlying meanings and patterns of relationships, including classifications of types of phenomena and entities, in a manner that does not involve mathematical models. Approaches to quantitative psychology were first modeled on quantitative approaches in the physical sciences by Gustav Fechner in his work on psychophysics, which built on the work of Ernst Heinrich Weber. Although a distinction is commonly drawn between qualitative and quantitative aspects of scientific investigation, it has been argued that the two go hand in hand. For example, based on analysis of the history of science, Kuhn concludes that "large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences". Qualitative research is often used to gain a general sense of phenomena and to form theories that can be tested using further quantitative research. For instance, in the social sciences qualitative research methods are often used to gain better understanding of such things as intentionality (from the speech response of the researchee) and meaning (why did this person/group say something and what did it mean to them?) (Kieron Yeoman).
Although quantitative investigation of the world has existed since people first began to record events or objects that had been counted, the modern idea of quantitative processes has its roots in Auguste Comte's positivist framework. Positivism emphasized the use of the scientific method through observation to empirically test hypotheses explaining and predicting what, where, why, how, and when phenomena occurred. Positivist scholars like Comte believed that only scientific methods, rather than previous spiritual explanations, could advance knowledge of human behavior.
Quantitative methods are an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative methods, reviews of the literature (including scholarly), interviews with experts and computer simulation, and which forms an extension of data triangulation.
Quantitative methods have limitations. These studies do not provide reasoning behind participants' responses, they often do not reach underrepresented populations, and they may span long periods in order to collect the data.
Use of statistics
Statistics is the most widely used branch of mathematics in quantitative research outside of the physical sciences, and also finds applications within the physical sciences, such as in statistical mechanics. Statistical methods are used extensively within fields such as economics, social sciences and biology. Quantitative research using statistical methods starts with the collection of data, based on the hypothesis or theory. Usually a large sample of data is collected – this would require verification, validation and recording before the analysis can take place. Software packages such as SPSS and R are typically used for this purpose. Causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. In the field of health, for example, researchers might measure and study the relationship between dietary intake and measurable physiological effects such as weight loss, controlling for other key variables such as exercise. Quantitatively based opinion surveys are widely used in the media, with statistics such as the proportion of respondents in favor of a position commonly reported. In opinion surveys, respondents are asked a set of structured questions and their responses are tabulated. In the field of climate science, researchers compile and compare statistics such as temperature or atmospheric concentrations of carbon dioxide.
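As a rough, hypothetical sketch of the workflow just described (the data below are simulated, and the variable names and effect sizes are illustrative assumptions, not figures from any study), a researcher might regress an outcome on the variable of interest while controlling for another variable:

```python
# Minimal sketch of a quantitative analysis along the lines of the dietary
# example above. All data are simulated; names and coefficients are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
calorie_deficit = rng.normal(300, 100, n)    # hypothetical daily intake reduction (kcal)
exercise_hours = rng.normal(3, 1, n)         # hypothetical weekly exercise (hours)
# Simulated outcome: weight loss depends on both predictors plus noise.
weight_loss = 0.01 * calorie_deficit + 0.5 * exercise_hours + rng.normal(0, 1, n)

# Regress weight loss on dietary intake while controlling for exercise.
X = sm.add_constant(np.column_stack([calorie_deficit, exercise_hours]))
result = sm.OLS(weight_loss, X).fit()
print(result.params)   # estimated intercept and coefficients
print(result.pvalues)  # p-values for each term
```

The same model could equally be fitted in SPSS or R; the point is only that the hypothesis, the measured variables, and the controls are specified before the statistical test is run.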
Empirical relationships and associations are also frequently studied by using some form of general linear model, non-linear model, or by using factor analysis. A fundamental principle in quantitative research is that correlation does not imply causation, although some such as Clive Granger suggest that a series of correlations can imply a degree of causality. This principle follows from the fact that it is always possible a spurious relationship exists for variables between which covariance is found in some degree. Associations may be examined between any combination of continuous and categorical variables using methods of statistics. Other data analytical approaches for studying causal relations can be performed with Necessary Condition Analysis (NCA), which outlines must-have conditions for the studied outcome variable.
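The caution that correlation does not imply causation can be made concrete with a toy simulation (entirely fabricated data, assuming a single hidden common driver): two variables that merely share a confounder correlate strongly even though neither causes the other.

```python
# Spurious-correlation demonstration with simulated data.
import numpy as np

rng = np.random.default_rng(1)
confounder = rng.normal(size=1000)                        # e.g. seasonal temperature
ice_cream_sales = 2.0 * confounder + rng.normal(size=1000)
drownings = 1.5 * confounder + rng.normal(size=1000)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation = {r:.2f}")  # strongly positive, yet neither variable causes the other
```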
Measurement
Views regarding the role of measurement in quantitative research are somewhat divergent. Measurement is often regarded as being only a means by which observations are expressed numerically in order to investigate causal relations or associations. However, it has been argued that measurement often plays a more important role in quantitative research. For example, Kuhn argued that, within quantitative research, measured results can turn out to be anomalous with respect to accepted theory, and that such anomalies, arising in the process of obtaining data, are themselves scientifically interesting, as the following passage illustrates:
When measurement departs from theory, it is likely to yield mere numbers, and their very neutrality makes them particularly sterile as a source of remedial suggestions. But numbers register the departure from theory with an authority and finesse that no qualitative technique can duplicate, and that departure is often enough to start a search (Kuhn, 1961, p. 180).
In classical physics, the theory and definitions which underpin measurement are generally deterministic in nature. In contrast, probabilistic measurement models known as the Rasch model and Item response theory models are generally employed in the social sciences. Psychometrics is the field of study concerned with the theory and technique for measuring social and psychological attributes and phenomena. This field is central to much quantitative research that is undertaken within the social sciences.
Quantitative research may involve the use of proxies as stand-ins for other quantities that cannot be directly measured. Tree-ring width, for example, is considered a reliable proxy of ambient environmental conditions such as the warmth of growing seasons or amount of rainfall. Although scientists cannot directly measure the temperature of past years, tree-ring width and other climate proxies have been used to provide a semi-quantitative record of average temperature in the Northern Hemisphere back to 1000 A.D. When used in this way, the proxy record (tree ring width, say) only reconstructs a certain amount of the variance of the original record. The proxy may be calibrated (for example, during the period of the instrumental record) to determine how much variation is captured, including whether both short and long term variation is revealed. In the case of tree-ring width, different species in different places may show more or less sensitivity to, say, rainfall or temperature: when reconstructing a temperature record there is considerable skill in selecting proxies that are well correlated with the desired variable.
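A minimal sketch of that calibration step, using simulated numbers in place of real tree-ring and instrumental data (the coefficients and sample size below are assumptions), would regress temperature on ring width over the overlap period and report how much of the variance the proxy captures:

```python
# Calibrating a hypothetical proxy against a simulated instrumental record.
import numpy as np

rng = np.random.default_rng(2)
true_temperature = rng.normal(15.0, 0.6, 150)                   # instrumental record, degrees C
ring_width = 0.8 * true_temperature + rng.normal(0, 0.4, 150)   # proxy responds partly to temperature

# Least-squares calibration: temperature ~ a * ring_width + b
a, b = np.polyfit(ring_width, true_temperature, 1)
reconstructed = a * ring_width + b

# Fraction of temperature variance the proxy actually captures (R squared)
r2 = np.corrcoef(reconstructed, true_temperature)[0, 1] ** 2
print(f"calibration: T ~ {a:.2f} * width + {b:.1f}, R^2 = {r2:.2f}")
```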
Relationship with qualitative methods
In most physical and biological sciences, the use of either quantitative or qualitative methods is uncontroversial, and each is used when appropriate. In the social sciences, particularly in sociology, social anthropology and psychology, the use of one or other type of method can be a matter of controversy and even ideology, with particular schools of thought within each discipline favouring one type of method and pouring scorn onto the other. The majority tendency throughout the history of social science, however, is to use eclectic approaches, combining both methods. Qualitative methods might be used to understand the meaning of the conclusions produced by quantitative methods. Using quantitative methods, it is possible to give precise and testable expression to qualitative ideas. This combination of quantitative and qualitative data gathering is often referred to as mixed-methods research.
Examples
Research that consists of the percentage amounts of all the elements that make up Earth's atmosphere.
Survey that concludes that the average patient has to wait two hours in the waiting room of a certain doctor before being selected.
An experiment in which group x was given two tablets of aspirin a day and group y was given two tablets of a placebo a day where each participant is randomly assigned to one or other of the groups. The numerical factors such as two tablets, percent of elements and the time of waiting make the situations and results quantitative.
In economics, quantitative research is used to analyze business enterprises and the factors contributing to the diversity of organizational structures and the relationships of firms with labour, capital and product markets.
| Physical sciences | Research methods | Basics and measurement |
389950 | https://en.wikipedia.org/wiki/Diesel%20locomotive | Diesel locomotive | A diesel locomotive is a type of railway locomotive in which the power source is a diesel engine. Several types of diesel locomotives have been developed, differing mainly in the means by which mechanical power is conveyed to the driving wheels. The most common are diesel–electric locomotives and diesel–hydraulic.
Early internal combustion locomotives and railcars used kerosene and gasoline as their fuel. Rudolf Diesel patented his first compression-ignition engine in 1898, and steady improvements to the design of diesel engines reduced their physical size and improved their power-to-weight ratios to a point where one could be mounted in a locomotive. Internal combustion engines only operate efficiently within a limited power band, and while low-power gasoline engines could be coupled to mechanical transmissions, the more powerful diesel engines required the development of new forms of transmission. This is because clutches would need to be very large at these power levels and would not fit in a standard -wide locomotive frame, or would wear too quickly to be useful.
The first successful diesel engines used diesel–electric transmissions, and by 1925 a small number of diesel locomotives of were in service in the United States. In 1930, Armstrong Whitworth of the United Kingdom delivered two locomotives using Sulzer-designed engines to Buenos Aires Great Southern Railway of Argentina. In 1933, diesel–electric technology developed by Maybach was used to propel the DRG Class SVT 877, a high-speed intercity two-car set, and went into series production with other streamlined car sets in Germany starting in 1935. In the United States, diesel–electric propulsion was brought to high-speed mainline passenger service in late 1934, largely through the research and development efforts of General Motors dating back to the late 1920s and advances in lightweight car body design by the Budd Company.
The economic recovery from World War II hastened the widespread adoption of diesel locomotives in many countries. They offered greater flexibility and performance than steam locomotives, as well as substantially lower operating and maintenance costs.
History
Adaptation for rail use
The earliest recorded example of the use of an internal combustion engine in a railway locomotive is the prototype designed by William Dent Priestman, which was examined by William Thomson, 1st Baron Kelvin in 1888 who described it as a "Priestman oil engine mounted upon a truck which is worked on a temporary line of rails to show the adaptation of a petroleum engine for locomotive purposes." In 1894, a two-axle machine built by Priestman Brothers was used on the Hull Docks. In 1896, an oil-engined railway locomotive was built for the Royal Arsenal in Woolwich, England, using an engine designed by Herbert Akroyd Stuart. It was not a diesel, because it used a hot-bulb engine (also known as a semi-diesel), but it was the precursor of the diesel.
Rudolf Diesel considered using his engine for powering locomotives in his 1893 book Theorie und Konstruktion eines rationellen Wärmemotors zum Ersatz der Dampfmaschine und der heute bekannten Verbrennungsmotoren (Theory and Construction of a Rational Heat Motor). However, the large size and poor power-to-weight ratio of early diesel engines made them unsuitable for propelling land-based vehicles. Therefore, the engine's potential as a railroad prime mover was not initially recognized. This changed as research and development reduced the size and weight of the engine.
In 1906, Rudolf Diesel, Adolf Klose and the steam and diesel engine manufacturer Gebrüder Sulzer founded Diesel-Sulzer-Klose GmbH to manufacture diesel-powered locomotives. Sulzer had been manufacturing diesel engines since 1898. The Prussian State Railways ordered a diesel locomotive from the company in 1909, and after test runs between Winterthur and Romanshorn, Switzerland, the diesel–mechanical locomotive was delivered in Berlin in September 1912. The world's first diesel-powered locomotive was operated in the summer of 1912 on the same line from Winterthur but was not a commercial success. During test runs in 1913 several problems were found. The outbreak of World War I in 1914 prevented all further trials. The locomotive weight was 95 tonnes and the power was with a maximum speed of .
Small numbers of prototype diesel locomotives were produced in a number of countries through the mid-1920s.
Early diesel locomotives and railcars in Asia
China
One of the first domestically developed diesel vehicles of China was the Dongfeng DMU (东风), produced in 1958 by CSR Sifang. Series production of China's first diesel locomotive class, the DFH1, began in 1964 following the construction of a prototype in 1959.
India
Japan
In Japan, starting in the 1920s, some petrol–electric railcars were produced. The first diesel–electric traction and the first air-streamed vehicles on Japanese rails were the two DMU3s of class Kiha 43000 (キハ43000系). Japan's first series of diesel locomotives was class DD50 (国鉄DD50形), twin locomotives, developed since 1950 and in service since 1953.
Early diesel locomotives and railcars in Europe
First functional diesel vehicles
In 1914, the world's first functional diesel–electric railcars were produced for the Königlich-Sächsische Staatseisenbahnen (Royal Saxon State Railways) by Waggonfabrik Rastatt with electric equipment from Brown, Boveri & Cie and diesel engines from Swiss Sulzer AG. They were classified as DET 1 and DET 2 (). Because of a shortage of petrol products during World War I, they remained unused for regular service in Germany. In 1922, they were sold to Swiss Compagnie du Chemin de fer Régional du Val-de-Travers, where they were used in regular service up to the electrification of the line in 1944. Afterwards, the company kept them in service as boosters until 1965.
Fiat claims to have built the first Italian diesel–electric locomotive in 1922, but little detail is available. Several Fiat-TIBB Bo'Bo' diesel locomotives were built for service on the narrow gauge Ferrovie Calabro Lucane and the Società per le Strade Ferrate del Mediterrano in southern Italy in 1926, following trials in 1924–25. The six-cylinder two-stroke motor produced at 500 rpm, driving four DC motors, one for each axle. These locomotives with top speed proved quite successful.
In 1924, two diesel–electric locomotives were taken in service by the Soviet railways, almost at the same time:
The engine Ээл2 (Eel2 original number Юэ 001/Yu-e 001) started on October 22. It had been designed by a team led by Yuri Lomonosov and built 1923–1924 by Maschinenfabrik Esslingen in Germany. It had five driving axles (1'E1'). After several test rides, it hauled trains for almost three decades from 1925 to 1954. It became a model for several classes of Soviet diesel locomotives.
The engine Щэл1 (Shch-el 1, original number Юэ2/Yu-e 2), started on November 9. It had been developed by Yakov Modestovich Gakkel and built by Baltic Shipyard in Saint Petersburg. It had ten driving axles in three bogies (1' Co' Do' Co' 1'). From 1925 to 1927, it hauled trains between Moscow and Kursk and in the Caucasus region. Afterwards, due to technical problems, it was taken out of service. From 1934, it was used as a stationary electric generator.
In 1935, Krauss-Maffei, MAN and Voith built the first diesel–hydraulic locomotive, called V 140, in Germany. Diesel–hydraulics became the mainstream type of diesel locomotive in Germany, as the German railways (DRG) were pleased with the performance of that locomotive. Serial production of diesel locomotives in Germany began after World War II.
Switchers
In many railway stations and industrial compounds, steam shunters had to be kept hot during many breaks between scattered short tasks. Therefore, diesel traction became economical for shunting before it became economical for hauling trains. The construction of diesel shunters began in 1920 in France, in 1925 in Denmark, in 1926 in the Netherlands, and in 1927 in Germany. After a few years of testing, hundreds of units were produced within a decade.
Diesel railcars for regional traffic
Diesel-powered or "oil-engined" railcars, generally diesel–mechanical, were developed by various European manufacturers in the 1930s, e.g. by William Beardmore and Company for the Canadian National Railways (the Beardmore Tornado engine was subsequently used in the R101 airship). Some of those series for regional traffic were begun with gasoline motors and then continued with diesel motors, such as the Hungarian BCmot class (the class code indicates only "railmotor with 2nd and 3rd class seats"), 128 cars built 1926–1937, or the German Wismar railbuses (57 cars 1932–1941). In France, the first diesel railcar was the Renault VH, 115 units produced 1933/34.
In Italy, after six gasoline railcars built from 1931, Fiat and Breda produced a large number of diesel railmotors: more than 110 from 1933 to 1938 and 390 from 1940 to 1953, including the Class 772, known as the Littorina, and the Class ALn 900.
High-speed railcars
In the 1930s, streamlined high-speed diesel railcars were developed in several countries:
In Germany, the Flying Hamburger was built in 1932. After a test ride in December 1932, this two-coach diesel railcar (in English terminology a DMU2) started service at the Deutsche Reichsbahn (DRG) in February 1933. It became the prototype of the DRG Class SVT 137, with 33 more high-speed DMUs built for the DRG until 1938: 13 DMU 2 ("Hamburg" series), 18 DMU 3 ("Leipzig" and "Köln" series), and two DMU 4 ("Berlin" series).
French SNCF classes XF 1000 and XF 1100 comprised 11 high-speed DMUs, also called TAR, built 1934–1939.
In Hungary, Ganz Works built the , a kind of luxurious railbus, in a series of seven units from 1934, and started to build the in 1944.
Further developments
In 1945, a batch of 30 Baldwin diesel–electric locomotives, Baldwin 0-6-6-0 1000, was delivered from the United States to the railways of the Soviet Union.
In 1947, the London, Midland and Scottish Railway (LMS) introduced the first of a pair of Co-Co diesel–electric locomotives (later British Rail Class D16/1) for regular use in the United Kingdom, although British manufacturers such as Armstrong Whitworth had been exporting diesel locomotives since 1930. Fleet deliveries to British Railways, of other designs such as Class 20 and Class 31, began in 1957.
Series production of diesel locomotives in Italy began in the mid-1950s. Generally, diesel traction in Italy was of less importance than in other countries, as Italy was among the most advanced countries in the electrification of its main lines and as Italian geography makes freight transport by sea cheaper than rail transport even on many domestic connections.
Early diesel locomotives and railcars in North America
Early North American developments
Adolphus Busch purchased the American manufacturing rights for the diesel engine in 1898 but never applied this new form of power to transportation. He founded the Busch-Sulzer company in 1911.
Only limited success was achieved in the early twentieth century with internal combustion engined railcars, due, in part, to difficulties with mechanical drive systems.
General Electric (GE) entered the railcar market in the early twentieth century, as Thomas Edison possessed a patent on the electric locomotive, his design actually being a type of electrically propelled railcar. GE built its first electric locomotive prototype in 1895. However, high electrification costs caused GE to turn its attention to internal combustion power to provide electricity for electric railcars. Problems related to co-ordinating the prime mover and electric motor were immediately encountered, primarily due to limitations of the Ward Leonard current control system that had been chosen. GE Rail was formed in 1907 and 112 years later, in 2019, was purchased by and merged with Wabtec.
A significant breakthrough occurred in 1914, when Hermann Lemp, a GE electrical engineer, developed and patented a reliable control system that controlled the engine and traction motor with a single lever; subsequent improvements were also patented by Lemp. Lemp's design solved the problem of overloading and damaging the traction motors with excessive electrical power at low speeds, and was the prototype for all internal combustion–electric drive control systems.
In 1917–1918, GE produced three experimental diesel–electric locomotives using Lemp's control design, the first known to be built in the United States. Following this development, the 1923 Kaufman Act banned steam locomotives from New York City, because of severe pollution problems. The response to this law was to electrify high-traffic rail lines. However, electrification was uneconomical to apply to lower-traffic areas.
The first regular use of diesel–electric locomotives was in switching (shunter) applications, which were more forgiving than mainline applications of the limitations of contemporary diesel technology and where the idling economy of diesel relative to steam would be most beneficial. GE entered a collaboration with the American Locomotive Company (ALCO) and Ingersoll-Rand (the "AGEIR" consortium) in 1924 to produce a prototype "boxcab" locomotive delivered in July 1925. This locomotive demonstrated that the diesel–electric power unit could provide many of the benefits of an electric locomotive without the railroad having to bear the sizeable expense of electrification. The unit was successfully demonstrated in switching and local freight and passenger service on ten railroads and three industrial lines. Westinghouse Electric and Baldwin collaborated to build switching locomotives starting in 1929. However, the Great Depression curtailed demand for Westinghouse's electrical equipment, and they stopped building locomotives internally, opting to supply electrical parts instead.
In June 1925, Baldwin Locomotive Works outshopped a prototype diesel–electric locomotive for "special uses" (such as for runs where water for steam locomotives was scarce) using electrical equipment from Westinghouse Electric Company. Its twin-engine design was not successful, and the unit was scrapped after a short testing and demonstration period. Industry sources were beginning to suggest "the outstanding advantages of this new form of motive power". In 1929, the Canadian National Railways became the first North American railway to use diesels in mainline service with two units, 9000 and 9001, from Westinghouse. However, these early diesels proved expensive and unreliable, with their high cost of acquisition relative to steam unable to be realized in operating cost savings as they were frequently out of service. It would be another five years before diesel–electric propulsion would be successfully used in mainline service, and nearly ten years before fully replacing steam became a real prospect with existing diesel technology.
Before diesel power could make inroads into mainline service, the limitations of diesel engines circa 1930 – low power-to-weight ratios and narrow output range – had to be overcome. A major effort to overcome those limitations was launched by General Motors after they moved into the diesel field with their acquisition of the Winton Engine Company, a major manufacturer of diesel engines for marine and stationary applications, in 1930. Supported by the General Motors Research Division, GM's Winton Engine Corporation sought to develop diesel engines suitable for high-speed mobile use. The first milestone in that effort was delivery in early 1934 of the Winton 201A, a two-stroke, mechanically aspirated, uniflow-scavenged, unit-injected diesel engine that could deliver the required performance for a fast, lightweight passenger train. The second milestone, and the one that got American railroads moving towards diesel, was the 1938 delivery of GM's Model 567 engine that was designed specifically for locomotive use, bringing a fivefold increase in life of some mechanical parts and showing its potential for meeting the rigors of freight service.
Diesel–electric railroad locomotion entered mainline service when the Burlington Route and Union Pacific used custom-built diesel "streamliners" to haul passengers, starting in late 1934. Burlington's Zephyr trainsets evolved from articulated three-car sets with 600 hp power cars in 1934 and early 1935, to the Denver Zephyr semi-articulated ten car trainsets pulled by cab-booster power sets introduced in late 1936. Union Pacific started diesel streamliner service between Chicago and Portland, Oregon, in June 1935, and in the following year would add Los Angeles, CA, Oakland, CA, and Denver, CO to the destinations of diesel streamliners out of Chicago. The Burlington and Union Pacific streamliners were built by the Budd Company and the Pullman-Standard Company, respectively, using the new Winton engines and power train systems designed by GM's Electro-Motive Corporation. EMC's experimental 1800 hp B-B locomotives of 1935 demonstrated the multiple-unit control systems used for the cab/booster sets and the twin-engine format used with the later Zephyr power units. Both of those features would be used in EMC's later production model locomotives. The lightweight diesel streamliners of the mid-1930s demonstrated the advantages of diesel for passenger service with breakthrough schedule times, but diesel locomotive power would not fully come of age until regular series production of mainline diesel locomotives commenced and it was shown suitable for full-size passenger and freight service.
First American series production locomotives
Following their 1925 prototype, the AGEIR consortium produced 25 more units of "60 ton" AGEIR boxcab switching locomotives between 1925 and 1928 for several New York City railroads, making them the first series-produced diesel locomotives. The consortium also produced seven twin-engine "100 ton" boxcabs and one hybrid trolley/battery unit with a diesel-driven charging circuit. ALCO acquired the McIntosh & Seymour Engine Company in 1929 and entered series production of and single-cab switcher units in 1931. ALCO would be the pre-eminent builder of switch engines through the mid-1930s and would adapt the basic switcher design to produce versatile and highly successful, albeit relatively low powered, road locomotives.
GM, seeing the success of the custom streamliners, sought to expand the market for diesel power by producing standardized locomotives under their Electro-Motive Corporation. In 1936, EMC's new factory started production of switch engines. In 1937, the factory started producing their new E series streamlined passenger locomotives, which would be upgraded with more reliable purpose-built engines in 1938. Seeing the performance and reliability of the new 567 model engine in passenger locomotives, EMC was eager to demonstrate diesel's viability in freight service.
Following the successful 1939 tour of EMC's FT demonstrator freight locomotive set, the stage was set for dieselization of American railroads. In 1941, ALCO-GE introduced the RS-1 road-switcher that occupied its own market niche while EMD's F series locomotives were sought for mainline freight service. The US entry into World War II slowed conversion to diesel; the War Production Board put a halt to building new passenger equipment and gave naval uses priority for diesel engine production. During the petroleum crisis of 1942–43, coal-fired steam had the advantage of not using fuel that was in critically short supply. EMD was later allowed to increase the production of its FT locomotives and ALCO-GE was allowed to produce a limited number of DL-109 road locomotives, but most in the locomotive business were restricted to making switch engines and steam locomotives.
In the early postwar era, EMD dominated the market for mainline locomotives with their E and F series locomotives. ALCO-GE in the late 1940s produced switchers and road-switchers that were successful in the short-haul market. However, EMD launched their GP series road-switcher locomotives in 1949, which displaced all other locomotives in the freight market including their own F series locomotives. GE subsequently dissolved its partnership with ALCO and would emerge as EMD's main competitor in the early 1960s, eventually taking the top position in the locomotive market from EMD.
Early diesel–electric locomotives in the United States used direct current (DC) traction motors but alternating current (AC) motors came into widespread use in the 1990s, starting with the Electro-Motive SD70MAC in 1993 and followed by General Electric's AC4400CW in 1994 and AC6000CW in 1995.
Early diesel locomotives and railcars in Oceania
The Trans-Australian Railway, built from 1912 to 1917 by Commonwealth Railways (CR), passes through 2,000 km of waterless (or salt-watered) desert terrain unsuitable for steam locomotives. The original engineer Henry Deane envisaged diesel operation to overcome such problems. Some have suggested that the CR worked with the South Australian Railways to trial diesel traction. However, the technology was not developed enough to be reliable.
As in Europe, the usage of internal combustion engines advanced more readily in self-propelled railcars than in locomotives:
Some Australian railway companies bought McKeen railmotors.
In the 1920s and 1930s, more reliable gasoline railmotors were built by Australian industries.
Australia's first diesel railcars were the NSWGR 100 Class (PH later DP) Silver City Comet cars in 1937.
High-speed vehicles by the standards of the day were the ten Vulcan railcars built in 1940 for New Zealand.
Transmission types
Diesel–mechanical
A diesel–mechanical locomotive uses a mechanical transmission in a fashion similar to that employed in most road vehicles. This type of transmission is generally limited to low-powered, low-speed shunting (switching) locomotives, lightweight multiple units and self-propelled railcars.
The mechanical transmissions used for railroad propulsion are generally more complex and much more robust than standard-road versions. There is usually a fluid coupling interposed between the engine and gearbox, and the gearbox is often of the epicyclic (planetary) type to permit shifting while under load. Various systems have been devised to minimise the break in transmission during gear changing, such as the S.S.S. (synchro-self-shifting) gearbox used by Hudswell Clarke.
Diesel–mechanical propulsion is limited by the difficulty of building a reasonably sized transmission capable of coping with the power and torque required to move a heavy train. A number of attempts to use diesel–mechanical propulsion in high power applications have been made (for example, the British Rail 10100 locomotive), though only few have proven successful (such as the DSB Class MF).
Diesel–electric
In a diesel–electric locomotive, the diesel engine drives either an electrical DC generator (generally, less than about 3,000 hp net for traction), or an electrical AC alternator-rectifier (generally 3,000 hp net or more for traction), the output of which provides power to the traction motors that drive the locomotive. There is no mechanical connection between the diesel engine and the wheels.
The important components of diesel–electric propulsion are the diesel engine (also known as the prime mover), the main generator/alternator-rectifier, traction motors (usually with four or six axles), and a control system consisting of the engine governor and electrical or electronic components, including switchgear, rectifiers and other components, which control or modify the electrical supply to the traction motors. In the most elementary case, the generator may be directly connected to the motors with only very simple switchgear.
Originally, the traction motors and generator were DC machines. Following the development of high-capacity silicon rectifiers in the 1960s, the DC generator was replaced by an alternator using a diode bridge to convert its output to DC. This advance greatly improved locomotive reliability and decreased generator maintenance costs by elimination of the commutator and brushes in the generator. Elimination of the brushes and commutator, in turn, eliminated the possibility of a particularly destructive type of event referred to as a flashover (also known as an arc fault), which could result in immediate generator failure and, in some cases, start an engine room fire.
Current North American practice is for four axles for high-speed passenger or "time" freight, or for six axles for lower-speed or "manifest" freight. The most modern units on "time" freight service tend to have six axles underneath the frame. Unlike those in "manifest" service, "time" freight units will have only four of the axles connected to traction motors, with the other two as idler axles for weight distribution.
In the late 1980s, the development of high-power variable-voltage/variable-frequency (VVVF) drives, or "traction inverters", allowed the use of polyphase AC traction motors, thereby also eliminating the motor commutator and brushes. The result is a more efficient and reliable drive that requires relatively little maintenance and is better able to cope with overload conditions that often destroyed the older types of motors.
Diesel–electric control
A diesel–electric locomotive's power output is independent of road speed, as long as the unit's generator current and voltage limits are not exceeded. Therefore, the unit's ability to develop tractive effort (also referred to as drawbar pull or tractive force, which is what actually propels the train) will tend to inversely vary with speed within these limits. (See power curve below). Maintaining acceptable operating parameters was one of the principal design considerations that had to be solved in early diesel–electric locomotive development and, ultimately, led to the complex control systems in place on modern units.
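The constant-power relationship can be illustrated with a short calculation (the power and starting-effort figures below are assumed round numbers, not the ratings of any particular locomotive): at a fixed throttle setting, tractive effort falls roughly in inverse proportion to speed, capped at low speed by the generator and adhesion limits.

```python
# Tractive effort versus speed at constant power, with an assumed low-speed cap.
def tractive_effort_newtons(power_watts, speed_m_per_s, max_effort_newtons):
    if speed_m_per_s <= 0:
        return max_effort_newtons
    return min(power_watts / speed_m_per_s, max_effort_newtons)

POWER = 2.4e6        # roughly 3,200 hp at the rail, assumed
MAX_EFFORT = 600e3   # assumed starting-effort cap, in newtons

for kmh in (5, 20, 40, 80, 120):
    v = kmh / 3.6
    te = tractive_effort_newtons(POWER, v, MAX_EFFORT)
    print(f"{kmh:>3} km/h -> {te / 1000:6.0f} kN")
```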
Throttle operation
The prime mover's power output is primarily determined by its rotational speed (RPM) and fuel rate, which are regulated by a governor or similar mechanism. The governor is designed to react to both the throttle setting, as determined by the engine driver and the speed at which the prime mover is running (see Control theory).
Locomotive power output, and therefore speed, is typically controlled by the engine driver using a stepped or "notched" throttle that produces binary-like electrical signals corresponding to throttle position. This basic design lends itself well to multiple unit (MU) operation by producing discrete conditions that assure that all units in a consist respond in the same way to throttle position. Binary encoding also helps to minimize the number of trainlines (electrical connections) that are required to pass signals from unit to unit. For example, only four trainlines are required to encode all possible throttle positions if there are up to 14 stages of throttling.
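The trainline arithmetic works because four on/off signals give sixteen combinations, enough for idle plus 14 power notches with room to spare. The sketch below uses a plain binary code and hypothetical wire names purely for illustration; the actual AAR trainline assignments and combination code differ.

```python
# Illustrative binary encoding of a throttle notch onto four on/off trainlines.
def encode_notch(notch: int) -> dict:
    """Return the state of four hypothetical trainlines for a throttle notch (0 = idle)."""
    if not 0 <= notch <= 15:
        raise ValueError("notch must be between 0 and 15")
    bits = [(notch >> i) & 1 for i in range(4)]
    return {f"trainline_{i + 1}": bool(b) for i, b in enumerate(bits)}

print(encode_notch(0))   # idle: all four lines de-energized
print(encode_notch(8))   # notch 8: only the fourth line energized (in this toy code)
```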
North American locomotives, such as those built by EMD or General Electric, have eight throttle positions or "notches" as well as a "reverser" to allow them to operate bi-directionally. Many UK-built locomotives have a ten-position throttle. The power positions are often referred to by locomotive crews depending upon the throttle setting, such as "run 3" or "notch 3".
In older locomotives, the throttle mechanism was ratcheted so that it was not possible to advance more than one power position at a time. The engine driver could not, for example, pull the throttle from notch 2 to notch 4 without stopping at notch 3. This feature was intended to prevent rough train handling due to abrupt power increases caused by rapid throttle motion ("throttle stripping", an operating rules violation on many railroads). Modern locomotives no longer have this restriction, as their control systems are able to smoothly modulate power and avoid sudden changes in train loading regardless of how the engine driver operates the controls.
When the throttle is in the idle position, the prime mover receives minimal fuel, causing it to idle at low RPM. In addition, the traction motors are not connected to the main generator and the generator's field windings are not excited (energized) – the generator does not produce electricity without excitation. Therefore, the locomotive will be in "neutral". Conceptually, this is the same as placing an automobile's transmission into neutral while the engine is running.
To set the locomotive in motion, the reverser control handle is placed into the correct position (forward or reverse), the brake is released and the throttle is moved to the run 1 position (the first power notch). An experienced engine driver can accomplish these steps in a coordinated fashion that will result in a nearly imperceptible start. The positioning of the reverser and movement of the throttle together is conceptually like shifting an automobile's automatic transmission into gear while the engine is idling.
Placing the throttle into the first power position will cause the traction motors to be connected to the main generator and the latter's field coils to be excited. With excitation applied, the main generator will deliver electricity to the traction motors, resulting in motion. If the locomotive is running "light" (that is, not coupled to the rest of a train) and is not on an ascending grade, it will easily accelerate. On the other hand, if a long train is being started, the locomotive may stall as soon as some of the slack has been taken up, as the drag imposed by the train will exceed the tractive force being developed. An experienced engine driver will be able to recognize an incipient stall and will gradually advance the throttle as required to maintain the pace of acceleration.
As the throttle is moved to higher power notches, the fuel rate to the prime mover will increase, resulting in a corresponding increase in RPM and horsepower output. At the same time, main generator field excitation will be proportionally increased to absorb the higher power. This will translate into increased electrical output to the traction motors, with a corresponding increase in tractive force. Eventually, depending on the requirements of the train's schedule, the engine driver will have moved the throttle to the position of maximum power and will maintain it there until the train has accelerated to the desired speed.
The propulsion system is designed to produce maximum traction motor torque at start-up, which explains why modern locomotives are capable of starting trains weighing in excess of 15,000 tons, even on ascending grades.
Current technology allows a locomotive to develop as much as 30% of its loaded driver weight in tractive force, amounting to of tractive force for a large, six-axle freight (goods) unit.
In fact, a consist of such units can produce more than enough drawbar pull at start-up to damage or derail cars (if on a curve) or break couplers (the latter being referred to in North American railroad slang as "jerking a lung"). Therefore, it is incumbent upon the engine driver to carefully monitor the amount of power being applied at start-up to avoid damage. In particular, "jerking a lung" could be a calamitous matter if it were to occur on an ascending grade, except that the safety inherent in the correct operation of fail-safe automatic train brakes installed in wagons today prevents runaway trains by automatically applying the wagon brakes when train line air pressure drops.
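As a back-of-the-envelope check of the adhesion figure quoted above (the locomotive mass used here is an assumed round number, not a value from this article), starting tractive force is simply the adhesion factor multiplied by the weight on the driving wheels:

```python
# Starting tractive force from an assumed adhesion factor and locomotive mass.
G = 9.81  # gravitational acceleration, m/s^2

def starting_tractive_force_kn(mass_tonnes: float, adhesion_factor: float = 0.30) -> float:
    weight_newtons = mass_tonnes * 1000 * G
    return adhesion_factor * weight_newtons / 1000

# An assumed 195 t six-axle unit at 30% adhesion yields roughly 574 kN.
print(f"{starting_tractive_force_kn(195):.0f} kN")
```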
Propulsion system operation
A locomotive's control system is designed so that the main generator electrical power output is matched to any given engine speed. Given the innate characteristics of traction motors, as well as the way in which the motors are connected to the main generator, the generator will produce high current and low voltage at low locomotive speeds, gradually changing to low current and high voltage as the locomotive accelerates. Therefore, the net power produced by the locomotive will remain constant for any given throttle setting (see power curve graph for notch 8).
In older designs, the prime mover's governor and a companion device, the load regulator, play a central role in the control system. The governor has two external inputs: requested engine speed, determined by the engine driver's throttle setting, and actual engine speed (feedback). The governor has two external control outputs: fuel injector setting, which determines the engine fuel rate, and current regulator position, which affects main generator excitation. The governor also incorporates a separate overspeed protective mechanism that will immediately cut off the fuel supply to the injectors and sound an alarm in the cab in the event the prime mover exceeds a defined RPM. Not all of these inputs and outputs are necessarily electrical.
As the load on the engine changes, its rotational speed will also change. This is detected by the governor through a change in the engine speed feedback signal. The net effect is to adjust both the fuel rate and the load regulator position so that engine RPM and torque (and therefore power output) will remain constant for any given throttle setting, regardless of actual road speed.
In newer designs controlled by a "traction computer," each engine speed step is allotted an appropriate power output, or "kW reference", in software. The computer compares this value with actual main generator power output, or "kW feedback", calculated from traction motor current and main generator voltage feedback values. The computer adjusts the feedback value to match the reference value by controlling the excitation of the main generator, as described above. The governor still has control of engine speed, but the load regulator no longer plays a central role in this type of control system. However, the load regulator is retained as a "back-up" in case of engine overload. Modern locomotives fitted with electronic fuel injection (EFI) may have no mechanical governor; however, a "virtual" load regulator and governor are retained with computer modules.
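A highly simplified sketch of that kW reference / kW feedback loop is given below. The generator model, gain, and figures are invented for illustration; the point is only that excitation is trimmed until measured electrical power matches the target for the selected notch.

```python
# Toy closed-loop power regulation: adjust excitation until feedback matches reference.
def regulate_power(kw_reference: float, motor_current_a: float, steps: int = 1000) -> float:
    excitation = 0.1                                    # normalized field excitation, 0..1
    kw_feedback = 0.0
    for _ in range(steps):
        generator_volts = 1200.0 * excitation           # crude linear generator model
        kw_feedback = generator_volts * motor_current_a / 1000.0
        error = kw_reference - kw_feedback
        excitation = min(1.0, max(0.0, excitation + 1e-5 * error))  # integral-style trim
    return kw_feedback

# e.g. a notch calling for 2,000 kW while 2,500 A flows to the traction motors
print(round(regulate_power(2000.0, 2500.0)))  # settles at approximately 2000
```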
Traction motor performance is controlled either by varying the DC voltage output of the main generator, for DC motors, or by varying the frequency and voltage output of the VVVF for AC motors. With DC motors, various connection combinations are utilized to adapt the drive to varying operating conditions.
At standstill, main generator output is initially low voltage/high current, often in excess of 1000 amperes per motor at full power. When the locomotive is at or near standstill, current flow will be limited only by the DC resistance of the motor windings and interconnecting circuitry, as well as the capacity of the main generator itself. Torque in a series-wound motor is approximately proportional to the square of the current. Hence, the traction motors will produce their highest torque, causing the locomotive to develop maximum tractive effort, enabling it to overcome the inertia of the train. This effect is analogous to what happens in an automobile automatic transmission at start-up, where it is in first gear and thereby producing maximum torque multiplication.
As the locomotive accelerates, the now-rotating motor armatures will start to generate a counter-electromotive force (back EMF, meaning the motors are also trying to act as generators), which will oppose the output of the main generator and cause traction motor current to decrease. Main generator voltage will correspondingly increase in an attempt to maintain motor power but will eventually reach a plateau. At this point, the locomotive will essentially cease to accelerate, unless on a downgrade. Since this plateau will usually be reached at a speed substantially less than the maximum that may be desired, something must be done to change the drive characteristics to allow continued acceleration. This change is referred to as "transition", a process that is analogous to shifting gears in an automobile.
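A toy motor model makes the effect visible (all constants below are illustrative assumptions, and a real series-wound motor's back EMF also depends on its field current): as speed rises, back EMF grows, current falls, and torque, which varies roughly with the square of the current, collapses, which is the point at which transition is needed.

```python
# Toy DC traction motor: current and torque versus speed under a fixed supply voltage.
def motor_state(speed_kmh: float, supply_volts: float = 600.0,
                resistance_ohm: float = 0.4, emf_per_kmh: float = 5.0):
    back_emf = min(supply_volts, emf_per_kmh * speed_kmh)   # grows with speed
    current = (supply_volts - back_emf) / resistance_ohm
    torque = current ** 2 / 1000.0                           # arbitrary scale, roughly ~ I^2
    return current, torque

for v in (0, 20, 40, 60, 80, 100):
    i, t = motor_state(v)
    print(f"{v:>3} km/h: {i:6.0f} A, torque ~ {t:6.0f}")
```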
Transition methods include:
Series / Parallel or "motor transition".
Initially, pairs of motors are connected in series across the main generator. At higher speed, motors are reconnected in parallel across the main generator.
"Field shunting", "field diverting", or "weak fielding".
Resistance is connected in parallel with the motor field. This has the effect of increasing the armature current, producing a corresponding increase in motor torque and speed.
Both methods may also be combined, to increase the operating speed range.
Generator / rectifier transition
Reconnecting the two separate internal main generator stator windings of two rectifiers from parallel to series to increase the output voltage.
In older locomotives, it was necessary for the engine driver to manually execute transition by use of a separate control. As an aid to performing transition at the right time, the load meter (an indicator that shows the engine driver how much current is being drawn by the traction motors) was calibrated to indicate at which points forward or backward transition should take place. Automatic transition was subsequently developed to produce better-operating efficiency and to protect the main generator and traction motors from overloading from improper transition.
Modern locomotives incorporate traction alternators whose AC output is rectified to DC and which are capable of delivering 1,200 volts (earlier DC traction generators were capable of delivering only 600 volts). This improvement was accomplished largely through improvements in silicon diode technology. With the capability of delivering 1,200 volts to the traction motors, the need for "transition" was eliminated.
Dynamic braking
A common option on diesel–electric locomotives is dynamic (rheostatic) braking.
Dynamic braking takes advantage of the fact that the traction motor armatures are always rotating when the locomotive is in motion and that a motor can be made to act as a generator by separately exciting the field winding. When dynamic braking is used, the traction control circuits are configured as follows:
The field winding of each traction motor is connected across the main generator.
The armature of each traction motor is connected across a forced-air-cooled resistance grid (the dynamic braking grid) in the roof of the locomotive's hood.
The prime mover rotational speed is increased, and the main generator field is excited, causing a corresponding excitation of the traction motor fields.
The aggregate effect of the above is to cause each traction motor to generate electric power and dissipate it as heat in the dynamic braking grid. A fan connected across the grid provides forced-air cooling. Consequently, the fan is powered by the output of the traction motors and will tend to run faster and produce more airflow as more energy is applied to the grid.
Ultimately, the source of the energy dissipated in the dynamic braking grid is the motion of the locomotive as imparted to the traction motor armatures. Therefore, the traction motors impose drag and the locomotive acts as a brake. As speed decreases, the braking effect decays and usually becomes ineffective below approximately 16 km/h (10 mph), depending on the gear ratio between the traction motors and axles.
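The heat load involved can be estimated directly: the retarding force times the train speed is the power the braking grid must dissipate (the figures below are assumed, and in reality the available braking force itself also fades at low speed, consistent with the cut-off noted above).

```python
# Power dissipated in the dynamic-braking grid for an assumed constant retarding force.
def grid_power_megawatts(braking_force_kn: float, speed_kmh: float) -> float:
    return braking_force_kn * 1000.0 * (speed_kmh / 3.6) / 1e6

for kmh in (60, 30, 10):
    print(f"{kmh:>2} km/h: {grid_power_megawatts(200, kmh):.1f} MW into the grid")
```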
Dynamic braking is particularly beneficial when operating in mountainous regions, where there is always the danger of a runaway due to overheated friction brakes during descent. In such cases, dynamic brakes are usually applied in conjunction with the air brakes, the combined effect being referred to as blended braking. The use of blended braking can also assist in keeping the slack in a long train stretched as it crests a grade, helping to prevent a "run-in", an abrupt bunching of train slack that can cause a derailment. Blended braking is also commonly used with commuter trains to reduce wear and tear on the mechanical brakes that is a natural result of the numerous stops such trains typically make during a run.
Electro-diesel
These special locomotives can operate as an electric locomotive or as a diesel locomotive. The Long Island Rail Road, Metro-North Railroad and New Jersey Transit Rail Operations operate dual-mode diesel–electric/third-rail (catenary on NJTransit) locomotives between non-electrified territory and New York City because of a local law banning diesel-powered locomotives in Manhattan tunnels. For the same reason, Amtrak operates a fleet of dual-mode locomotives in the New York area. British Rail operated dual diesel–electric/electric locomotives designed to run primarily as electric locomotives with reduced power available when running on diesel power. This allowed railway yards to remain unelectrified, as the third rail power system is extremely hazardous in a yard area.
Diesel–hydraulic
Diesel–hydraulic locomotives use one or more torque converters, in combination with fixed ratio gears. Drive shafts and gears form the final drive to convey the power from the torque converters to the wheels and to effect reverse. The difference between hydraulic and mechanical systems is where the speed and torque are adjusted. In a mechanical transmission with multiple ratios, such as a gearbox, any hydraulic section serves only to allow the engine to keep running when the train is too slow or stopped. In the hydraulic system, hydraulics are the primary means of adapting engine speed and torque to the train's situation, with gear selection reserved for limited use, such as reverse gear.
Hydrostatic transmission
Hydrostatic hydraulic drive systems have been applied to rail use. Modern examples include shunting locomotives by Cockerill (Belgium) and 4 to 12 tonne narrow gauge industrial locomotives by the Atlas Copco subsidiary GIA. Hydrostatic drives are also utilised in railway maintenance machines (tampers, rail grinders).
Application of hydrostatic transmissions is generally limited to small shunting locomotives and rail maintenance equipment, as well as being used for non-tractive applications in diesel engines such as drives for traction motor fans.
Hydrokinetic transmission
Hydrokinetic transmission (also called hydrodynamic transmission) uses a torque converter. A torque converter consists of three main parts, two of which rotate, and one (the stator) that has a lock preventing backwards rotation and adding output torque by redirecting the oil flow at low output rotational speeds. All three main parts are sealed in an oil-filled housing. To match engine speed to load speed over the entire speed range of a locomotive some additional method is required to give sufficient range. One method is to follow the torque converter with a mechanical gearbox which switches ratios automatically, similar to an automatic transmission in an automobile. Another method is to provide several torque converters each with a range of variability covering part of the total required; all the torque converters are mechanically connected all the time, and the appropriate one for the speed range required is selected by filling it with oil and draining the others. The filling and draining is carried out with the transmission under load, and results in very smooth range changes with no break in the transmitted power.
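The converter-selection logic can be pictured as a simple lookup (the speed ranges and names below are invented for illustration): all converters remain mechanically connected, and only the one matching the current speed range is filled with oil.

```python
# Hypothetical speed ranges for a three-converter hydrokinetic transmission.
CONVERTER_RANGES_KMH = {"starting": (0, 30), "intermediate": (30, 70), "high": (70, 140)}

def filled_converter(speed_kmh: float) -> str:
    """Return the name of the converter that should be filled at this speed."""
    for name, (low, high) in CONVERTER_RANGES_KMH.items():
        if low <= speed_kmh < high:
            return name
    return "high"  # stay in the top range above its nominal limit

for v in (5, 45, 90):
    print(v, "km/h ->", filled_converter(v))
```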
Locomotives
Whilst diesel–electric (DE) locomotives were chosen in most of the world, a few countries turned toward diesel–hydraulic (DH) locomotives instead, most notably Germany, Finland and Japan, as well as Britain for a time.
The reasons for this were multiple, two of the more notable being that whilst most DH locomotives achieved about the same drivetrain efficiency as DEs, around 85% (with some early British designs being the exception), they could at the same time be built noticeably lighter for the same total power output. This was because the hydraulic transmission did not weigh nearly as much as the combination of generator(s) and multiple electric traction motors needed on a DE.
The second notable advantage of DH locomotives, which lasted until the introduction of modern traction control systems, was increased adhesion/traction per unit of weight. Normally on a DE locomotive every powered axle on a bogie has its own separate traction motor with no linkage between axles, so if one wheel loses grip and slips, that axle can spin faster independently of the others, causing a significant loss of overall traction. By contrast, on a DH locomotive all the axles on each bogie are linked together via coupled drive shafts, so no single axle can begin to spin faster on its own should its wheels hit a slippery spot, which greatly helps traction. Prior to the introduction of effective traction control systems, this technical difference alone could contribute anywhere between a 15–33% increase in the factor of adhesion for a diesel–hydraulic versus a diesel–electric locomotive.
These two advantages were among the main reasons why, in the 1960s, three major US railroad companies, including Southern Pacific, initially expressed great interest in diesel–hydraulic locomotive designs, eventually leading to the order and purchase of several West German ML4000 DH locomotives built specifically for the US by the firm Krauss-Maffei. Reliability problems with these machines during high-altitude operations with SP in the US, as well as the advent of domestic diesel engines of similar power levels coupled with an industry better suited to supporting diesel–electric powertrains, meant that interest in diesel–hydraulics eventually faded away in the US.
In Germany and Finland, however, diesel–hydraulic systems achieved very high reliability in operation, similar to or even better than that of DEs, which, coupled with the aforementioned technical advantages, helped make them the more popular type of diesel locomotive in these countries for a long time. Meanwhile, in the UK the diesel–hydraulic principle gained a more mixed reputation.
By the 21st century, the majority of countries used diesel–electric designs for diesel locomotive traction, with diesel–hydraulic designs rarely found in use outside Germany, Finland, Japan and some neighbouring states, where they are used in designs for freight work.
Diesel–hydraulic locomotives have a smaller market share than diesel electrics – the main worldwide user of main-line hydraulic transmissions has been the Federal Republic of Germany, with designs including the 1950s DB class V 200, and the 1960 and 1970s DB Class V 160 family. British Rail introduced a number of diesel–hydraulic designs during its 1955 Modernisation Plan, initially license-built versions of German designs (see Category:Diesel–hydraulic locomotives of Great Britain). In Spain, Renfe used high power to weight ratio twin-engine German designs to haul high speed trains from the 1960s to 1990s. (See Renfe Classes 340, 350, 352, 353, 354)
Other main-line locomotives of the post-war period included the 1950s GMD GMDH-1 experimental locomotives; the Henschel & Son built South African Class 61-000; in the 1960s Southern Pacific bought 18 Krauss-Maffei KM ML-4000 diesel–hydraulic locomotives. The Denver & Rio Grande Western Railroad also bought three, all of which were later sold to SP.
In Finland, over 200 Finnish-built VR class Dv12 and Dr14 diesel–hydraulics with Voith transmissions have been in continuous use since the early 1960s. All units of the Dr14 class and most units of the Dv12 class are still in service. VR has withdrawn some poor-condition units of the 2700 series Dv12s.
In the 21st century series production standard gauge diesel–hydraulic designs include the Voith Gravita, ordered by Deutsche Bahn, and the Vossloh G2000 BB, G1206 and G1700 designs, all manufactured in Germany for freight use.
Multiple units
Diesel–hydraulic drive is common in multiple units, with various transmission designs used including Voith torque converters, and fluid couplings in combination with mechanical gearing.
The majority of British Rail's second generation passenger DMU stock used hydraulic transmission. In the 21st century, designs using hydraulic transmission include Bombardier Turbostar, Talent, RegioSwinger families; diesel engined versions of the Siemens Desiro platform, and the Stadler Regio-Shuttle.
Diesel–steam
Steam–diesel hybrid locomotives can use steam generated from a boiler or diesel to power a piston engine. The Cristiani Compressed Steam System used a diesel engine to power a compressor to drive and recirculate steam produced by a boiler; effectively using steam as the power transmission medium, with the diesel engine being the prime mover.
Diesel–pneumatic
The diesel–pneumatic locomotive was of interest in the 1930s because it offered the possibility of converting existing steam locomotives to diesel operation. The frame and cylinders of the steam locomotive would be retained and the boiler would be replaced by a diesel engine driving an air compressor. The problem was low thermal efficiency, with the air compressor losing much heat to the environment. Attempts were made to compensate for this by using the diesel exhaust to re-heat the compressed air but these had limited success. A German proposal of 1929 did result in a prototype but a similar British proposal of 1932, to use an LNER Class R1 locomotive, never got beyond the design stage.
Multiple-unit operation
Most diesel locomotives are capable of multiple-unit operation (MU) as a means of increasing horsepower and tractive effort when hauling heavy trains. All North American locomotives, including export models, use a standardized AAR electrical control system interconnected by a 27-pin MU cable between the units. For UK-built locomotives, a number of incompatible control systems are used, but the most common is the Blue Star system, which is electro-pneumatic and fitted to most early diesel classes. A small number of types, typically higher-powered locomotives intended for passenger-only work, are not fitted with multiple-unit control systems. In all cases, the electrical control connections made common to all units in a consist are referred to as trainlines.
The result is that all locomotives in a consist behave as one in response to the engine driver's control movements.
The ability to couple diesel–electric locomotives in an MU fashion was first introduced in the EMC EA/EB of 1937. Electrical interconnections were made so one engine driver could operate the entire consist from the head-end unit.
In mountainous regions, it is common to interpose helper locomotives in the middle of the train, both to provide the extra power needed to ascend a grade and to limit the amount of stress applied to the draft gear of the car coupled to the head-end power. The helper units in such distributed power configurations are controlled from the lead unit's cab through coded radio signals. Although this is technically not an MU configuration, the behaviour is the same as with physically interconnected units.
Cab arrangements
Cab arrangements vary by builder and operator. Practice in the U.S. has traditionally been for a cab at one end of the locomotive with limited visibility if the locomotive is not operated cab forward. This is not usually a problem as U.S. locomotives are usually operated in pairs, or threes, and arranged so that a cab is at each end of each set. European practice is usually for a cab at each end of the locomotive as trains are usually light enough to operate with one locomotive. Early U.S. practice was to add power units without cabs (booster or B units) and the arrangement was often A-B, A-A, A-B-A, A-B-B, or A-B-B-A where A was a unit with a cab. Center cabs were sometimes used for switch locomotives.
Cow–calf
In North American railroading, a cow–calf set is a pair of switcher-type locomotives: one (the cow) equipped with a driving cab, the other (the calf) without a cab, such that the pair can be controlled from the single cab. This arrangement is also known as master–slave. Cow–calf sets are used in heavy switching and hump yard service. Some are radio-controlled without an operating engineer present in the cab. Where two connected units were present, EMD called these TR-2s (with approximately ); where three units, TR-3s (with approximately ).
Cow–calves have largely disappeared as these engine combinations exceeded their economic lifetimes many years ago.
Present North American practice is to pair two 3,000 hp GP40-2 or SD40-2 road switchers, often nearly worn out and soon due for rebuilding or scrapping, and to use them for so-called transfer duties, for which the TR-2, TR-3 and TR-4 engines were originally intended (hence the designation TR, for transfer).
Occasionally, the second unit may have its prime mover and traction alternator removed and replaced by concrete or steel ballast and the power for traction obtained from a master unit. As a 16-cylinder prime mover generally weighs in the range, and a 3,000 hp traction alternator generally weighs in the range, around would be needed for ballast.
A pair of fully capable Dash 2 units would be rated . A Dash 2 pair in which only one had a prime mover and alternator would be rated , with all power provided by the master, while the combination benefits from the tractive effort provided by the slave, as engines in transfer service are seldom called upon to provide 3,000 hp, much less 6,000 hp, continuously.
Fittings and appliances
Flame-proofing
A standard diesel locomotive presents a very low fire risk but flame-proofing can reduce the risk even further. This involves fitting a water-filled box to the exhaust pipe to quench any red-hot carbon particles that may be emitted. Other precautions may include a fully insulated electrical system (neither side earthed to the frame) and all electric wiring enclosed in conduit.
The flameproof diesel locomotive has replaced the fireless steam locomotive in areas of high fire risk such as oil refineries and ammunition dumps. Preserved examples of flameproof diesel locomotives include:
Francis Baily of Thatcham (ex-RAF Welford) at Southall Railway Centre
Naworth (ex-National Coal Board) at the South Tynedale Railway
The latest development, the "Flameproof Diesel Vehicle Applied New Exhaust Gas Dry Type Treatment System", does not need a water supply.
Lights
The lights fitted to diesel locomotives vary from country to country. North American locomotives are fitted with two headlights (for safety in case one malfunctions) and a pair of ditch lights. The latter are fitted low down at the front and are designed to make the locomotive easily visible as it approaches a grade crossing. Older locomotives may be fitted with a Gyralite or Mars Light instead of the ditch lights.
Environmental impact
Although diesel locomotives generally emit less sulphur dioxide (a major environmental pollutant) and fewer greenhouse gases than steam locomotives, they still emit them in large amounts. Furthermore, like other diesel-powered vehicles, they emit nitrogen oxides and fine particulates, which are a risk to public health. In this last respect diesel locomotives may in fact perform worse than steam locomotives.
For years, it was thought by American government scientists who measure air pollution that diesel locomotive engines were relatively clean and emitted far less health-threatening emissions than those of diesel trucks or other vehicles; however, the scientists discovered that because they used faulty estimates of the amount of fuel consumed by diesel locomotives, they grossly understated the amount of pollution generated annually. After revising their calculations, they concluded that the annual emissions of nitrogen oxide, a major ingredient in smog and acid rain, and soot would be by 2030 nearly twice what they originally assumed. In Europe, where most major railways have been electrified, there is less concern.
This would mean that within a quarter of a century diesel locomotives in the USA would be releasing more than 800,000 tons of nitrogen oxide and 25,000 tons of soot every year, in contrast to the EPA's previous projections of 480,000 tons of nitrogen dioxide and 12,000 tons of soot. Since this was discovered, measures considered practical for reducing the effects of diesel locomotives on people breathing the emissions, and on plants and animals, include installing traps in the diesel engines and other methods of pollution control (e.g., the use of biodiesel).
Diesel locomotive pollution has been of particular concern in the city of Chicago. The Chicago Tribune reported levels of diesel soot inside locomotives leaving Chicago at levels hundreds of times above what is normally found on streets outside. Residents of several neighborhoods are most likely exposed to diesel emissions at levels several times higher than the national average for urban areas.
Mitigation
In 2008, the United States Environmental Protection Agency (EPA) mandated regulations requiring all new or refurbished diesel locomotives to meet Tier II pollution standards that slash the amount of allowable soot by 90% and require an 80% reduction in nitrogen oxide emissions. See List of low emissions locomotives.
Other technologies that are being deployed to reduce diesel locomotive emissions and fuel consumption include "Genset" switching locomotives and hybrid Green Goat designs. Genset locomotives use multiple smaller high-speed diesel engines and generators (generator sets), rather than a single medium-speed diesel engine and a single generator. Because of the cost of developing clean engines, these smaller high-speed engines are based on already developed truck engines. Green Goats are a type of hybrid switching locomotive utilizing a small diesel engine and a large bank of rechargeable batteries. Switching locomotives are of particular concern as they typically operate in a limited area, often in or near urban centers, and spend much of their time idling. Both designs reduce pollution below EPA Tier II standards and cut or eliminate emissions during idle.
Advantages over steam
As diesel locomotives advanced, the cost of manufacturing and operating them dropped, and they became cheaper to own and operate than steam locomotives. In North America, steam locomotives were custom-made for specific railway routes, so economies of scale were difficult to achieve. Though more complex to produce, with exacting manufacturing tolerances ( for diesel, compared with for steam), diesel locomotive parts were easier to mass-produce. Baldwin Locomotive Works offered almost 500 steam models in its heyday, while EMD offered fewer than ten diesel varieties. In the United Kingdom, British Railways built steam locomotives to standard designs from 1951 onwards. These included standard, interchangeable parts, making them cheaper to produce than the diesel locomotives then available. The capital cost per drawbar horsepower was £13 6s (steam), £65 (diesel), £69 7s (turbine) and £17 13s (electric).
Diesel locomotives offer significant operating advantages over steam locomotives. They can safely be operated by one person, making them ideal for switching/shunting duties in yards (although for safety reasons many main-line diesel locomotives continue to have two-person crews: an engineer and a conductor/switchman), and the operating environment is much more attractive, being quieter, fully weatherproof and free of the dirt and heat that are an inevitable part of operating a steam locomotive. Diesel locomotives can be worked in multiple, with a single crew controlling several locomotives in a single train – something not practical with steam locomotives. This brought greater efficiencies to the operator, as individual locomotives could be relatively low-powered for use singly on light duties but marshalled together to provide the power needed for a heavy train. With steam traction, a single very powerful and expensive locomotive was required for the heaviest trains, or the operator resorted to double heading with multiple locomotives and crews, a method which was also expensive and brought its own operating difficulties.
Diesel engines can be started and stopped almost instantly, meaning that a diesel locomotive has the potential to incur no fuel costs when not being used. However, it is still the practice of large North American railroads to use straight water as a coolant in diesel engines instead of coolants that incorporate anti-freezing properties; this results in diesel locomotives being left idling when parked in cold climates instead of being completely shut down. A diesel engine can be left idling unattended for hours or even days, especially since practically every diesel engine used in locomotives has systems that automatically shut the engine down if problems such as a loss of oil pressure or coolant loss occur. Automatic start/stop systems are available which monitor coolant and engine temperatures. When the unit is close to having its coolant freeze, the system restarts the diesel engine to warm the coolant and other systems.
Steam locomotives require intensive maintenance, lubrication, and cleaning before, during, and after use. Preparing and firing a steam locomotive for use from cold can take many hours. They can be kept in readiness between uses with a low fire, but this requires regular stoking and frequent attention to maintain the level of water in the boiler. This may be necessary to prevent the water in the boiler freezing in cold climates, so long as the water supply is not frozen. After use a steam locomotive requires a lengthy disposal operation to perform cleaning, inspection, maintenance and refilling with water and fuel before it is ready for its next duty. By contrast, as early as 1939 EMD was promoting its FT Series locomotive as needing no maintenance between 30-day inspections beyond refuelling and basic fluid level and safety checks which could be performed with the prime mover still running. Railways converting from steam to diesel operation in the 1940s and 1950s found that for a given period diesel locomotives were available for, on average, three or four times more revenue-earning hours than equivalent steam locomotives, allowing locomotive fleets to be cut drastically in size while maintaining operational capacity.
The maintenance and operational costs of steam locomotives were much higher than those of diesels. Annual maintenance costs for steam locomotives accounted for 25% of the initial purchase price. Spare parts were cast from wooden masters for specific locomotives. The sheer number of unique steam locomotives meant that there was no feasible way for spare-part inventories to be maintained. With diesel locomotives, spare parts could be mass-produced and held in stock ready for use, and many parts and sub-assemblies could be standardized across an operator's fleet using different models of locomotive from the same builder. Modern diesel locomotive engines are designed to allow the power assemblies (systems of working parts and their block interfaces) to be replaced while keeping the main block in the locomotive, which greatly reduces the time that a locomotive is out of revenue-generating service when it requires maintenance.
Steam engines required large quantities of coal and water, which were expensive variable operating costs. Further, the thermal efficiency of steam was considerably less than that of diesel engines. Diesel's theoretical studies demonstrated potential thermal efficiencies for a compression ignition engine of 36% (compared with 6–10% for steam), and an 1897 one-cylinder prototype operated at a remarkable 26% efficiency.
However, one study published in 1959 suggested that many of the comparisons between diesel and steam locomotives were made unfairly, mostly because diesels were a newer technology. After painstaking analysis of financial records and technological progress, the author found that if research had continued on steam technology instead of diesel, there would be negligible financial benefit in converting to diesel locomotion.
By the mid-1960s, diesel locomotives had effectively replaced steam locomotives where electric traction was not in use. Attempts to develop advanced steam technology continue in the 21st century but have not had a significant effect.
| Technology | Rail and cable transport | null |
390066 | https://en.wikipedia.org/wiki/Tokyo%20Metro | Tokyo Metro | The Tokyo Metro (Japanese: , ) is a major rapid transit system in Tokyo, Japan, operated by the Tokyo Metro Co. With an average daily ridership of 6.52 million passengers (as of 2023), the Tokyo Metro is the larger of the two subway operators in the city, the other being the Toei Subway, with 2.85 million average daily rides.
Organization
Tokyo Metro is operated by , a joint-stock company jointly owned by the Government of Japan and the Tokyo Metropolitan Government.
The company, founded as a part of then-Prime Minister Junichiro Koizumi's policy of converting statutory corporations into joint-stock companies, replaced the , commonly known as Eidan or TRTA, on April 1, 2004. TRTA was administered by the Ministry of Land, Infrastructure and Transport, and jointly funded by the national and metropolitan governments. It was formed in 1941 as a part-nationalization of the Tokyo Underground Railway and the Tokyo Rapid Railway (which together now form the Tokyo Metro Ginza Line), although its oldest line dates back to the opening of the Tokyo Underground Railway in 1927. Upon its establishment, the TRTA's legal form was a , a form of entity established by the government of the wartime cabinet of the Empire of Japan with both public and private sector investments. Private sector investments in the TRTA were prohibited in 1951, when it was converted into an ordinary statutory corporation. In 2024, the company made its initial public offering, raising $2.3 billion in what became Japan's biggest IPO since 2018.
The other major subway operator is the Tokyo Metropolitan Bureau of Transportation (Toei Subway), which is owned solely by the government of Tokyo. Tokyo Metro and Toei trains form completely separate networks, although the Tokyo Metro Namboku Line and the Toei Mita Line share the same track between Meguro Station and Shirokane-takanawa Station. Users of prepaid rail passes and Suica/Pasmo smart cards can freely interchange between the two networks (as well as other rail companies in the area), but fares are assessed separately for legs on each of these systems, and regular ticket holders must purchase a second ticket, or a special transfer ticket, to change from a Toei line to a Tokyo Metro line and vice versa. However, most Tokyo Metro (and Toei) lines offer through service to lines outside central Tokyo run by other carriers, which can complicate ticketing somewhat.
Much effort has been made to make the system accessible to non-Japanese speaking users:
Many train stops are announced in both English and Japanese. Announcements also provide connecting line information.
Ticket machines can switch between English and Japanese user interfaces.
Train stations are signposted in English and Japanese (in kanji and hiragana). There are also numerous signs in Chinese (in simplified characters) and Korean.
Train stations are now also consecutively numbered on each color-coded line, allowing even non-English speakers to be able to commute without necessarily knowing the name of the station. For example, Shinjuku Station on the Marunouchi Line is also signposted as M-08 with a red colored circle surrounding it; even if a commuter could not read the English or Japanese station names on signs or maps, they could simply look for the red line and then find the appropriately numbered station on said line. In addition, some trains have interior LCD displays which display station names in Japanese, English, Chinese, and Korean.
Many stations are also designed to help blind people as railings often have Braille at their base, and raised yellow rubber guide strips are used on flooring throughout the network.
Tokyo Metro stations began accepting contactless (RFID) Pasmo stored value cards in March 2007 to pay fares, and the JR East Suica system is also universally accepted. Both of these passes can also be used on surrounding rail systems throughout the area and on many rail lines in other areas of Japan. Due to the complexity of the fare systems in Japan, most riders converted to these cards very quickly even though there is an additional charge to issue them.
The Tokyo Metro is extremely punctual and has regular trains arriving 3 to 6 minutes apart most of the day and night. However, it does not run 24 hours a day. While through service with other companies complicates this somewhat, the last train generally starts at midnight and completes its service by 00:45 to 01:00, and the first train generally starts at 05:00.
Tokyo Metro also owns a number of commercial developments which mostly consist of shopping developments at major stations. It also owns the Subway Museum near Kasai Station on the Tokyo Metro Tōzai Line which opened on July 12, 1986, and features a few retired trains which once operated on the Ginza and Marunouchi Lines as well as a maintenance vehicle and some train simulators.
In 2024, Tokyo Metro was listed on the Tokyo Stock Exchange, debuting as the exchange's largest IPO in six years and with a market capitalization of roughly 1 trillion yen. The Government of Japan and the Tokyo Metropolitan Government each sold half of their shares, with the former using the proceeds to repay bonds funding reconstruction after the 2011 Tōhoku earthquake and tsunami.
Overseas affiliates
In 2017, Tokyo Metro opened its affiliate in Hanoi, Vietnam, as part of preparations to be the service operator of Hanoi Metro. The Hanoi Metro opened in 2021.
In November 2024, GTS Rail Operations (a consortium comprising Go-Ahead Group, Tokyo Metro and Sumitomo Corporation) was chosen from four bidders to operate the Elizabeth line in London, England, for seven years from May 2025, with an option to extend for two years.
Future expansion
Tokyo Metro indicated in its public share offering that it would cease line construction once the Fukutoshin Line was completed. That line was completed in March 2013 with the opening of the connection with the Tōkyū Tōyoko Line at Shibuya Station, allowing through service as far as Motomachi-Chūkagai Station in Yokohama. Several lines, such as the Hanzōmon Line, still have extensions in their official plans, and in the past such plans have tended to be realized eventually, though often over several decades.
In March 2022, Tokyo Metro received permission to add two new extensions to the network. Under these plans, the Yūrakuchō Line would receive a new branch from Toyosu Station to Sumiyoshi Station with three new stops (including one at Toyocho Station on the Tōzai Line) to better serve the Toyosu urban development zone, and the Namboku Line would receive an extension from Shirokane-Takanawa Station to Shinagawa Station, where it would connect with the Tokaido Shinkansen and the under construction Chūō Shinkansen in addition to serving the surrounding business district. Both extensions are expected to open in the 2030s.
Fares
Pasmo and Suica are accepted on the Tokyo Metro, as well as on railway stations operated by other companies. Transfers between Tokyo Metro subway lines and Toei Subway lines are usually not free, but a discount is given when using the Pasmo or Suica cards to transfer between lines.
Traffic
According to the company, an average of 6.33 million people used the company's nine subway routes each day in 2009. The company made a profit of ¥63.5 billion in 2009.
Lines
Altogether, the Tokyo Metro is made up of nine lines operating on of route.
List of Tokyo Metro lines
Note: Line numbers are for internal usage only and not listed on subway maps.
Note: Excluding the stretch between Wakoshi and Kotake-mukaihara shared with Yurakucho Line.
Through services to other lines
All lines except the Ginza and Marunouchi lines have trains that run through line termini onto tracks owned by other companies.
The Namboku Line shares the 2.3 km section of track between Meguro and Shirokane-takanawa with the Toei Mita Line.
Some of the Tōkyū Tōyoko Line express trains, instead of continuing towards Yokohama/Motomachi-Chūkagai, change course at Hiyoshi onto the Tōkyū Shin-Yokohama Line and share all of the through services beyond that point, just as Tōkyū Meguro Line trains do.
Stations
There are a total of 180 unique stations (i.e., counting stations served by multiple lines only once) on the Tokyo Metro network. Most stations are located within the 23 special wards and fall inside the Yamanote Line rail loop. Some wards, such as Setagaya and Ōta, have no stations (or only a limited number of stations), as rail service in these areas has historically been provided by the Toei Subway or any of the various private railways.
Major interchange stations, connecting three or more Tokyo Metro lines, include the following:
Other major stations provide additional connections to other railway operators such as the Toei Subway, JR East, and the various private railways, including (but not limited to) the following:
Depots
Rolling stock
Tokyo Metro operates a fleet of 2,728 electric multiple unit (EMU) vehicles, the largest fleet of any private railway operator in Japan.
600 V third rail / 1,435 mm gauge lines
1000 series – Ginza Line
2000 series – Marunouchi Line
1,500 V overhead / 1,067 mm gauge lines
05 series – Tōzai Line
07 series – Tōzai Line
08 series – Hanzōmon Line
8000 series – Hanzōmon Line
9000 series – Namboku Line
10000 series – Yūrakuchō Line, Fukutoshin Line
13000 series – Hibiya Line
15000 series – Tōzai Line
16000 series – Chiyoda Line
17000 series – Yūrakuchō Line, Fukutoshin Line
18000 series – Hanzōmon Line
Trains from other operators are also used on Tokyo Metro lines as a consequence of inter-running services.
Overcrowding
As is common with rail transport in Tokyo, Tokyo Metro trains are severely crowded during peak periods. During the morning peak period, platform attendants (oshiya) are sometimes needed to push riders and their belongings into train cars so that the doors can close. On some Tokyo Metro lines, the first or last car of a train is reserved for women during peak hours.
Network map
| Technology | Japan | null |
5822369 | https://en.wikipedia.org/wiki/Strobilus | Strobilus | A strobilus (plural: strobili) is a structure present on many land plant species consisting of sporangia-bearing structures densely aggregated along a stem. Strobili are often called cones, but some botanists restrict the use of the term cone to the woody seed strobili of conifers. Strobili are characterized by a central axis (anatomically a stem) surrounded by spirally arranged or decussate structures that may be modified leaves or modified stems.
Leaves that bear sporangia are called sporophylls, while sporangia-bearing stems are called sporangiophores.
Lycophytes
Some members of both of the two modern classes of Lycopodiophyta (Lycopodiopsida and Isoetopsida) produce strobili. In all cases, the lateral organs of the strobilus are microphylls, bearing sporangia. In other lycophytes, ordinary foliage leaves can act as sporophylls, and there are no organized strobili.
Sphenophytes
The single extant genus of Equisetophyta, Equisetum, produces strobili in which the lateral organs are sporangiophores. Developmental evidence and comparison with fossil members of the group show that the sporangiophores are reduced stems, rather than leaves. Sporangia are terminal.
Seed plants
With the exception of flowering plants, seed plants produce ovules and pollen in different structures. Strobili bearing microsporangia are called microsporangiate strobili or pollen cones, and those bearing ovules are megasporangiate strobili or seed cones (or ovulate cones).
Cycads
Cycadophyta are typically dioecious (seed strobili and pollen strobili are produced on separate plants). The lateral organs of seed strobili are megasporophylls (modified leaves) that bear two to several marginal ovules. Pollen strobili consist of microsporophylls, each of which may have dozens or hundreds of abaxial microsporangia.
Ginkgos
The single living member of the Ginkgophyta, Ginkgo biloba produces pollen strobili, but the ovules are typically borne in pairs at the end of a stem, not in a strobilus. When there are more than a pair of ovules in G. biloba, however, or when fossil taxa bearing large numbers of ovules are examined, it is clear that the paired ovules in the extant species are a highly reduced strobilus.
Conifers
Pollen strobili of Pinophyta are similar to those of cycads (although much smaller) and Ginkgoes in that they are composed of microsporophylls with microsporangia on the abaxial surface. Seed cones of many conifers are compound strobili. The central stem produces bracts and in the axil of each bract is a cone scale. Morphologically the cone scale is a reduced stem. Ovules are produced on the adaxial surface of the cone scales.
Gnetophytes
Gnetophyta consists of three genera, Ephedra, Gnetum and Welwitschia. All three are typically dioecious, although some Ephedra species exhibit monoecy. In contrast to the conifers, which have simple pollen strobili and compound seed strobili, gnetophytes have both compound pollen and seed strobili. The seed strobili of Ephedra and Gnetum are reduced, with Ephedra producing only two ovules per strobilus and Gnetum a single ovule.
Flowering plants
The flower of flowering plants is sometimes referred to as a bisexual strobilus. Stamens include microsporangia within the anther, and ovules (contained in carpels) contain megasporangia. Magnolia has a particularly strobiloid flower with all parts arranged in a spiral, rather than as clear whorls.
A number of flowering plants have inflorescences that resemble strobili, such as catkins, but are actually more complex in structure than strobili.
Evolution
It is likely that strobili evolved independently in most if not all these groups. This evolutionary convergence is not unusual, since the form of a strobilus is one of the most compact that can be achieved in arranging lateral organs around a cylindric axis, and the consolidation of reproductive parts in a strobilus may optimize spore dispersal and nutrient partitioning.
Etymology
The word strobilus is related to the ancient Greek strobilos = whirlwind. The Hebrew word for conifer cone, itstrubal, is an ancient borrowing from the Greek.
According to Liddell & Scott, the Greek strobilos (στρόβιλος) had many meanings, generally of anything twisted up... hence of the hedgehog,... of an egg-shell,... as a name of various twisted or spinning objects. For example:
1. a kind of seasnail...
2. a top...
3. a whirlpool, a whirlwind which spins upwards...
6. the cone of the fir or pine, fir-apple, pine-cone,… also of the tree itself.
| Biology and health sciences | Plant anatomy and morphology: General | Biology |
7604906 | https://en.wikipedia.org/wiki/Mantle%20convection | Mantle convection | Mantle convection is the very slow creep of Earth's solid silicate mantle as convection currents carry heat from the interior to the planet's surface. Mantle convection causes tectonic plates to move around the Earth's surface.
The Earth's lithosphere rides atop the asthenosphere, and the two form the components of the upper mantle. The lithosphere is divided into tectonic plates that are continuously being created or consumed at plate boundaries. Accretion occurs as mantle is added to the growing edges of a plate, associated with seafloor spreading. Upwelling beneath the spreading centers is a shallow, rising component of mantle convection and in most cases not directly linked to the global mantle upwelling. The hot material added at spreading centers cools down by conduction and convection of heat as it moves away from the spreading centers. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction usually at an oceanic trench. Subduction is the descending component of mantle convection.
This subducted material sinks through the Earth's interior. Some subducted material appears to reach the lower mantle, while in other regions this material is impeded from sinking further, possibly due to a phase transition from spinel to silicate perovskite and magnesiowustite, an endothermic reaction.
The subducted oceanic crust triggers volcanism, although the basic mechanisms are varied. Volcanism may occur due to processes that add buoyancy to partially melted mantle, which would cause upward flow of the partial melt as it decreases in density. Secondary convection may cause surface volcanism as a consequence of intraplate extension and mantle plumes. In 1993 it was suggested that inhomogeneities in D" layer have some impact on mantle convection.
Types of convection
During the late 20th century, there was significant debate within the geophysics community as to whether convection is likely to be "layered" or "whole". Although elements of this debate still continue, results from seismic tomography, numerical simulations of mantle convection and examination of Earth's gravitational field are all beginning to suggest the existence of whole mantle convection, at least at the present time. In this model, cold subducting oceanic lithosphere descends all the way from the surface to the core–mantle boundary (CMB), and hot plumes rise from the CMB all the way to the surface. This model is strongly based on the results of global seismic tomography models, which typically show slab and plume-like anomalies crossing the mantle transition zone.
Although it is accepted that subducting slabs cross the mantle transition zone and descend into the lower mantle, debate about the existence and continuity of plumes persists, with important implications for the style of mantle convection. This debate is linked to the controversy regarding whether intraplate volcanism is caused by shallow, upper mantle processes or by plumes from the lower mantle.
Many geochemistry studies have argued that the lavas erupted in intraplate areas are different in composition from shallow-derived mid-ocean ridge basalts. Specifically, they typically have elevated helium-3 : helium-4 ratios. Being a primordial nuclide, helium-3 is not naturally produced on Earth. It also quickly escapes from Earth's atmosphere when erupted. The elevated He-3:He-4 ratio of ocean island basalts suggest that they must be sourced from a part of the Earth that has not previously been melted and reprocessed in the same way as mid-ocean ridge basalts have been. This has been interpreted as their originating from a different less well-mixed region, suggested to be the lower mantle. Others, however, have pointed out that geochemical differences could indicate the inclusion of a small component of near-surface material from the lithosphere.
Planform and vigour of convection
On Earth, the Rayleigh number for convection within Earth's mantle is estimated to be of order 10⁷, which indicates vigorous convection. This value corresponds to whole mantle convection (i.e. convection extending from the Earth's surface to the border with the core). On a global scale, the surface expression of this convection is the tectonic plate motions, which therefore have speeds of a few cm per year. Speeds can be faster for small-scale convection occurring in low-viscosity regions beneath the lithosphere, and slower in the lowermost mantle, where viscosities are larger. A single shallow convection cycle takes on the order of 50 million years, though deeper convection can be closer to 200 million years.
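As a rough illustration of how such an estimate is obtained, the Rayleigh number for a fluid layer heated from below can be evaluated with representative mantle values. The specific numbers used below are illustrative assumptions chosen for the sketch, not figures quoted in this article.

```latex
% Order-of-magnitude sketch of the mantle Rayleigh number.
% All numerical values are illustrative assumptions, not data from this article.
\[
  \mathrm{Ra} \;=\; \frac{\rho\, g\, \alpha\, \Delta T\, d^{3}}{\kappa\, \eta}
\]
% Taking, say, rho ~ 4000 kg/m^3, g ~ 10 m/s^2, alpha ~ 2e-5 /K, Delta T ~ 2000 K,
% d ~ 2.9e6 m (whole-mantle depth), kappa ~ 1e-6 m^2/s and eta ~ 1e22 Pa s:
\[
  \mathrm{Ra} \;\approx\;
  \frac{(4000)(10)(2\times 10^{-5})(2000)\,(2.9\times 10^{6})^{3}}
       {(10^{-6})(10^{22})}
  \;\approx\; 4\times 10^{6},
\]
% i.e. of order 10^6 to 10^7, consistent with vigorous whole-mantle convection.
```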
Currently, whole mantle convection is thought to include broad-scale downwelling beneath the Americas and the western Pacific, both regions with a long history of subduction, and upwelling flow beneath the central Pacific and Africa, both of which exhibit dynamic topography consistent with upwelling. This broad-scale pattern of flow is also consistent with the tectonic plate motions, which are the surface expression of convection in the Earth's mantle and currently indicate convergence toward the western Pacific and the Americas, and divergence away from the central Pacific and Africa. The persistence of net tectonic divergence away from Africa and the Pacific for the past 250 myr indicates the long-term stability of this general mantle flow pattern and is consistent with other studies that suggest long-term stability of the large low-shear-velocity provinces of the lowermost mantle that form the base of these upwellings.
Creep in the mantle
Due to the varying temperatures and pressures between the lower and upper mantle, a variety of creep processes can occur, with dislocation creep dominating in the lower mantle and diffusional creep occasionally dominating in the upper mantle. However, there is a large transition region in creep processes between the upper and lower mantle, and even within each section creep properties can change strongly with location and thus temperature and pressure.
Since the upper mantle is primarily composed of olivine ((Mg,Fe)2SiO4), the rheological characteristics of the upper mantle are largely those of olivine. The strength of olivine is proportional to its melting temperature, and is also very sensitive to water and silica content. The solidus depression by impurities, primarily Ca, Al, and Na, and pressure affects creep behavior and thus contributes to the change in creep mechanisms with location. While creep behavior is generally plotted as homologous temperature versus stress, in the case of the mantle it is often more useful to look at the pressure dependence of stress. Though stress is simply force over area, defining the area is difficult in geology. Equation 1 demonstrates the pressure dependence of stress. Since it is very difficult to simulate the high pressures in the mantle (1MPa at 300–400 km), the low pressure laboratory data is usually extrapolated to high pressures by applying creep concepts from metallurgy.
Most of the mantle has homologous temperatures of 0.65–0.75 and experiences strain rates of per second. Stresses in the mantle are dependent on density, gravity, thermal expansion coefficients, temperature differences driving convection, and the distance over which convection occurs—all of which give stresses around a fraction of 3-30MPa.
Due to the large grain sizes (at low stresses as high as several mm), it is unlikely that Nabarro-Herring (NH) creep dominates; dislocation creep tends to dominate instead. 14 MPa is the stress below which diffusional creep dominates and above which power law creep dominates at 0.5Tm of olivine. Thus, even for relatively low temperatures, the stress diffusional creep would operate at is too low for realistic conditions. Though the power law creep rate increases with increasing water content due to weakening (reducing activation energy of diffusion and thus increasing the NH creep rate) NH is generally still not large enough to dominate. Nevertheless, diffusional creep can dominate in very cold or deep parts of the upper mantle.
Additional deformation in the mantle can be attributed to transformation enhanced ductility. Below 400 km, the olivine undergoes a pressure-induced phase transformation, which can cause more deformation due to the increased ductility. Further evidence for the dominance of power law creep comes from preferred lattice orientations as a result of deformation. Under dislocation creep, crystal structures reorient into lower stress orientations. This does not happen under diffusional creep, thus observation of preferred orientations in samples lends credence to the dominance of dislocation creep.
Mantle convection in other celestial bodies
A similar process of slow convection probably occurs (or occurred) in the interiors of other planets (e.g., Venus, Mars) and some satellites (e.g., Io, Europa, Enceladus).
| Physical sciences | Geophysics | Earth science |
195891 | https://en.wikipedia.org/wiki/Pneumatics | Pneumatics | Pneumatics (from Greek 'wind, breath') is the use of gas or pressurized air in mechanical systems.
Pneumatic systems used in industry are commonly powered by compressed air or compressed inert gases. A centrally located and electrically-powered compressor powers cylinders, air motors, pneumatic actuators, and other pneumatic devices. A pneumatic system controlled through manual or automatic solenoid valves is selected when it provides a lower cost, more flexible, or safer alternative to electric motors, and hydraulic actuators.
Pneumatics also has applications in dentistry, construction, mining, and other areas.
Gases used in pneumatic systems
Pneumatic systems in fixed installations, such as factories, use compressed air because a sustainable supply can be made by compressing atmospheric air. The air usually has moisture removed, and a small quantity of oil is added at the compressor to prevent corrosion and lubricate mechanical components.
Factory-plumbed pneumatic-power users need not worry about poisonous leakage, as the gas is usually just air. Any compressed gas other than air is an asphyxiation hazard—including nitrogen, which makes up 78% of air. Compressed oxygen (approx. 21% of air) would not asphyxiate, but is not used in pneumatically-powered devices because it is a fire hazard, more expensive, and offers no performance advantage over air. Smaller or stand-alone systems can use other compressed gases that present an asphyxiation hazard, such as nitrogen—often referred to as OFN (oxygen-free nitrogen) when supplied in cylinders.
Portable pneumatic tools and small vehicles, such as Robot Wars machines and other hobbyist applications are often powered by compressed carbon dioxide, because containers designed to hold it such as SodaStream canisters and fire extinguishers are readily available, and the phase change between liquid and gas makes it possible to obtain a larger volume of compressed gas from a lighter container than compressed air requires. Carbon dioxide is an asphyxiant and can be a freezing hazard if vented improperly.
History
Although the early history of pneumatics is murky, the field's founder is traditionally traced back to Ctesibius of Alexandria "who worked in the early 3rd century BCE and invented a number of mechanical toys operated by air, water, and steam under pressure." Though no documents written by Ctesibius survive, he is thought to have heavily influenced Philo of Byzantium while writing his work, Mechanical Syntaxis, as well as Vitruvius in . In the first century BC, the ancient Greek mathematician Hero of Alexandria compiled recipes for dozens of contraptions in his work, Pneumatics. It has been speculated that much of this work can be attributed to Ctesibius. The pneumatic experiments described in these ancient documents later inspired the Renaissance inventors of the thermoscope and the air thermometer, devices which relied upon the heating and cooling of air to move a column of water up and down a tube.
German physicist Otto von Guericke (1602–1686) invented the vacuum pump, a device that can draw out air or gas from an attached vessel. He demonstrated the vacuum pump by evacuating a pair of joined copper hemispheres, which the pressure of the surrounding air then held firmly together. The field of pneumatics has changed considerably over the years, moving from small handheld devices to large machines with multiple parts that serve different functions.
Comparison to hydraulics
Both pneumatics and hydraulics are applications of fluid power. Pneumatics uses an easily compressible gas such as air or a suitable pure gas—while hydraulics uses relatively incompressible liquid media such as oil. Most industrial pneumatic applications use pressures of about . Hydraulics applications commonly use from , but specialized applications may exceed .
Advantages of pneumatics
Simplicity of design and control—Machines are easily designed using standard cylinders and other components, and operate via simple on-off control.
Reliability—Pneumatic systems generally have long operating lives and require little maintenance. Because gas is compressible, equipment is less subject to shock damage. Gas absorbs excessive force, whereas fluid in hydraulics directly transfers force. Compressed gas can be stored, so machines still run for a while if electrical power is lost.
Safety—There is a very low chance of fire compared to hydraulic oil. New machines are usually overload safe to a certain limit.
Advantages of hydraulics
Fluid does not absorb any of the supplied energy.
Capable of moving much higher loads and providing much higher forces, due to the incompressibility of the working fluid.
The hydraulic working fluid is practically incompressible, leading to a minimum of spring action. When hydraulic fluid flow is stopped, the slightest motion of the load releases the pressure on the load; there is no need to "bleed off" pressurized air to release the pressure on the load.
Highly responsive compared to pneumatics.
Supply more power than pneumatics.
Can also serve several purposes at one time: lubrication, cooling and power transmission.
Pneumatic logic
Pneumatic logic systems (sometimes called air logic control) are sometimes used for controlling industrial processes, consisting of primary logic units like:
And units
Or units
Relay or booster units
Latching units
Timer units
Fluidics amplifiers with no moving parts other than the air itself
Pneumatic logic is a reliable and functional control method for industrial processes. In recent years, these systems have largely been replaced by electronic control systems in new installations because of the smaller size, lower cost, greater precision, and more powerful features of digital controls. Pneumatic devices are still used where upgrade cost, or safety factors dominate.
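To make the behaviour of these units concrete, the sketch below models a few of them as boolean functions of pressure signals (a line is True when pressurised, False when vented). It is a simplified illustrative model with invented names, not a description of any particular manufacturer's hardware.

```python
# Minimal boolean model of pneumatic logic units.
# A signal is True when pilot air pressure is present, False when the line is vented.

def and_unit(a: bool, b: bool) -> bool:
    """Output pressurised only when both input lines are pressurised."""
    return a and b

def or_unit(a: bool, b: bool) -> bool:
    """Output pressurised when either input line is pressurised (shuttle-valve behaviour)."""
    return a or b

class LatchUnit:
    """Set/reset latch: the output stays pressurised after a 'set' pulse
    until a 'reset' pulse vents it."""

    def __init__(self) -> None:
        self.output = False

    def update(self, set_line: bool, reset_line: bool) -> bool:
        if reset_line:
            self.output = False
        elif set_line:
            self.output = True
        return self.output

# Example: a press may only cycle when both palm buttons are pressed
# (two-hand control) and the guard switch is closed.
latch = LatchUnit()
both_buttons = and_unit(True, True)          # both palm buttons pressed
permit = and_unit(both_buttons, True)        # guard closed
print(latch.update(set_line=permit, reset_line=False))  # True: the press cycles
print(latch.update(set_line=False, reset_line=True))    # False: reset vents the output
```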
Examples of pneumatic systems and components
Air brakes on buses and trucks
Air brakes on trains
Air compressors
Air engines for pneumatically powered vehicles
Barostat systems used in neurogastroenterology and for researching electricity
Cable jetting, a way to install cables in ducts
Dental drill
Compressed-air engine and compressed-air vehicles
Gas Chromatography
Gas-operated reloading
Holman Projector, a pneumatic anti-aircraft weapon
HVAC control systems
Inflatable structures
Lego pneumatics can be used to build pneumatic models
Pipe organ
Electro-pneumatic action
Tubular-pneumatic action
Player piano
Pneumatic actuator
Pneumatic air guns
Pneumatic bladder
Pneumatic cylinder
Pneumatic launchers, a type of spud gun
Pneumatic mail systems
Pneumatic motor
Pneumatic tire
Pneumatic tools:
Jackhammer used by road workers
Pneumatic nailgun
Pressure regulator
Pressure sensor
Pressure switch
Launched roller coaster
Vacuum pump
Vacuum sewer
| Technology | Hydraulics and pneumatics | null |
195947 | https://en.wikipedia.org/wiki/Function%20composition | Function composition | In mathematics, the composition operator takes two functions, f and g, and returns a new function h = g ∘ f defined by h(x) = g(f(x)). Thus, the function g is applied after applying f to x.
Reverse composition, sometimes denoted f ; g, applies the operation in the opposite order, applying f first and g second. Intuitively, reverse composition is a chaining process in which the output of function f feeds the input of function g.
The composition of functions is a special case of the composition of relations, sometimes also denoted by . As a result, all properties of composition of relations are true of composition of functions, such as associativity.
Examples
Composition of functions on a finite set: If , and , then , as shown in the figure.
Composition of functions on an infinite set: If (where is the set of all real numbers) is given by and is given by , then:
If an airplane's altitude at time is , and the air pressure at altitude is , then is the pressure around the plane at time .
Functions defined on finite sets which change the order of their elements, such as permutations, can be composed on the same set, this being composition of permutations.
Properties
The composition of functions is always associative—a property inherited from the composition of relations. That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h. Since the parentheses do not change the result, they are generally omitted.
In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter.
Moreover, it is often convenient to tacitly restrict the domain of , such that produces only values in the domain of . For example, the composition of the functions defined by and defined by can be defined on the interval .
The functions f and g are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, only when . The picture shows another example.
The composition of one-to-one (injective) functions is always one-to-one. Similarly, the composition of onto (surjective) functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition (assumed invertible) has the property that (f ∘ g)⁻¹ = g⁻¹ ∘ f⁻¹.
Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula.
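For instance, a short worked instance of the chain rule; the particular functions here are chosen only for illustration.

```latex
% Chain rule for a composition h = g \circ f:
\[
  h'(x) \;=\; (g \circ f)'(x) \;=\; g'\!\bigl(f(x)\bigr)\, f'(x).
\]
% Worked example with f(x) = x^2 and g(y) = \sin y:
\[
  \frac{d}{dx}\,\sin\!\bigl(x^{2}\bigr) \;=\; \cos\!\bigl(x^{2}\bigr)\cdot 2x.
\]
```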
Composition of functions is sometimes described as a kind of multiplication on a function space, but has very different properties from pointwise multiplication of functions (e.g. composition is not commutative).
Composition monoids
Suppose one has two (or more) functions having the same domain and codomain; these are often called transformations. Then one can form chains of transformations composed together, such as . Such chains have the algebraic structure of a monoid, called a transformation monoid or (much more seldom) a composition monoid. In general, transformation monoids can have remarkably complicated structure. One particular notable example is the de Rham curve. The set of all functions is called the full transformation semigroup or symmetric semigroup on . (One can actually define two semigroups depending how one defines the semigroup operation as the left or right composition of functions.)
If the transformations are bijective (and thus invertible), then the set of all possible combinations of these functions forms a transformation group; and one says that the group is generated by these functions. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a permutation group (up to isomorphism).
The set of all bijective functions (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group.
In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup.
Functional powers
If the codomain of f is contained in its domain, then f may compose with itself; this is sometimes denoted as f². That is:
(f ∘ f)(x) = f(f(x)) = f²(x).
More generally, for any natural number n ≥ 2, the nth functional power can be defined inductively by f^n = f ∘ f^(n−1) = f^(n−1) ∘ f, a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel. Repeated composition of such a function with itself is called function iteration.
By convention, f⁰ is defined as the identity map on f's domain.
If f admits an inverse function f⁻¹, negative functional powers f^(−n) are defined for n > 0 as the negated power of the inverse function: f^(−n) = (f⁻¹)^n.
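As a small illustration in code, repeated composition of a function with itself can be written as a loop; the helper below is a sketch with invented names, not a standard library facility, with the zeroth power returning the identity map and negative powers delegated to a supplied inverse.

```python
from typing import Callable, Optional, TypeVar

X = TypeVar("X")

def iterate(f: Callable[[X], X], n: int,
            f_inverse: Optional[Callable[[X], X]] = None) -> Callable[[X], X]:
    """Return the n-th functional power of f: iterate(f, n)(x) == f(f(...f(x)...)).

    n == 0 gives the identity map; negative n uses the supplied inverse function."""
    if n < 0:
        if f_inverse is None:
            raise ValueError("negative powers require an inverse function")
        f, n = f_inverse, -n

    def f_n(x: X) -> X:
        for _ in range(n):
            x = f(x)
        return x

    return f_n

double = lambda x: 2 * x
halve = lambda x: x / 2

print(iterate(double, 3)(5))                    # 40, i.e. double(double(double(5)))
print(iterate(double, 0)(5))                    # 5, the identity map
print(iterate(double, -2, f_inverse=halve)(8))  # 2.0, two applications of the inverse
```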
Note: If f takes its values in a ring (in particular for real- or complex-valued f), there is a risk of confusion, as f^n could also stand for the n-fold product of f, e.g. f²(x) = f(x) · f(x). For trigonometric functions, usually the latter is meant, at least for positive exponents. For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions:
sin²(x) = sin(x) · sin(x).
However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan⁻¹ = arctan, not 1/tan.
In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f^(1/2).
More generally, when g^n = f has a unique solution g for some natural number n > 0, then f^(m/n) can be defined as g^m.
Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems.
To avoid ambiguity, some mathematicians choose to use ∘ to denote the compositional meaning, writing f^(∘n)(x) for the n-th iterate of the function f(x), as in, for example, f^(∘3)(x) meaning f(f(f(x))). For the same purpose, was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested instead.
Alternative notations
Many mathematicians, particularly in group theory, omit the composition symbol, writing gf for g ∘ f.
During the mid-20th century, some mathematicians adopted postfix notation, writing xf for f(x) and (xf)g for g(f(x)). This can be more natural than prefix notation in many cases, such as in linear algebra when x is a row vector and f and g denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence.
Mathematicians who use postfix notation may write "fg", meaning first apply f and then apply g, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f ; g" for this, thereby disambiguating the order of composition. To distinguish the left composition operator from a text semicolon, in the Z notation the ⨾ character is used for left relation composition. Since all functions are binary relations, it is correct to use the [fat] semicolon for function composition as well (see the article on composition of relations for further details on this notation).
Composition operator
Given a function g, the composition operator C_g is defined as that operator which maps functions to functions as C_g(f) = f ∘ g. Composition operators are studied in the field of operator theory.
In programming languages
Function composition appears in one form or another in numerous programming languages.
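As one concrete illustration, a composition combinator can be written in a few lines of Python. The compose helper and the altitude/pressure formulas below are invented for this example (they echo the airplane example earlier in this article); they are not a standard library feature.

```python
import functools
import math

def compose(*funcs):
    """compose(f, g, h)(x) == f(g(h(x))) -- the rightmost function is applied first."""
    return functools.reduce(lambda f, g: lambda x: f(g(x)), funcs, lambda x: x)

# Pressure around the plane at time t, i.e. the composition p ∘ a from the earlier example.
altitude = lambda t: 300.0 * t                    # illustrative altitude profile (metres)
pressure = lambda h: 101.3 * math.exp(-h / 8000)  # illustrative pressure model (kPa)

pressure_at_time = compose(pressure, altitude)
print(round(pressure_at_time(10.0), 1))  # pressure roughly 10 time units after take-off
```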
Multivariate functions
Partial composition is possible for multivariate functions. The function resulting when some argument xᵢ of the function f is replaced by the function g is called a composition of f and g in some computer engineering contexts, and is denoted f |_{xᵢ = g}.
When g is a simple constant b, composition degenerates into a (partial) valuation, whose result is also known as restriction or co-factor.
In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. Given f, an n-ary function, and n m-ary functions g₁, ..., gₙ, the composition of f with g₁, ..., gₙ is the m-ary function
h(x₁, ..., xₘ) = f(g₁(x₁, ..., xₘ), ..., gₙ(x₁, ..., xₘ)).
This is sometimes called the generalized composite or superposition of f with g₁, ..., gₙ. The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here g₁, ..., gₙ can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.
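A sketch of this generalized composition (superposition) in code, with invented names; each gᵢ takes the same argument list and its results feed f.

```python
def generalized_compose(f, *gs):
    """Return h with h(x1, ..., xm) = f(g1(x1, ..., xm), ..., gn(x1, ..., xm))."""
    def h(*args):
        return f(*(g(*args) for g in gs))
    return h

# Example: h(x, y) = (x + y) * (x - y)
add = lambda x, y: x + y
sub = lambda x, y: x - y
mul = lambda a, b: a * b

h = generalized_compose(mul, add, sub)
print(h(5, 3))  # 16, i.e. (5 + 3) * (5 - 3)
```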
A set of finitary operations on some base set X is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities. The notion of commutation also finds an interesting generalization in the multivariate case; a function f of arity n is said to commute with a function g of arity m if f is a homomorphism preserving g, and vice versa, that is:
f(g(a₁₁, ..., a₁ₘ), ..., g(aₙ₁, ..., aₙₘ)) = g(f(a₁₁, ..., aₙ₁), ..., f(a₁ₘ, ..., aₙₘ)).
A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is called medial or entropic.
Generalizations
Composition can be generalized to arbitrary binary relations.
If R ⊆ X × Y and S ⊆ Y × Z are two binary relations, then their composition amounts to
S ∘ R = {(x, z) ∈ X × Z : there exists y ∈ Y such that (x, y) ∈ R and (y, z) ∈ S}.
Considering a function as a special case of a binary relation (namely functional relations), function composition satisfies the definition for relation composition. A small circle has been used for the infix notation of composition of relations, as well as functions. When used to represent composition of functions however, the text sequence is reversed to illustrate the different operation sequences accordingly.
The composition is defined in the same way for partial functions and Cayley's theorem has its analogue called the Wagner–Preston theorem.
The category of sets with functions as morphisms is the prototypical category. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition. The structures given by composition are axiomatized and generalized in category theory with the concept of morphism as the category-theoretical replacement of functions. The reversed order of composition in the formula applies for composition of relations using converse relations, and thus in group theory. These structures form dagger categories. The standard "foundation" for mathematics starts with sets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions.
. . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms (like functions) form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics.
- Saunders Mac Lane, Mathematics: Form and Function
Typography
The composition symbol ∘ is encoded as U+2218 RING OPERATOR; see the Degree symbol article for similar-appearing Unicode characters. In TeX, it is written \circ.
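For example, a minimal LaTeX fragment (an illustration, not from the source) using \circ for the ring operator:

```latex
% Function composition written with \circ
$(f \circ g)(x) = f(g(x))$
```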
| Mathematics | Basics | null |
195952 | https://en.wikipedia.org/wiki/Catamaran | Catamaran | A catamaran () (informally, a "cat") is a watercraft with two parallel hulls of equal size. The wide distance between a catamaran's hulls imparts stability through resistance to rolling and overturning; no ballast is required. Catamarans typically have less hull volume, smaller displacement, and shallower draft (draught) than monohulls of comparable length. The two hulls combined also often have a smaller hydrodynamic resistance than comparable monohulls, requiring less propulsive power from either sails or motors. The catamaran's wider stance on the water can reduce both heeling and wave-induced motion, as compared with a monohull, and can give reduced wakes.
Catamarans were invented by the Austronesian peoples, and enabled their expansion to the islands of the Indian and Pacific Oceans.
Catamarans range in size from small sailing or rowing vessels to large naval ships and roll-on/roll-off car ferries. The structure connecting a catamaran's two hulls ranges from a simple frame strung with webbing to support the crew to a bridging superstructure incorporating extensive cabin or cargo space.
History
Catamarans from Oceania and Maritime Southeast Asia became the inspiration for modern catamarans. Until the 20th century catamaran development focused primarily on sail-driven concepts.
Etymology
The word "catamaran" is derived from the Tamil word, kattumaram (கட்டுமரம்), which means "logs bound together" and is a type of single-hulled raft made of three to seven tree trunks lashed together. The term has evolved in English usage to refer to unrelated twin-hulled vessels.
Development in Austronesia
Catamaran-type vessels were an early technology of the Austronesian peoples. Early researchers like Heine-Geldern (1932) and Hornell (1943) once believed that catamarans evolved from outrigger canoes, but modern authors specializing in Austronesian cultures like Doran (1981) and Mahdi (1988) now believe it to be the opposite.
Two canoes bound together developed directly from minimal raft technologies of two logs tied together. Over time, the twin-hulled canoe form developed into the asymmetric double canoe, where one hull is smaller than the other. Eventually the smaller hull became the prototype outrigger, giving way to the single outrigger canoe, then to the reversible single outrigger canoe. Finally, the single outrigger types developed into the double outrigger canoe (or trimarans).
This would also explain why older Austronesian populations in Island Southeast Asia tend to favor double outrigger canoes, as it keeps the boats stable when tacking. But they still have small regions where catamarans and single-outrigger canoes are still used. In contrast, more distant outlying descendant populations in Oceania, Madagascar, and the Comoros, retained the twin-hull and the single outrigger canoe types, but the technology for double outriggers never reached them (although it exists in western Melanesia). To deal with the problem of the instability of the boat when the outrigger faces leeward when tacking, they instead developed the shunting technique in sailing, in conjunction with reversible single-outriggers.
Despite their being the more "primitive form" of outrigger canoes, they were nonetheless effective, allowing seafaring Polynesians to voyage to distant Pacific islands.
Traditional catamarans
The following is a list of traditional Austronesian catamarans:
Island Melanesia:
Fiji: Drua (or waqa tabu)
Papua New Guinea: Lakatoi
Tonga: Hamatafua, kalia, tongiaki
Polynesia
Cook Islands: Vaka katea
Hawaii: Waʻa kaulua
Marquesas: Vaka touʻua
New Zealand: Waka hourua
Samoa: ʻAlia, amatasi, va'a-tele
Society Islands: Pahi, tipairua
Western development of sailing catamarans
The first documented example of twin-hulled sailing craft in Europe was designed by William Petty in 1662 to sail faster, in shallower waters, in lighter wind, and with fewer crew than other vessels of the time. However, the unusual design met with skepticism and was not a commercial success.
The design remained relatively unused in the West for almost 160 years until the early 19th century, when the Englishman Mayflower F. Crisp built a two-hulled merchant ship in Rangoon, Burma. The ship was christened Original. Crisp described it as "a fast sailing fine sea boat; she traded during the monsoon between Rangoon and the Tenasserim Provinces for several years".
Later that century, the American Nathanael Herreshoff constructed a twin-hulled sailing boat of his own design (US Pat. No. 189,459). The craft, Amaryllis, raced at her maiden regatta on June 22, 1876, and performed exceedingly well. Her debut demonstrated the distinct performance advantages afforded to catamarans over the standard monohulls. It was as a result of this event, the Centennial Regatta of the New York Yacht Club, that catamarans were barred from regular sailing classes, and this remained the case until the 1970s. On June 6, 1882, three catamarans from the Southern Yacht Club of New Orleans raced a 15 nm course on Lake Pontchartrain and the winning boat in the catamaran class, Nip and Tuck, beat the fastest sloop's time by over five minutes.
In 1916, Leonardo Torres Quevedo patented a multihull steel vessel named Binave (Twin Ship), a new type of catamaran which was constructed and tested in Bilbao (Spain) in 1918. The innovative design included two 30 HP Hispano-Suiza marine engines and could modify its configuration when sailing, positioning two rudders at the stern of each float, with the propellers also placed aft. In 1936, Eric de Bisschop built a Polynesian "double canoe" in Hawaii and sailed it home to a hero's welcome in France. In 1939, he published his experiences in a book, Kaimiloa, which was translated into English in 1940.
Roland and Francis Prout experimented with catamarans in 1949 and converted their 1935 boat factory in Canvey Island, Essex (England), to catamaran production in 1954. Their Shearwater catamarans easily won races against monohulls. Yellow Bird, a 1956-built Shearwater III, raced successfully by Francis Prout in the 1960s, is in the collection of the National Maritime Museum Cornwall. Prout Catamarans, Ltd. designed a mast aft rig with the mast aft of midships to support an enlarged jib—more than twice the size of the design's reduced mainsail; it was produced as the Snowgoose model. The claimed advantage of this sail plan was to diminish any tendency for the bows of the vessel to dig in.
In the mid-twentieth century, beachcats became a widespread category of sailing catamarans, owing to their ease of launching and mass production. In California, a maker of surfboards, Hobie Alter, produced the Hobie 14 in 1967, and two years later the larger and even more successful Hobie 16. As of 2016, the Hobie 16 was still being produced with more than 100,000 having been manufactured.
Catamarans were introduced to Olympic sailing in 1976. The two-handed Tornado catamaran was selected for the multihull discipline in the Olympic Games from 1976 through 2008. It was redesigned in 2000. The foiling Nacra 17 was used in the Tokyo 2020 Olympics, which were held in 2021; after the 2015 adoption of the Nacra 15 as a Youth World Championships class and as a new class for the Youth Olympic Games.
Performance
Catamarans have two distinct primary performance characteristics that distinguish them from displacement monohull vessels: lower resistance to passage through the water and greater stability (initial resistance to capsize). Choosing between a monohull and catamaran configuration includes considerations of carrying capacity, speed, and efficiency.
Resistance
At low to moderate speeds, a lightweight catamaran hull experiences resistance to passage through water that is approximately proportional to its speed. A displacement monohull has the same relationship at low speed, since resistance is almost entirely due to surface friction. When boat speed increases and waves are generated, resistance depends on several design factors, particularly the hull displacement-to-length and hull separation-to-length ratios; the result is a non-trivial resistance curve with many small peaks, as wave trains generated at various speeds combine and cancel. For powered catamarans, this implies smaller power plants (although two are typically required). For sailing catamarans, low forward resistance allows the sails to derive power from attached flow, their most efficient mode—analogous to a wing—leading to the use of wingsails in racing craft.
Stability
Catamarans rely primarily on form stability to resist heeling and capsize. Comparing the heeling stability of a rectangular-cross-section monohull of beam B with that of two catamaran hulls of width B/2, separated by a distance of 2×B, shows that the catamaran has an initial resistance to heeling seven times that of the monohull. Compared with a monohull, a cruising catamaran sailboat has a high initial resistance to heeling and capsize—a fifty-footer requires four times the force to initiate a capsize as an equivalent monohull.
Tradeoffs
One measure of the trade-off between speed and carrying capacity is the displacement Froude number (FnV), compared with calm water transportation efficiency. FnV applies when the waterline length is too speed-dependent to be meaningful—as with a planing hull. It uses a reference length, the cubic root of the volumetric displacement of the hull, V, where u is the relative flow velocity between the sea and ship, and g is acceleration due to gravity:
FnV = u / √(g · V^(1/3))
Calm water transportation efficiency of a vessel is proportional to the full-load displacement and the maximum calm-water speed, divided by the corresponding power required.
Large merchant vessels have a FnV between zero and one, whereas higher-performance powered catamarans may approach 2.5, denoting a higher speed per unit volume for catamarans. Each type of vessel has a corresponding calm water transportation efficiency, with large transport ships being in the range of 100–1,000, compared with 11–18 for transport catamarans, denoting a higher efficiency per unit of payload for monohulls.
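As a rough illustration (not from the source; the numeric values below are assumed example figures, not data about any particular vessel), the displacement Froude number can be computed directly from the definition above:

```python
import math

def displacement_froude_number(u, volume, g=9.81):
    """FnV = u / sqrt(g * V**(1/3)), with speed u in m/s and displaced volume V in m^3."""
    reference_length = volume ** (1.0 / 3.0)  # cube root of the volumetric displacement
    return u / math.sqrt(g * reference_length)

# Assumed example: a fast ferry at 20 m/s displacing 1,000 m^3 of water.
print(round(displacement_froude_number(20.0, 1000.0), 2))  # about 2.02, in the high-performance range
```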
SWATH and wave-piercing designs
Two advances over the traditional catamaran are the small-waterplane-area twin hull (SWATH) and the wave-piercing configuration—the latter having become a widely favored design.
SWATH reduces wave-generating resistance by moving displacement volume below the waterline, using a pair of tubular, submarine-like hulls, connected by pylons to the bridge deck with a narrow waterline cross-section. The submerged hulls are minimally affected by waves. The SWATH form was invented by Canadian Frederick G. Creed, who presented his idea in 1938 and was later awarded a British patent for it in 1946. It was first used in the 1960s and 1970s as an evolution of catamaran design for use as oceanographic research vessels or submarine rescue ships. In 1990, the US Navy commissioned the construction of a SWATH ship to test the configuration.
SWATH vessels compare with conventional powered catamarans of equivalent size, as follows:
Larger wetted surface, which causes higher skin friction drag
Significant reduction in wave-induced drag, with the configuration of struts and submerged hull structures
Lower water plane area significantly reduces pitching and heaving in a seaway
No possibility of planing
Higher sensitivity to loading, which may bring the bridge structure closer to the water
Wave-piercing catamarans (strictly speaking they are trimarans, with a central hull and two outriggers) employ a low-buoyancy bow on each hull that is pointed at the water line and rises aft, up to a level, to allow each hull to pierce waves, rather than ride over them. This allows higher speeds through waves than for a conventional catamaran. They are distinguished from SWATH catamarans, in that the buoyant part of the hull is not tubular. The spanning bridge deck may be configured with some of the characteristics of a normal V-hull, which allows it to penetrate the crests of waves.
Wave-piercing catamaran designs have been employed for yachts, passenger ferries, and military vessels.
Applications
A catamaran configuration fills a niche where speed and sea-kindliness is favored over bulk capacity. In larger vessels, this niche favors car ferries and military vessels for patrol or operation in the littoral zone.
Sport
Recreational and sport catamarans typically are designed to have a crew of two and be launched and landed from a beach. Most have a trampoline on the bridging structure, a rotating mast and full-length battens on the mainsail. Performance versions often have trapezes to allow the crew to hike out and counterbalance capsize forces during strong winds on certain points of sail.
For the 33rd America's Cup, both the defender and the challenger built long multihulls. Société Nautique de Genève, defending with team Alinghi, sailed a catamaran. The challenger, BMW Oracle Racing, used a trimaran, replacing its soft sail rig with a towering wing sail—the largest sailing wing ever built. In the waters off Valencia, Spain in February 2010, the BMW Oracle Racing trimaran with its powerful wing sail proved to be superior. This represented a break from the traditional monohulls that had always been sailed in previous America's Cup series.
On San Francisco Bay, the 2013 America's Cup was sailed in long AC72 catamarans (craft set by the rules for the 2013 America's Cup). Each yacht employed hydrofoils and a wing sail. The regatta was won 9–8 by Oracle Team USA against the challenger, Emirates Team New Zealand, in fifteen matches because Oracle Team USA had started the regatta with a two-point penalty.
Yachting has seen the development of multihulls over in length. "The Race" helped precipitate this trend; it was a circumnavigation challenge which departed from Barcelona, Spain, on New Year's Eve, 2000. Because of the prize money and prestige associated with this event, four new catamarans (and two highly modified ones) over in length were built to compete. The largest, PlayStation, owned by Steve Fossett, was long and had a mast which was above the water. Virtually all of the new mega-cats were built of pre-preg carbon fiber for strength and the lowest possible weight. The top speeds of these boats can approach . The Race was won by the -long catamaran Club Med skippered by Grant Dalton. It went round the globe in 62 days at an average speed of .
Whitewater catamarans—sometimes called "cata-rafts"—are widely used for whitewater sports in post-Soviet countries. They consist of two inflatable hulls connected by a lattice frame. The frame of a touring catamaran can be made either of aluminium (duralumin) tubes or of felled tree trunks. The inflatable part has two layers—an airtight bladder with inflation holes and a shell of dense fabric that protects the bladder from mechanical damage. Advantages of such catamarans are light weight, compactness and convenience in transportation (the whole craft packs into a single backpack that meets air-transport standards) and speed of assembly (10–15 minutes for inflation). All-inflatable models are available in North America. A cata-raft design has been used on the Colorado River to handle heavy whitewater, yet maintain a good speed through the water.
Cruising
Cruising sailors must make trade-offs among volume, useful load, speed, and cost in choosing a boat. Choosing a catamaran offers increased speed at the expense of reduced load per unit of cost. Howard and Doane describe the following tradeoffs between cruising monohulls and catamarans: A long-distance, offshore cruising monohull may be as short as for a given crew complement and supporting supplies, whereas a cruising catamaran would need to be to achieve the same capacity. In addition to greater speed, catamarans draw less water than do monohulls—as little as —and are easier to beach. Catamarans are harder to tack and take up more space in a marina. Cruising catamarans entail added expense for having two engines and two rudders. Tarjan adds that cruising catamarans can maintain a comfortable per day passage, with the racing versions recording well over per day. In addition, they do not heel more than 10–12 degrees, even at full speed on a reach.
Powered cruising catamarans share many of the amenities found in a sail cruising catamaran. The saloon typically spans two hulls wherein are found the staterooms and engine compartments. As with sailing catamarans, this configuration minimizes boat motion in a seaway.
The Swiss-registered wave-piercing catamaran, Tûranor PlanetSolar, which was launched in March 2010, is the world's largest solar powered boat. It completed a circumnavigation of the globe in 2012.
Passenger transport
The 1970s saw the introduction of catamarans as high-speed ferries, as pioneered by Westermoen Hydrofoil in Mandal, Norway, which launched the Westamaran design in 1973. The Stena Voyager was an example of a large, fast ferry, typically traveling at a speed of , although it was capable of over .
The Australian island Tasmania became the site of builders of large transport catamarans—Incat in 1977 and Austal in 1988—each building civilian ferries and naval vessels. Incat built HSC Francisco, a high-speed trimaran that, at 58 knots, is (as of 2014) the fastest passenger ship in service.
Military
The first warship to be propelled by a steam engine, named Demologos or Fulton and built in the United States during the War of 1812, was a catamaran with a paddle wheel between her hulls.
In the early 20th century several catamarans were built as submarine salvage ships: SMS Vulkan and SMS Cyclop of Germany, Kommuna of Russia, and Kanguro of Spain, all designed to lift stricken submarines by means of huge cranes above a moon pool between the hulls. Two Cold War-era submarine rescue ships, USS Pigeon and USS Ortolan of the US Navy, were also catamarans, but did not have the moon pool feature.
The use of catamarans as high-speed naval transport was pioneered by HMAS Jervis Bay, which was in service with the Royal Australian Navy between 1999 and 2001. The US Military Sealift Command now operates several Expeditionary Fast Transport catamarans owned by the US Navy; they are used for high speed transport of military cargo, and to get into shallow ports.
The Makar-class is a class of two large catamaran-hull survey ships built for the Indian Navy. As of 2012, one vessel, INS Makar (J31), was in service and the second was under construction.
First launched in 2004 at Shanghai, the Houbei class missile boat of the People's Liberation Army Navy (PLAN) has a catamaran design to accommodate the vessel's stealth features.
The Tuo Chiang-class corvette is a class of Taiwanese-designed fast and stealthy multi-mission wave-piercing catamaran corvettes first launched in 2014 for the Republic of China (Taiwan) Navy.
| Technology | Naval transport | null |
196020 | https://en.wikipedia.org/wiki/Crocodilia | Crocodilia | Crocodylia () is an order of semiaquatic, predatory reptiles that are known as crocodilians. They first appeared during the Late Cretaceous and are the closest living relatives of birds. Crocodilians are a type of crocodylomorph pseudosuchian, a subset of archosaurs that appeared about 235 million years ago and were the only survivors of the Triassic–Jurassic extinction event. While other crocodylomorph groups further survived the Cretaceous–Paleogene extinction event, notably sebecosuchians, only the crocodilians have survived into the Quaternary. The order includes the true crocodiles (family Crocodylidae), the alligators and caimans (family Alligatoridae), and the gharial and false gharial (family Gavialidae). Although the term "crocodiles" is sometimes used to refer to all of these families, the term "crocodilians" is less ambiguous.
Extant crocodilians have flat heads with long snouts and tails that are compressed on the sides, with their eyes, ears, and nostrils at the top of the head. Alligators and caimans tend to have broader U-shaped jaws that, when closed, show only the upper teeth, whereas crocodiles usually have narrower V-shaped jaws with both rows of teeth visible when closed. Gharials have extremely slender, elongated jaws. The teeth are conical and peg-like, and the bite is powerful. All crocodilians are good swimmers and can move on land in a "high walk" position, traveling with their legs erect rather than sprawling. Crocodilians have thick skin covered in non-overlapping scales and, like birds, crocodilians have a four-chambered heart and lungs with unidirectional airflow.
Like most other reptiles, crocodilians are ectotherms. They are found mainly in the warm and tropical areas of the Americas, Africa, Asia, and Oceania, usually occupying freshwater habitats, though some can live in saline environments and even swim out to sea. Crocodilians have a largely carnivorous diet; some species like the gharial are specialized feeders while others, like the saltwater crocodile, have generalized diets. They are generally solitary and territorial, though they sometimes hunt in groups. During the breeding season, dominant males try to monopolize available females, which lay their eggs in holes or mounds and, like many birds, they care for their hatched young.
Some species of crocodilians, particularly the Nile crocodile, are known to have attacked humans, which through activities that include hunting, poaching, and habitat destruction are the greatest threat to crocodilian populations. Farming of crocodilians has greatly reduced unlawful trading in skins of wild-caught animals. Artistic and literary representations of crocodilians have appeared in human cultures around the world since Ancient Egypt.
Spelling and etymology
"Crocodilia" and "Crocodylia" have been used interchangeably for decades, starting with Schmidt's re-description of the group from the formerly defunct term Loricata. Schmidt used the older term "Crocodilia", based on Richard Owen's original name for the group. Wermuth chose "Crocodylia" as the proper name, basing it on the type genus Crocodylus (Laurenti, 1768). Dundee, in a revision of many reptilian and amphibian names, argued strongly for "Crocodylia". Following the advent of cladistics and phylogenetic nomenclature, a more-solid justification for one spelling over the other was proposed.
Prior to 1988, Crocodilia was a group that encompassed the modern-day animals, as well as their more-distant relatives that are now classified in the larger groups Crocodylomorpha and Pseudosuchia. Under its current definition as a crown group, rather than a stem-based group, Crocodylia is now restricted to the last common ancestor of today's crocodilians and all of its descendants, living or extinct.
Crocodilia appears to be a Latinism of the Greek word (), which means both lizard and Nile crocodile. Crocodylia, as coined by Wermuth in regards to the genus Crocodylus, appears to be derived from the ancient Greek ()—meaning shingle or pebble—and or (), meaning worm. The name may refer to the animal's habit of resting on the pebbled shores of the Nile.
Phylogeny and evolution
Origins from pseudosuchians
Crocodilians and birds are members of the clade Archosauria, the archosaurs. Archosaurs are distinguished from other reptiles particularly by two sets of extra openings in the skull: the antorbital fenestra, located in front of the animal's eye socket, and the mandibular fenestra on the jaw. Archosauria has two main groups: the Pseudosuchia (crocodilians and their relatives) and the Avemetatarsalia (dinosaurs, pterosaurs, and their relatives). The split between these two groups is assumed to have happened close to the Permian–Triassic extinction event, which is informally known as the Great Dying.
Crocodylomorpha, the group that later gave rise to modern crocodilians, emerged in the Late Triassic. The most basal crocodylomorphs were large, whereas the ones that gave rise to crocodilians were small, slender, and leggy. This evolutionary grade, the "sphenosuchians", first appeared around the Carnian stage of the Late Triassic. They ate small, fast prey and survived into the Late Jurassic. As the Triassic ended, crocodylomorphs became the only surviving pseudosuchians.
Early crocodyliform diversity
During the early Jurassic period, dinosaurs became dominant on land and the crocodylomorphs underwent major adaptive diversifications to fill ecological niches vacated by recently extinguished groups. Mesozoic crocodylomorphs had a much greater diversity of forms than modern crocodilians; they became small, fast-moving insectivores, specialist fish-eaters, marine and terrestrial carnivores, and herbivores. The earliest stage of crocodilian evolution was the protosuchians in the late Triassic and early Jurassic, which were followed by the mesosuchians that diversified widely during the Jurassic and the Tertiary. The eusuchians first appeared during the Early Cretaceous; this clade includes modern crocodilians.
Protosuchians were small, mostly terrestrial animals with short snouts and long limbs. They had bony armor in the form of two rows of plates extending from head to tail; this armor would still be found in later species. Their vertebrae were convex on the two main articulating surfaces. The secondary palate was little developed; it consisted only of a maxilla. The mesosuchians underwent a fusion of the palatine bones to the secondary palate, and a great extension of the nasal passages behind the palatine and in front of the pterygoid bones. This adaptation allowed the animal to breathe through its nostrils while its mouth was open underwater. The eusuchians continued this process; the interior nostrils now opened through an aperture in the pterygoid bones. The vertebrae of eusuchians had one convex and one concave articulating surface. The oldest-known eusuchian is Hylaeochampsa vectiana from the Early Cretaceous whose remains occur on the Isle of Wight in the United Kingdom. It was followed by crocodilians such as the Planocraniidae, the hoofed crocodiles, in the Palaeogene. Spanning the Cretaceous and Palaeogene periods is the genus Borealosuchus of North America, with six species, though its phylogenetic position is not settled.
Diversification of modern crocodilians
The three primary branches of Crocodilia had diverged by the Late Cretaceous. The possible earliest-known members of the group may be Portugalosuchus and Zholsuchus from the Cenomanian-Turonian stages. Some researchers have disputed the classification of Portugalosuchus, claiming it may be outside the crown-group crocodilians. The morphology-based phylogenetic analyses, which are based on new neuroanatomical data obtained from its skull using micro-CT scans, suggest this taxon is a crown-group crocodilian and a member of the 'thoracosaurs' that was recovered as a sister taxon of Thoracosaurus within Gavialoidea, though it is uncertain whether 'thoracosaurs' were true gavialoids.
Definitive alligatoroids first appeared during the Santonian-Campanian stages, while definitive longirostres first appeared during the Maastrichtian stage. The earliest-known alligatoroids and gavialoids include highly derived forms, which indicates the time of the divergence into the three lineages must have been a pre-Campanian event. Additionally, scientists conclude environmental factors played a major role in the evolution of crocodilians and their ancestors; warmer climate is associated with high evolutionary rates and large body sizes.
Relationships
Crocodylia is cladistically defined as the last common ancestor of Gavialis gangeticus (gharial), Alligator mississippiensis (American alligator), and Crocodylus rhombifer (the Cuban crocodile) and all of its descendants. The phylogenetic relationships between crocodilians has been the subject of debate and conflicting results. Many studies and their resulting cladograms ("family trees") of crocodilians have found the "short-snouted" families of Crocodylidae and Alligatoridae to be close relatives, and the long-snouted Gavialidae is a divergent branch of the tree. The resulting group of short-snouted species, named Brevirostres, was mainly supported by morphological studies that analyzed only skeletal features.
Recent molecular studies using DNA sequencing of living crocodilians have rejected the distinct group Brevirostres; the long-snouted gavialids are more closely related to crocodiles than to alligators, and the new grouping of gavialids and crocodiles is named Longirostres.
A cladogram published in 2021, based on mitochondrial DNA including that of the recently extinct Voay robustus, shows the relationships of the major extant crocodilian groups.
Anatomy and physiology
Though there is diversity in snout and tooth shape, all crocodilian species have essentially the same body morphology. They have solidly built, lizard-like bodies with wide, cylindrical torsos, flat heads, long snouts, short necks, and tails that are compressed from side to side. Their limbs are reduced in size; the front feet have five mostly non-webbed digits, and the hind feet have four webbed digits and an extra fifth. The pelvis and ribs of crocodilians are modified; the cartilaginous processes of the ribs allow the thorax to collapse when submerging and the structure of the pelvis can accommodate large amounts of food, or more air in the lungs. Both sexes have a cloaca, a single chamber and outlet near the tail into which the intestinal, urinary and genital tracts open. It houses the penis in males and the clitoris in females. The crocodilian penis is permanently erect; it relies on cloacal muscles to protrude it, and elastic ligaments and a tendon to retract it. The gonads are located near the kidneys.
Crocodilians range in size from the dwarf caimans and African dwarf crocodiles, which reach , to the saltwater crocodile and Nile crocodile, which reach and weigh up to . Some prehistoric species such as the late-Cretaceous Deinosuchus were even larger, at up to about and . They tend to be sexually dimorphic; males are much larger than females.
Locomotion
Crocodilians are excellent swimmers. During aquatic locomotion, the muscular tail undulates from side to side to drive the animal through the water while the limbs are held close to the body to reduce drag. When the animal needs to stop or change direction, the limbs are splayed out. Swimming is normally achieved with gentle sinuous movements of the tail, but the animals can move more quickly when pursuing or being pursued. Crocodilians are less well-adapted for moving on land, and are unusual among vertebrates in having two means of terrestrial locomotion: the "high walk" and the "low walk". The ankle joints flex in a different way from those of other reptiles, a feature crocodilians share with some early archosaurs. One of the upper row of ankle bones, the talus bone, moves with the tibia and fibula, while the heel bone moves with the foot and is where the ankle joint is located. The result is the legs can be held almost vertically beneath the body when on land, and the foot swings during locomotion as the ankle rotates.
The limbs move much the same as those of other quadrupeds; the left forelimb moves first, followed by the right hindlimb, then the right forelimb, and finally the left hindlimb. The high walk of crocodilians, with the belly and most of the tail held off the ground and the limbs held directly under the body, resembles that of mammals and birds. The low walk is similar to the high walk, but the body is not raised, and is quite different from the sprawling walk of salamanders and lizards. Crocodilians can instantly change from one walk to the other; the high walk is the usual means of locomotion on land. The animal may immediately push its body up to use this gait, or it may take one or two strides of low walk before raising the body. Unlike most other land vertebrates, when crocodilians increase their pace of travel, they increase the speed at which the lower half of each limb (rather than the whole leg) swings forward, so stride length increases while stride duration decreases.
Though they are typically slow on land, crocodilians can produce brief bursts of speed; some can run at for short distances. In some small species, such as the freshwater crocodile, running can progress to galloping, which involves the hind limbs launching the body forward and the fore limbs subsequently taking the weight. Next, the hind limbs swing forward as the spine flexes dorso-ventrally, and this sequence of movements is repeated. During terrestrial locomotion, a crocodilian can keep its back and tail straight because muscles attach the scales to the vertebrae. Whether on land or in water, crocodilians can jump or leap by pressing their tails and hind limbs against the substrate and launching themselves into the air. A fast entry into water from a muddy bank can be effected by plunging to the ground, twisting the body from side to side and splaying out the limbs.
Jaws and teeth
The snout shape of crocodilians varies between species. Alligators and caimans generally have wide, U-shaped snouts while those of crocodiles are typically narrower and V-shaped. The snouts of the gharials are extremely elongated. The muscles that close the jaws are larger and more powerful than the ones that open them, and a human can quite easily hold shut a crocodilian's jaws, but prying open the jaws is extremely difficult. The powerful closing muscles attach at the middle of the lower jaw. The jaw hinge attaches behind the atlanto-occipital joint, giving the animal a wide gape. A folded membrane holds the tongue stationary.
Crocodilians have some of the strongest bite forces in the animal kingdom. In a study published in 2003, an American alligator's bite force was measured at up to ; and in a 2012 study, a saltwater crocodile's bite force was measured at . This study found no correlation between bite force and snout shape. The gharial's extremely slender jaws are relatively weak and are built for quick jaw closure. The bite force of Deinosuchus may have measured , even greater than that of theropod dinosaurs like Tyrannosaurus.
Crocodilian teeth vary from dull and rounded to sharp and pointed. Broad-snouted species have teeth that vary in size, while those of slender-snouted species are more consistent. In general, in crocodiles and gharials, both rows of teeth are visible when the jaws are closed because their teeth fit into grooves along the outside lining of the upper jaw. By contrast, the lower teeth of alligators and caimans normally fit into holes along the inside lining of the upper jaw, so they are hidden when the jaws are closed. Crocodilians are homodonts, meaning each of their teeth is of the same type; they do not have different tooth types, such as canines and molars. Crocodilians are polyphyodonts; they are able to replace each of their approximately 80 teeth up to 50 times in their 35-to-75-year lifespan. Crocodilians are the only non-mammalian vertebrates with tooth sockets. Next to each full-grown tooth is a small replacement tooth and an odontogenic stem cell in the dental lamina that can be activated when required. Tooth replacement slows and eventually stops as the animal ages.
Sense organs
The eyes, ears and nostrils of crocodilians are at the top of the head; this placement allows them to stalk their prey with most of their bodies underwater. When in bright light, the pupils of a crocodilian contract into narrow slits, whereas in darkness they become large circles, as is typical for animals that hunt at night. Crocodilians' eyes have a tapetum lucidum that enhances vision in low light. When the animal completely submerges, the nictitating membranes cover its eyes. Glands on the nictitating membrane secrete a salty lubricant that keeps the eye clean. When a crocodilian leaves the water and dries off, this substance is visible as "tears". While eyesight in air is fairly good, it is significantly weakened underwater. Crocodilians appear to have undergone a "nocturnal bottleneck" early in their history, during which their eyes lost traits like sclerotic rings, an annular pad of the lens and coloured cone oil droplets, giving them dichromatic vision (red-green colourblindness). Since then, some crocodilians appear to have re-evolved full-colour vision.
The ears are adapted for hearing both in air and underwater, and the eardrums are protected by flaps that can be opened or closed by muscles. Crocodilians have a wide hearing range, with sensitivity comparable to most birds and many mammals. Hearing in crocodilians does not degrade as the animal ages because they can regrow and replace hair cells. The well-developed trigeminal nerve allows them to detect vibrations in water, such as those made by potential prey. Crocodilians have a single olfactory chamber and the vomeronasal organ disappears when they reach adulthood. Behavioural and olfactometer experiments indicate crocodiles detect both air-borne and water-soluble chemicals, and use their olfactory system for hunting. When above water, crocodiles enhance their ability to detect volatile odorants by gular pumping, a rhythmic movement of the floor of the pharynx. Crocodiles appear to have lost their pineal organ but still show signs of melatonin rhythms.
Skin and scales
The skin of crocodilians is clad in non-overlapping scales known as scutes that are covered by beta-keratin. Many of the scutes are strengthened by bony plates known as osteoderms. Scutes are most numerous on the back and neck of the animal. The belly and underside of the tail have rows of broad, flat, square-shaped scales. Between crocodilian scales are hinge areas that consist mainly of alpha-keratin. Underneath the surface, the dermis is thick with collagen. Both the head and jaws lack scales and are instead covered in tight, keratinised skin that is fused directly to the bones of the skull and which, over time, develop a pattern of cracks as the skull develops. The skin on the neck and sides is loose. The scutes contain blood vessels and may act to absorb or release heat during thermoregulation. Research also suggests alkaline ions released into the blood from the calcium and magnesium in the dermal bones act as a buffer during prolonged submersion when increasing levels of carbon dioxide would otherwise cause acidosis.
Some scutes contain a single pore known as an integumentary sense organ. Crocodiles and gharials have these on large parts of their bodies, while alligators and caimans only have them on the head. Their exact function is not fully understood, but it has been suggested they may be mechanosensory organs. There are prominent, paired integumentary glands in skin folds on the throat, and others in the side walls of the cloaca. Various functions for these have been suggested; they may play a part in communication—indirect evidence suggests they secrete pheromones used in courtship or nesting. The skin of crocodilians is tough and can withstand damage from conspecifics, and the immune system is effective enough to heal wounds within a few days. In the genus Crocodylus, the skin contains chromatophores, allowing animals to change colour from dark to light and vice versa.
Circulation
Crocodilians may have the most complex circulatory system of any vertebrate, with a four-chambered heart and two ventricles, an unusual trait among extant reptiles. The left and right aortas are connected by a hole called the Foramen of Panizza. Like birds and mammals, crocodilians have vessels that separately direct blood flow to the lungs and to the rest of the body. They also have unique, cog-teeth-like valves that, when interlocked, direct blood to the left aorta and away from the lungs, and then around the body. This system may allow the animals to remain submerged for a lengthy period, but this explanation has been questioned. Other possible reasons for the peculiar circulatory system include assistance with thermoregulatory needs, prevention of pulmonary oedema, and quick recovery from metabolic acidosis. Retention of carbon dioxide within the body permits an increase in the rate of gastric acid secretion and thus the efficiency of digestion, and other gastrointestinal organs such as the pancreas, spleen, small intestine, and liver also function more efficiently.
When submerged, a crocodilian's heart may beat at only once or twice a minute, with little blood flow to the muscle. When it rises and takes a breath, its heart rate almost immediately increases and the muscles receive newly oxygenated blood. Unlike many marine mammals, crocodilians have little myoglobin to store oxygen in their muscles. While diving, an increasing concentration of bicarbonate ions causes haemoglobin in the blood to release oxygen for the muscles.
Respiration
Crocodilians were traditionally thought to breathe like mammals, with airflow tidally moving in and out, but studies published in 2010 and 2013 conclude respiration in crocodilians is more bird-like, with airflow moving in a unidirectional loop within the lungs. During inhalation, air flows through the trachea and into two primary bronchi (airways) that divide into narrower secondary passageways. The air continues to move through these, then into even narrower tertiary airways, and then into other secondary airways that were bypassed the first time. The air then flows back into the primary airways and is exhaled.
In crocodilians, the diaphragmaticus muscle, which is analogous to the diaphragm in mammals, attaches the lungs to the liver and pelvis. During inhalation, the external intercostal muscles expand the ribs, allowing the animal to take in more air, while the ischiopubis muscle causes the hips to swing downwards and push the belly outward, while the diaphragmaticus pulls the liver back. When exhaling, the internal intercostal muscles push the ribs inwards while the rectus abdominis pulls the hips and liver forwards and the belly inward. Crocodilians can also use these muscles to adjust the position of their lungs, controlling their buoyancy in the water. An animal sinks when the lungs are pulled towards the tail and floats when they move back towards the head. This allows them to move through the water without creating disturbances that could alert potential prey. They can also spin and twist by moving their lungs laterally.
When swimming and diving, crocodilians appear to rely on lung volume for buoyancy more than for oxygen storage. Just before diving, the animal exhales to reduce its lung volume and reach negative buoyancy. When diving, the nostrils of a crocodilian shut tight. All species have a palatal valve, a membranous flap of skin at the back of the oral cavity (mouth) that protects the oesophagus and trachea when the animal is underwater. This enables them to open their mouths underwater without drowning. Crocodilians typically remain underwater for up to fifteen minutes, but under ideal conditions, some can hold their breath for up to two hours. The depth to which crocodilians can dive is unknown, but crocodiles can dive to at least .
Crocodilians vocalize by vibrating vocal folds in the larynx. The folds of the American alligator have a complex morphology consisting of epithelium, lamina propria and muscle, and according to Riede et al. (2015): "it is reasonable to expect species-specific morphologies in vocal folds/analogues as far back as basal reptiles". Although crocodilian vocal folds lack the elasticity of mammalian ones, the larynx is still capable of complex motor control similar to that in birds and mammals, and can adequately control its fundamental frequency.
Digestion
Crocodilian teeth can only hold onto prey, and food is swallowed unchewed. The stomach consists of a grinding gizzard and a digestive chamber. Indigestible items are regurgitated as pellets. The stomach is more acidic than that of any other vertebrate and contains ridges for gastroliths, which play a role in the crushing of food. Digestion takes place more quickly at higher temperatures. When digesting a meal, CO2-rich blood near the lungs is redirected to the stomach, supplying more acid for the oxyntic glands. Compared to crocodiles, alligators digest more carbohydrates relative to protein. Crocodilians have a very low metabolic rate and thus low energy requirements. They can withstand extended fasting by living on stored fat. Even recently hatched crocodiles are able to survive 58 days without food, losing 23% of their bodyweight during this time.
Thermoregulation
Crocodilians are ectotherms, relying mostly on their environment to control their body temperature. The main means of warming is the sun's heat, while immersion in water may either raise the animal's temperature via thermal conduction or cool it in hot weather. The main method of regulating temperature is behavioural; temperate-living alligators may start the day by basking in the sun on land and move into the water for the afternoon, with parts of the back breaking the surface so it can still be warmed by the sun. At night, the animal remains submerged and its temperature slowly falls. The basking period is longer in winter. Tropical crocodiles bask briefly in the morning and move into the water for the rest of the day. They may return to land at nightfall when the air cools. Animals also cool themselves by gaping the mouth, which cools by evaporation from the mouth lining. By these means, the body temperature of crocodilians is usually maintained between , and mainly stays in the range .
Both the American and Chinese alligator can be found in areas that sometimes experience periods of frost in winter. In cold weather, alligators remain submerged with their tails in deeper, less-cold water and their nostrils projecting just above the surface. If ice forms on the water, they maintain ice-free breathing holes, and there have been occasions when their snouts have become frozen into ice. Temperature-sensing probes implanted in wild American alligators have found their core body temperatures can fall to around , but as long as they remain able to breathe, they show no ill effects when the weather warms.
Osmoregulation
All crocodilians need to maintain a suitable concentration of salt in body fluids. Osmoregulation is related to the quantity of salts and water that are exchanged with the environment. Intake of water and salts occurs across the lining of the mouth, when water is drunk, incidentally while feeding, and when present in foods. Water is lost during breathing, and salts and water are lost in the urine and faeces, through the skin, and in crocodiles and gharials via salt-excreting glands on the tongue. The skin is a largely effective barrier to both water and ions. Gaping causes water loss by evaporation. Large animals are better able than small ones to maintain homeostasis at times of osmotic stress. Newly hatched crocodilians are much less tolerant of exposure to salt water than are older juveniles, presumably because they have a higher surface-area-to-volume ratio.
The kidneys and excretory system are much the same as those in other reptiles, but crocodilians do not have a bladder. In fresh water, the osmolality (the concentration of solutes that contribute to a solution's osmotic pressure) in the plasma is much higher than that of the surrounding water. The animals are well-hydrated, the urine in the cloaca is abundant and dilute, and nitrogen is excreted as ammonium bicarbonate. Sodium loss is low and mainly occurs through the skin in freshwater conditions. In seawater, the opposite is true; the osmolality in the plasma is lower than that of the surrounding water, causing the animal to dehydrate. The cloacal urine is much more concentrated, white, and opaque, and nitrogenous waste is mostly excreted as insoluble uric acid.
Distribution and habitat
Crocodilians are amphibious, living both in water and on land. The last-surviving, fully terrestrial genus Mekosuchus became extinct about 3,000 years ago after humans had arrived on the Pacific islands it inhabited, making the extinction possibly anthropogenic. Crocodilians are typically creatures of the tropics; the main exceptions are the American and Chinese alligators, whose ranges are the southeastern United States and the Yangtze River, respectively. Florida, United States, is the only place where the ranges of crocodiles and alligators coincide. Crocodilians live almost exclusively in lowland habitats, and do not appear to live above . With a range extending from eastern India to New Guinea and northern Australia, the saltwater crocodile is the widest-spread species.
Crocodilians use various types of aquatic habitats. Due to their diet, gharials are found in pools and backwaters of rapidly flowing rivers. Caimans prefer warm, turbid lakes and ponds, and slow-moving parts of rivers, although the dwarf caiman inhabits cool, relatively clear, fast-flowing waterways, often near waterfalls. The Chinese alligator is found in slow-moving, turbid rivers that flow across China's floodplains. The highly adaptable American alligator is found in swamps, rivers and lakes with clear or turbid water. Crocodiles live in marshes, lakes and rivers, and can live in saline environments including estuaries and mangrove swamps. Although American and saltwater crocodiles swim out to sea, no crocodilian species can be considered truly marine. Several extinct species, including the recently extinct Ikanogavialis papuensis, which occurred along the coastlines of the Solomon Islands, had marine habitats. Climatic factors locally affect crocodilians' distribution. During the dry season, caimans can be restricted for several months to deep pools in rivers; in the rainy season, much of the savanna in the Orinoco Llanos is flooded, and they disperse widely across the plain. West African crocodiles in the deserts of Mauritania mainly live in gueltas and floodplains, but they retreat underground and to rocky shelters, and enter aestivation during the driest periods.
Crocodilians also use terrestrial habitats such as forests, savannas, grasslands and deserts. Dry land is used for basking, nesting and escaping from temperature extremes. Several species make use of shallow burrows on land to keep cool or warm, depending on the environment. Four species of crocodilians climb trees to bask in areas lacking a shoreline. Tropical rainforests bordering rivers and lakes inhabited by crocodilians are of great importance to them, creating microhabitats where they can flourish. The roots of the trees absorb rainwater and slowly release it back into the environment. This keeps crocodilian habitat moist during the dry season while preventing flooding during the wet season.
Behaviour and life history
Adult crocodilians are typically territorial and solitary. Individuals may guard basking spots, nesting sites, feeding areas, nurseries, and overwintering sites. Male saltwater crocodiles defend areas with several female nesting sites year-round. Some species are occasionally gregarious, particularly during droughts, when several individuals gather at remaining water sites. Individuals of some species may share basking sites at certain times of the day.
Feeding
Crocodilians are largely carnivorous. The diets of different species vary with snout shape and tooth sharpness. Species with sharp teeth and long, slender snouts, like the Indian gharial and Australian freshwater crocodile, are specialized for snapping up fish, insects, and crustaceans. Extremely broad-snouted species with blunt teeth, like the Chinese alligator and the broad-snouted caiman, are equipped for crushing hard-shelled molluscs. Species whose snouts and teeth are intermediate between these two forms, such as the saltwater crocodile and American alligator, have generalized diets and opportunistically feed on invertebrates, fish, amphibians, reptiles, birds and mammals. Though mostly carnivorous, several species of crocodilian have been observed consuming fruit, and this may play a role in seed dispersal.
In general, crocodilians are stalk-and-ambush predators, though hunting strategies vary between species and with their prey. Terrestrial prey is stalked from the water's edge, then grabbed and drowned. Gharials and other fish-eating species sweep their jaws from side to side to snatch prey; these animals can leap out of the water to catch birds, bats and leaping fish. A small prey animal can be killed by whiplash as the predator shakes its head. When foraging for fish in shallow water, caimans use their tails and bodies to herd fish and may dig for bottom-dwelling invertebrates. The smooth-fronted caiman will leave the water to hunt terrestrial prey.
Crocodilians are unable to chew and need to swallow food whole, so prey that is too large to swallow is torn into pieces. Crocodilians may be unable to deal with a large animal with a thick hide, and may wait until it becomes putrid and comes apart more easily. To tear a chunk of tissue from a large carcass, a crocodilian continuously spins its body while holding the prey with its jaws, a manoeuvre known as the death roll. During cooperative feeding, some individuals may hold onto the prey while others perform the roll. The animals do not fight, and each retires with a piece of flesh and awaits its next feeding turn. After feeding together, individuals may depart alone. Crocodilians typically consume prey with their heads above water. The food is held with the tips of the jaws, tossed towards the back of the mouth by an upward jerk of the head and then gulped down. There is no hard evidence that crocodilians cache kills for later consumption.
Reproduction and parenting
Crocodilians are generally polygynous, and individual males try to mate with as many females as they can. Monogamous pairings of American alligators have been recorded. Dominant male crocodilians patrol and defend territories, which contain several females. Males of some species, like the American alligator, try to attract females with elaborate courtship displays. During courtship, crocodilian males and females may rub against each other, circle around, and perform swimming displays. Copulation typically occurs in water. When a female is ready to mate, she arches her back while her head and tail dip underwater. The male rubs across the female's neck and grasps her with his hindlimbs, and places his tail underneath hers so their cloacas align and his penis can be inserted. Mating can last up to 15 minutes, during which time the pair continuously submerge and surface. While dominant males usually monopolise females, single American alligator clutches can be sired by three different males.
Depending on the species, female crocodilians may construct either holes or mounds as nests, the latter made from vegetation, litter, sand or soil. Nests are typically found near dens or caves. Those made by different females are sometimes close to each other, particularly in hole-nesting species. Clutches may contain between ten and fifty eggs. Crocodilian eggs are protected by hard shells made of calcium carbonate. The incubation period is two to three months. The sex of the developing, incubating young is temperature-dependent; constant nest temperatures above produce more males, while those below produce more females. Sex in crocodilians may be established in a short period of time, and nests are subject to changes in temperature. Most natural nests produce hatchlings of both sexes, though single-sex clutches occur.
All of the hatchlings in a clutch may leave the nest on the same night. Crocodilians are unusual among reptiles in the amount of parental care provided after the young hatch. The mother helps excavate hatchlings from the nest and carries them to water in her mouth. Newly hatched crocodilians gather together and follow their mother. Both male and female adult crocodilians will respond to vocalizations by hatchlings. Female spectacled caimans in the Venezuelan Llanos are known to leave their young in nurseries or crèches, and one female guards them. Hatchlings of some species tend to bask in a group during the day and start to forage separately in the evening. The time it takes young crocodilians to reach independence can vary. For American alligators, groups of young associate with adults for one-to-two years while juvenile saltwater and Nile crocodiles become independent in a few months.
Communication
Crocodilians are the most vocal of the non-avian reptiles. They can communicate with sounds, including barks, bellows, chirps, coughs, growls, grunts, hisses, moos, roars, toots and whines. Young start communicating with each other before they are hatched. It has been shown the young will repeat, one after another, a light tapping noise near the nest. This early communication may help young to hatch simultaneously. After breaking out of the egg, a juvenile produces yelps and grunts, either spontaneously or as a result of external stimuli. Even unrelated adults respond quickly to juvenile distress calls.
Juveniles are highly vocal, both when scattering in the evening and congregating in the morning. Nearby adults, presumably the parents, may warn young of predators or alert them to the presence of food. The range and quantity of vocalisations vary between species. Alligators and caimans are the noisiest while some crocodile species are almost completely silent. In some crocodile species, individuals "roar" at others when they get too close. The American alligator is exceptionally noisy; it emits a series of up to seven throaty bellows, each a couple of seconds long, at ten-second intervals. It also makes various grunts, growls and hisses. Males create vibrations in water to send out infrasonic signals that attract females and intimidate rivals. The enlarged boss of the male gharial may serve as a sound resonator.
The head slap is another form of acoustic communication. It typically starts when an animal in the water elevates its snout and remains stationary. After some time, the jaws are sharply opened and then clamped shut with a biting motion that makes a loud slapping sound, immediately followed by a loud splash; the head may then submerge and the animal may blow bubbles from the throat or nostrils. Some species then roar while others slap the water with their tails. Episodes of head slapping spread through a group. The purpose varies, but head slapping seems to have a social function and is also used in courtship. Dominant individuals intimidate rivals by swimming at the surface and displaying their large body size, while subordinates submit by holding their heads forward above the water with the jaws open before fleeing below the surface.
Growth and mortality
Eggs and hatchlings have a high death rate, and nests face threats from floods, drying, overheating and predators. Flooding is a major cause of breeding failure in crocodilians: nests are submerged, developing embryos are deprived of oxygen and juveniles are swept away. Despite the maternal care they receive, eggs and hatchlings are commonly lost to predation. Predators, both mammalian and reptilian, may raid nests and eat crocodilian eggs. After hatching and reaching the water, young are still under threat.
In addition to terrestrial predators, young are subject to aquatic attacks by fish. Birds take their toll, and malformed individuals are unlikely to survive. In northern Australia, the survival rate for saltwater crocodile hatchlings is 25 percent but this improves with each year of life, reaching up to 60 percent by year five. Mortality rates among subadults and adults are low, though they are occasionally preyed upon by large cats and snakes. Elephants and hippopotamuses may defensively kill crocodiles. Authorities are uncertain how much cannibalism occurs among crocodilians. Adults do not normally eat their own offspring but there is some evidence of subadults feeding on juveniles, while subadults may be preyed on by adults. Adults appear more likely to protect juveniles and may chase away subadults from nurseries. Rival male Nile crocodiles sometimes kill each other during the breeding season.
Growth in hatchlings and young crocodilians depends on the food supply. Animals reach sexual maturity at a certain length, regardless of age. Saltwater crocodiles reach maturity at for females and for males. Australian freshwater crocodiles take ten years to reach maturity at . The spectacled caiman matures earlier, reaching its mature length of in four to seven years. Crocodilians continue to grow throughout their lives; males in particular continue to gain weight as they age, but this is mostly in the form of extra girth rather than length. Crocodilians can live for 35–75 years; their age can be determined by growth rings in their bones.
Cognition
Crocodilians are among the most cognitively complex non-avian reptiles. Embryological studies of developing amniotes have shown similar brain structures in the telencephalon of crocodilians, mammals and birds. Accordingly, several behaviours once thought to be unique to mammals and birds have recently been discovered in crocodilians. Some crocodilian species have been observed using sticks and branches to lure nest-building birds, though other authors have argued that the purpose, if any, of stick-displaying is at best ambiguous. Several species have been observed hunting cooperatively, herding and chasing prey. Play, the free, intrinsically motivated activity of young individuals, has been observed on numerous occasions in both captive and wild crocodilians; young alligators and crocodiles regularly engage in object play and social play. Not all higher social behaviours are shared across these clades; a 2023 study of tinamou and American alligator test subjects found that alligators do not appear to engage in visual perspective-taking as the birds do. Some researchers have proposed increasing the use of crocodilians as test animals in comparative cognition studies.
Interactions with humans
Attacks
Crocodilians are opportunistic predators that are at their most dangerous in the water and on shorelines. Several species are known to attack humans; attacks may occur in defence of territories, nests or young, incidentally while the animal is attacking domestic animals such as dogs, or deliberately for food. Large crocodilians can take prey as big as or bigger than humans. Most of the data on such attacks involve the saltwater crocodile, the Nile crocodile, the mugger crocodile, the American crocodile, the American alligator and the black caiman. Other species that often attack humans are Morelet's crocodile and the spectacled caiman.
It is estimated that over 1,000 attacks by the Nile crocodile occurred between 2010 and 2020, almost 70% of which were fatal. The species is considered the most dangerous large predator in Africa, particularly because it is both widespread and numerous. It can easily sneak up on people or domestic animals at the water's edge; fishers, bathers, waders and people washing clothes are particularly vulnerable. Once grabbed and dragged into the water, a victim is unlikely to escape. Analysis of attacks shows that most take place when crocodiles are guarding nests or newly hatched young.
Saltwater crocodiles were implicated in over 1,300 attacks on humans between 2010 and 2020, almost half of which were fatal. Animals of various sizes may attack humans, but large males are generally responsible for fatalities; large animals require large prey, and humans fall within that size range. Most victims of saltwater crocodile attacks have been in the water, but attacks occasionally occur on land. Saltwater crocodiles sometimes attack boats but do not usually appear to be targeting the occupants. Attacks may also occur when a human encroaches on a crocodile's territory. American alligators were responsible for 127 recorded attacks between 2010 and 2020, only six of which were fatal. Alligators are considered less aggressive than Nile and saltwater crocodiles, but the growing density of the human population in the Everglades has brought people and alligators into closer proximity, increasing the risk of attacks.
Uses
Crocodilians have been hunted for their skin, meat and bones. Their tough skin has been used to make handbags, coats, footwear, wallets and other items. The meat has been compared to that of chicken and may be used as an aphrodisiac. The bones, teeth and pickled heads of crocodilians are used as souvenirs, while other tissues and fluids are ingredients in traditional medicine. Crocodile farms have been established to meet the demand for crocodilian products; species bred on these farms are listed under Appendix II of CITES, which allows regulated trade. A study examining alligator farms in the United States showed they have generated significant conservation gains and poaching of wild alligators has greatly diminished.
Several species of crocodilian are traded as exotic pets. They are appealing when young but crocodilians do not make good pets; they grow large, and are dangerous and expensive to keep. As they grow older, pet crocodilians are often abandoned by their owners, and feral populations of spectacled caimans exist in the United States and Cuba. Most countries have strict regulations for keeping these reptiles.
The blood of alligators and crocodiles contains peptides with antibiotic properties that may contribute to future antibacterial drugs. Cartilage from farm-raised crocodiles is used in research aiming to 3D-print new cartilage for humans by mixing human stem cells with liquefied crocodile cartilage after proteins that may trigger the human immune system have been removed.
Conservation
The IUCN Red List of Threatened Species recognises 26 species of crocodilian and classes 11 of them as threatened:
Critically Endangered: Chinese alligator, Philippine crocodile, Orinoco crocodile, Siamese crocodile, Cuban crocodile, African slender-snouted crocodile and gharial.
Endangered: false gharial.
Vulnerable: American crocodile, mugger crocodile and dwarf crocodile.
The main threat to crocodilians worldwide is human activity, including hunting and habitat destruction. By the early 1970s, more than 2 million wild crocodilian skins had been traded, depleting the majority of crocodilian populations, in some cases almost to extinction. Starting in 1973, CITES attempted to prevent trade in body parts of endangered animals, such as crocodile skins. This proved problematic in the 1980s because in some parts of Africa crocodiles were abundant and dangerous to humans, and hunting them was legal. At the Conference of the Parties in Botswana in 1983, it was argued on behalf of aggrieved local people that the sale of lawfully hunted skins was reasonable. In the late 1970s, crocodile farming began in various countries, starting with eggs taken from the wild. By the 1980s, farmed crocodile skins were being produced in sufficient numbers to greatly diminish the unlawful trade in wild crocodilians. By 2000, skins from twelve crocodilian species, whether lawfully harvested in the wild or farmed, were traded by thirty countries, and the unlawful trade had almost vanished.
The gharial was historically widespread in the major river systems of India but has undergone a chronic decline since 1943. Major threats have included prolific hunting, accidental capture and water blockage from dams. The gharial population continues to be threatened by environmental hazards such as heavy metals and protozoan parasites. Protection of nests against egg predators has been shown to increase population numbers. The Chinese alligator was historically widespread in the eastern Yangtze River system but is currently restricted to parts of south-eastern Anhui because of habitat fragmentation and degradation; the wild population is believed to exist only in small, fragmented ponds. In 1972, the Chinese government declared the species a Class I endangered species, giving it the maximum level of legal protection. Since 1979, captive breeding programmes have been established in China and North America, creating a healthy captive population, and in 2008 alligators bred at the Bronx Zoo were successfully reintroduced to Chongming Island. The Philippine crocodile may be the most threatened crocodilian; hunting and destructive fishing practices had reduced its numbers to around 100 individuals by 2009. In the same year, 50 captive-bred crocodiles were released into the wild to help boost the population. Support from local people is crucial to the species' survival.
The American alligator also underwent serious declines from hunting and habitat loss throughout its range, threatening it with extinction. In 1967, it was listed as an endangered species, but the United States Fish and Wildlife Service and state wildlife agencies in the southern United States stepped in and worked towards its recovery. Protection allowed the species to recuperate, and in 1987 it was removed from the endangered species list. In Australia, the saltwater crocodile was heavily hunted and had been reduced to five percent of its historical numbers in the Northern Territory by 1971. The species has since been legally protected, and its numbers had greatly increased by 2001.
Cultural depictions
In mythology and folklore
Crocodilians have prominent roles in the narratives of various cultures around the world and may have inspired stories of dragons. In Ancient Egyptian religion, both Ammit, the devourer of unworthy souls, and Sobek, the god of power, protection and fertility, are represented as having crocodile heads. This reflects the Ancient Egyptians' view of the crocodile as both a terrifying predator and an important part of the Nile ecosystem. The crocodile was one of several animals the Egyptians mummified. West African peoples also associated crocodiles with water deities, and during the Benin Empire, crocodiles symbolised the power of the oba (king) and linked him to the life-giving rivers. The Leviathan described in the Book of Job may have been based on a crocodile. In Mesoamerica, the Aztecs had a crocodilian god of fertility named Cipactli who protected crops, and in Aztec mythology the earth deity Tlaltecuhtli was sent to bond with a "great caiman". The Maya also worshipped crocodilian gods and believed the world was supported on the back of a swimming crocodile.
The gharial features in the folk tales of India. In one story, a gharial and a monkey become friends when the monkey gives the gharial fruit but the friendship ends after the gharial confesses it tried to lure the monkey into a house to eat it. Native American and African American folk tales often pair an alligator with a trickster rabbit; Br'er Rabbit in the African American stories. An Australian Dreamtime story tells of a crocodile ancestor who had fire all to himself until a rainbow bird stole fire-sticks for man; hence the crocodile lives in water.
In literature and media
Ancient historians have described crocodilians from the earliest written records, though often their descriptions contain as much assumption as observation. The Ancient Greek historian Herodotus (c. 440 BC) described the crocodile in detail, though much of his description is fanciful; he claimed the crocodile would lie with its mouth open to permit a "trochilus" bird, possibly an Egyptian plover, to remove leeches. The crocodile was described in the late-13th century Rochester Bestiary, which is based on classical sources, including Pliny's Historia naturalis (c. 79 AD) and Isidore of Seville's Etymologies. Isidore said the crocodile is named for its saffron colour (Latin croceus, 'saffron') and may be killed by fish with serrated crests sawing into its soft underbelly.
Since the ninth-century text Bibliotheca by Photios I of Constantinople, crocodiles have been reputed to weep for their victims. The story became widely known in 1400 when the English traveller John Mandeville wrote his description of "cockodrills":
In that country [of Prester John] and by all Ind [India] be great plenty of cockodrills, that is a manner of a long serpent, as I have said before. And in the night they dwell in the water, and on the day upon the land, in rocks and in caves. And they eat no meat in all the winter, but they lie as in a dream, as do the serpents. These serpents slay men, and they eat them weeping; and when they eat they move the over jaw, and not the nether jaw, and they have no tongue.
Crocodilians have been recurring characters in stories for children, such as Roald Dahl's The Enormous Crocodile (1978) and Emily Gravett's The Odd Egg (2008). Lewis Carroll's Alice's Adventures in Wonderland (1865) contains the poem How Doth the Little Crocodile. In J. M. Barrie's novel Peter and Wendy (1911), Captain Hook loses his hand to a crocodile. In Rudyard Kipling's Just So Stories (1902), the Elephant's Child acquires his trunk by having his nose pulled very hard by a crocodile.
In film and television, crocodilians are often represented as dangerous water obstacles or as monstrous man-eaters, as in the horror films Eaten Alive (1977), Alligator (1980), Lake Placid (1999), Crocodile (2000), Primeval (2007) and Black Water (2007). In the film Crocodile Dundee (1986), the title character Mick Dundee's nickname comes from the animal that bit off his leg. Some media, such as Steve Irwin's wildlife documentary series The Crocodile Hunter, have attempted to portray crocodiles in a more positive or educational light.
| Biology and health sciences | Reptiles | null |