Circuit breaker

A circuit breaker is an electrical safety device designed to protect an electrical circuit from damage caused by current in excess of that which the equipment can safely carry (overcurrent). Its basic function is to interrupt current flow to protect equipment and to prevent fire. Unlike a fuse, which operates once and then must be replaced, a circuit breaker can be reset (either manually or automatically) to resume normal operation.
Circuit breakers are commonly installed in distribution boards. Apart from its safety purpose, a circuit breaker is also often used as a main switch to manually disconnect ("rack out") and connect ("rack in") electrical power to a whole electrical sub-network.
Circuit breakers are made in varying current ratings, from devices that protect low-current circuits or individual household appliances, to switchgear designed to protect high-voltage circuits feeding an entire city. Any device which protects against excessive current by automatically removing power from a faulty system, such as a circuit breaker or fuse, can be referred to as an over-current protection device (OCPD).
Origins
An early form of circuit breaker was described by Thomas Edison in an 1879 patent application, although his commercial power distribution system used fuses. Its purpose was to protect lighting circuit wiring from accidental short circuits and overloads. A modern miniature circuit breaker similar to the ones now in use was patented by Brown, Boveri & Cie in 1924. Hugo Stotz, an engineer who had sold his company to Brown, Boveri & Cie, was credited as the inventor on German patent 458392. Stotz's invention was the forerunner of the modern thermal-magnetic breaker commonly used in household load centers to this day.
Interconnection of multiple generator sources into an electrical grid required the development of circuit breakers with increasing voltage ratings and increased ability to safely interrupt the increasing short-circuit currents produced by networks. Simple air-break manual switches produced hazardous arcs when interrupting high-voltage circuits; these gave way to oil-enclosed contacts, and various forms using the directed flow of pressurized air, or pressurized oil, to cool and interrupt the arc. By 1935, the specially constructed circuit breakers used at the Boulder Dam project used eight series breaks and pressurized oil flow to interrupt faults of up to 2,500 MVA, in three AC cycles.
Operation
All circuit breaker systems have common features in their operation, but details vary substantially depending on the voltage class, current rating and type of the circuit breaker.
The circuit breaker must first detect a fault condition. In small mains and low-voltage circuit breakers, this is usually done within the device itself. Typically, the heating or magnetic effects of electric current are employed. Circuit breakers for large currents or high voltages are usually arranged with protective relay pilot devices to sense a fault condition and to operate the opening mechanism. These typically require a separate power source, such as a battery, although some high-voltage circuit breakers are self-contained with current transformers, protective relays, and internal power sources.
Once a fault is detected, the circuit breaker contacts must open to interrupt the circuit; this is commonly done using mechanically stored energy contained within the breaker, such as a spring or compressed air to separate the contacts. A breaker may also use the higher current caused by the fault to separate the contacts, via thermal expansion or increased magnetic field. A small circuit breaker typically has a manual control lever to switch the circuit off or reset a tripped breaker, while a larger unit may use a solenoid to trip the mechanism, and an electric motor to restore energy to springs (which rapidly separate contacts when the breaker is tripped).
The circuit breaker contacts must carry the load current without excessive heating, and must also withstand the heat of the arc produced when interrupting (opening) the circuit. Contacts are made of copper or copper alloys, silver alloys and other highly conductive materials. Service life of the contacts is limited by the erosion of contact material due to arcing while interrupting the current. Miniature and molded-case circuit breakers are usually discarded when the contacts have worn, but power circuit breakers and high-voltage circuit breakers have replaceable contacts.
When a high current or voltage is interrupted, an arc is generated. The maximum length of the arc is generally proportional to the voltage while the intensity (or heat) is proportional to the current. This arc must be contained, cooled and extinguished in a controlled way, so that the gap between the contacts can again withstand the voltage in the circuit. Different circuit breakers use vacuum, air, insulating gas, or oil as the medium the arc forms in. Different techniques are used to extinguish the arc including:
Lengthening or deflecting the arc
Intensive cooling (in jet chambers)
Division into partial arcs
Zero-point quenching (contacts open at the moment in the AC waveform at which the current and potential are near zero, effectively breaking no load current at the time of opening. The zero-crossing occurs at twice the line frequency; i.e., 100 times per second for 50 Hz and 120 times per second for 60 Hz AC.)
Connecting capacitors in parallel with contacts in DC circuits.
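The zero-point quenching arithmetic above can be sketched in a few lines; the helper name is illustrative, but the relationship (two zero crossings per AC cycle) is exactly the one stated in the text:

```python
# Zero-crossing rate of an AC waveform: the current passes through zero
# twice per cycle, so a breaker exploiting zero-point quenching gets an
# interruption opportunity at twice the line frequency.
def zero_crossings_per_second(line_frequency_hz: float) -> float:
    return 2 * line_frequency_hz

print(zero_crossings_per_second(50))  # 100 per second for 50 Hz mains
print(zero_crossings_per_second(60))  # 120 per second for 60 Hz mains
```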
Finally, once the fault condition has been cleared, the contacts must again be closed to restore power to the interrupted circuit.
Arc interruption
Low-voltage miniature circuit breakers (MCB) use air alone to extinguish the arc. These circuit breakers contain so-called arc chutes, a stack of mutually insulated parallel metal plates that divide and cool the arc. By splitting the arc into smaller arcs, the arc is cooled down while the arc voltage is increased and serves as an additional impedance that limits the current through the circuit breaker. The current-carrying parts near the contacts provide easy deflection of the arc into the arc chutes by the magnetic force of the current path, although magnetic blowout coils or permanent magnets may also deflect the arc into the arc chute (used on circuit breakers for higher ratings). The number of plates in the arc chute depends on the short-circuit rating and nominal voltage of the circuit breaker.
In larger ratings, oil circuit breakers rely upon vaporization of some of the oil to blast a jet of oil through the arc.
Gas (usually sulfur hexafluoride) circuit breakers sometimes stretch the arc using a magnetic field, and then rely upon the dielectric strength of the sulfur hexafluoride (SF6) to quench the stretched arc.
Vacuum circuit breakers have minimal arcing (as there is nothing to ionize other than the contact material). The arc is quenched when it is stretched a very small amount. Vacuum circuit breakers are frequently used in modern medium-voltage switchgear.
Air circuit breakers may use compressed air to blow out the arc, or alternatively, the contacts are rapidly swung into a small sealed chamber, the escaping of the displaced air thus blowing out the arc.
Circuit breakers are usually able to terminate all current very quickly: typically the arc is extinguished between 30 and 150 ms after the mechanism has been tripped, depending upon age and construction of the device. The maximum current value and let-through energy determine the quality of the circuit breakers.
Short circuit
Circuit breakers are rated both by the normal current that they are expected to carry, and the maximum short-circuit current that they can safely interrupt. This latter figure is the ampere interrupting capacity (AIC) of the breaker.
Under short-circuit conditions, the calculated or measured maximum prospective short-circuit current may be many times the normal, rated current of the circuit. When electrical contacts open to interrupt a large current, there is a tendency for an arc to form between the opened contacts, which would allow the current to continue. This condition can create conductive ionized gases and molten or vaporized metal, which can cause the further continuation of the arc or create additional short circuits, potentially resulting in the explosion of the circuit breaker and the equipment that it is installed in. Therefore, circuit breakers incorporate various features to divide and extinguish the arcs.
The maximum short-circuit current that a breaker can interrupt is determined by testing. Application of a breaker in a circuit with a prospective short-circuit current higher than the breaker's interrupting capacity rating may result in failure of the breaker to safely interrupt a fault. In a worst-case scenario, a breaker may successfully interrupt a fault only to explode when reset.
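The application rule described above can be expressed as a simple check. This is a hedged sketch: the function name and the example figures are illustrative, but the rule itself comes straight from the text, namely that the prospective short-circuit current must not exceed the breaker's tested interrupting capacity:

```python
# Sketch: checking a breaker application against its interrupting
# capacity (AIC). Figures below are illustrative, not standard ratings.
def application_is_safe(prospective_fault_current_a: float,
                        interrupting_capacity_a: float) -> bool:
    """True if the breaker can safely interrupt the worst-case fault."""
    return prospective_fault_current_a <= interrupting_capacity_a

# A breaker with a 10 kA interrupting rating on a circuit with an 8 kA
# prospective fault current is acceptable; a 25 kA fault is not.
print(application_is_safe(8_000, 10_000))   # True
print(application_is_safe(25_000, 10_000))  # False
```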
Typical domestic panel circuit breakers are rated to interrupt a specified maximum short-circuit current.
Miniature circuit breakers used to protect control circuits or small appliances may not have sufficient interrupting capacity to use at a panel board; these circuit breakers are called "supplemental circuit protectors" to distinguish them from distribution-type circuit breakers.
Standard current ratings
Circuit breakers are manufactured with standard ratings, using a system of preferred numbers to create a useful selection of ratings. A miniature circuit breaker has a fixed trip setting; changing the operating current value requires replacing the whole circuit breaker. Circuit breakers with higher ratings can have adjustable trip settings, allowing fewer standardized products to be used, adjusted to the applicable precise ratings when installed. For example, a circuit breaker with a 400 ampere frame size might have its over-current detection threshold set at only 300 amperes where that rating is appropriate.
For low-voltage distribution circuit breakers, an international standard, IEC 60898-1, defines rated current as the maximum current that a breaker is designed to carry continuously. The commonly available preferred values for rated current are 1 A, 2 A, 4 A, 6 A, 10 A, 13 A, 16 A, 20 A, 25 A, 32 A, 40 A, 50 A, 63 A, 80 A, 100 A, and 125 A. The circuit breaker is labeled with the rated current in amperes prefixed by a letter indicating the instantaneous tripping current: the current, expressed in multiples of the rated current, that causes the circuit breaker to trip without intentional time delay (for example, type B devices trip at 3–5 times rated current, type C at 5–10 times, and type D at 10–20 times).
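Selecting from the preferred values above amounts to a simple lookup. This sketch only shows the preferred-number step; real breaker selection also has to consider cable ampacity and coordination with upstream devices:

```python
# IEC 60898-1 preferred rated currents, as listed in the text.
PREFERRED_RATINGS_A = [1, 2, 4, 6, 10, 13, 16, 20, 25, 32, 40,
                       50, 63, 80, 100, 125]

def next_standard_rating(design_current_a: float) -> int:
    """Smallest standard breaker rating >= the circuit design current."""
    for rating in PREFERRED_RATINGS_A:
        if rating >= design_current_a:
            return rating
    raise ValueError("design current exceeds the largest preferred rating")

print(next_standard_rating(28))   # 32
print(next_standard_rating(9.5))  # 10
```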
Circuit breakers are also rated by the maximum fault current that they can interrupt; this allows use of more economical devices on systems unlikely to develop the high short-circuit current found on, for example, a large commercial building distribution system.
In the United States, Underwriters Laboratories (UL) certifies equipment ratings, called Series Ratings (or "integrated equipment ratings") for circuit breaker equipment used for buildings. Power circuit breakers and medium- and high-voltage circuit breakers used for industrial or electric power systems are designed and tested to ANSI or IEEE standards in the C37 series. For example, standard C37.16 lists preferred frame size current ratings for power circuit breakers in the range of 600 to 5000 amperes. Trip current settings and time–current characteristics of these breakers are generally adjustable.
For medium- and high-voltage circuit breakers used in switchgear, substations and generating stations, relatively few standard frame sizes are generally manufactured. These circuit breakers are usually controlled by separate protective relay systems, offering adjustable tripping current and time settings as well as allowing for more complex protection schemes.
Types
Many classifications of circuit breakers can be made, based on their features such as voltage class, construction type, interrupting type, and structural features.
Low-voltage
Low-voltage (less than 1,000 VAC) types are common in domestic, commercial and industrial application, and include:
Miniature circuit breaker (MCB)—rated current up to 125 A. Trip characteristics normally not adjustable. Thermal or thermal-magnetic operation. Breakers illustrated above are in this category.
Molded-case circuit breaker (MCCB)—rated current up to 1,600 A. Thermal or thermal-magnetic operation. Trip current may be adjustable in higher-rated units.
Low-voltage power circuit breakers can be mounted in multiple tiers in low-voltage switchboards or switchgear cabinets.
The characteristics of low-voltage circuit breakers are given by international standards such as IEC 60947 (formerly IEC 947). These circuit breakers are often installed in draw-out enclosures that allow removal and interchange without dismantling the switchgear.
Large low-voltage molded-case and power circuit breakers may have electric motor operators so they can open and close under remote control. These may form part of an automatic transfer switch system for standby power.
Low-voltage circuit breakers are also made for direct-current (DC) applications, such as for subway lines. Direct current requires special breakers because the arc is continuous—unlike an AC arc, which tends to go out on each half cycle, a direct-current circuit breaker has a blow-out coil that generates a magnetic field that rapidly stretches the arc. Small circuit breakers are either installed directly in equipment or arranged in breaker panels.
The DIN-rail-mounted thermal-magnetic miniature circuit breaker is the most common style in modern domestic consumer units and commercial electrical distribution boards throughout Europe. The design includes the following components:
Actuator lever—used to manually trip and reset the circuit breaker. Also indicates the status of the circuit breaker (on or off/tripped). Most breakers are designed so they can still trip even if the lever is held or locked in the "on" position. This is sometimes referred to as "free trip" or "positive trip" operation.
Actuator mechanism—forces the contacts together or apart.
Contacts—allow current to flow when touching and break the current when moved apart.
Terminals
Bimetallic strip—separates the contacts in response to smaller, longer-term overcurrents.
Calibration screw—allows the manufacturer to precisely adjust the trip current of the device after assembly.
Solenoid—separates the contacts rapidly in response to high overcurrents.
Arc divider/extinguisher
Solid-state
Solid-state circuit breakers (SSCBs), also known as digital circuit breakers, are a technological innovation that promises to move circuit breaker technology from the mechanical domain into the electrical one. Promised advantages include much faster operation (breaking circuits in fractions of microseconds), better monitoring of circuit loads, and longer lifetimes. Solid-state circuit breakers have been developed for medium-voltage DC power and can use silicon carbide transistors or integrated gate-commutated thyristors (IGCTs) for switching.
Magnetic
A magnetic circuit breaker uses a solenoid (electromagnet) whose pulling force increases with the current. Certain designs utilize electromagnetic forces in addition to those of the solenoid. The circuit breaker contacts are held closed by a latch. As the current in the solenoid increases beyond the rating of the circuit breaker, the solenoid's pull releases the latch, which lets the contacts open by spring action. They are the most commonly used circuit breakers in the United States.
Thermal–magnetic
A thermal–magnetic circuit breaker, which is the type found in most distribution boards in Europe and countries with a similar wiring arrangement, incorporates both techniques: the electromagnet responds instantaneously to large surges in current (such as short circuits), and the bimetallic strip responds to lesser but longer-term over-current conditions. The thermal portion of the circuit breaker provides a time-response feature that trips the circuit breaker sooner for larger over-currents but allows smaller overloads to persist for a longer time. This allows short current spikes, such as are produced when a motor or other non-resistive load is switched on, to pass without tripping. With a very large over-current, such as may be caused by a short circuit, the magnetic element trips the circuit breaker with no intentional additional delay.
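The two-element behavior can be illustrated with a toy model. This is not a real trip curve: the inverse-time shape (k divided by the squared overload multiple minus one), the 10× magnetic threshold, and the constant are all illustrative assumptions; only the qualitative behavior (instant magnetic trip on large faults, slower thermal trip that speeds up with the overload) comes from the text:

```python
# Toy trip-time model for a thermal-magnetic breaker. All constants
# are illustrative; real devices publish standardized trip curves.
def trip_time_seconds(current_a: float, rated_a: float,
                      magnetic_multiple: float = 10.0,
                      thermal_k: float = 40.0):
    m = current_a / rated_a
    if m <= 1.0:
        return None                  # no overload: never trips
    if m >= magnetic_multiple:
        return 0.0                   # instantaneous magnetic trip
    return thermal_k / (m ** 2 - 1)  # inverse-time thermal trip

print(trip_time_seconds(16, 16))   # None (at rated current)
print(trip_time_seconds(32, 16))   # ~13.3 s (2x overload, thermal)
print(trip_time_seconds(200, 16))  # 0.0 (short circuit, magnetic)
```

Note how a motor-starting surge of a few times rated current is tolerated for seconds, while a short circuit is cleared with no intentional delay.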
Magnetic–hydraulic
A magnetic–hydraulic circuit breaker uses a solenoid coil to provide operating force to open the contacts. A magnetic–hydraulic breaker incorporates a hydraulic time delay feature using a viscous fluid. A spring restrains the core until the current exceeds the breaker rating. During an overload, the speed of the solenoid motion is restricted by the fluid. The delay permits brief current surges beyond normal running current for motor starting, energizing equipment, etc. Short-circuit currents provide sufficient solenoid force to release the latch regardless of core position thus bypassing the delay feature. Ambient temperature affects the time delay but does not affect the current rating of a magnetic breaker.
A large power circuit breaker, such as one applied in circuits of more than 1000 volts, may incorporate hydraulic elements in the contact operating mechanism. Hydraulic energy may be supplied by a pump or stored in accumulators. These form a distinct type from oil-filled circuit breakers where oil is the arc-extinguishing medium.
Common-trip (ganged) breakers
To provide simultaneous breaking on multiple circuits from a fault on any one, circuit breakers may be made as a ganged assembly. This is a very common requirement for three-phase systems, where breaking may be either three- or four-pole (solid or switched neutral). Some makers make ganging kits to allow groups of single-phase breakers to be interlinked as required.
In the US, where split-phase supplies are common, in a branch circuit with more than one live conductor, each live conductor must be protected by a breaker pole. To ensure that all live conductors are interrupted when any pole trips, a common-trip set of breakers must be used. These may either contain two or three tripping mechanisms within one case or, for small breakers, have the breakers externally tied together via their operating handles. Two-pole common-trip breakers are common on 120/240-volt systems where 240 volt loads (including major appliances or further distribution boards) span the two live wires. Three-pole common-trip breakers are typically used to supply three-phase power to powerful motors or further distribution boards.
Separate circuit breakers must never be used for live and neutral, because if the neutral is disconnected while the live conductor stays connected, a very dangerous condition arises: the circuit appears de-energized (appliances don't work), but wires remain live and some residual-current devices (RCDs) may not trip if someone touches the live wire (because some RCDs need power to trip). This is why only common-trip breakers must be used when neutral wire switching is needed.
Shunt-trip units
A shunt-trip unit appears similar to a normal breaker and the moving actuators are ganged to a normal breaker mechanism to operate together in a similar way, but the shunt trip is a solenoid intended to be operated by an external constant-voltage signal, rather than a current, commonly the local mains voltage or DC. These are often used to cut the power when a high-risk event occurs, such as a fire or flood alarm, or another electrical condition, such as over-voltage detection. Shunt trips may be a user-fitted accessory to a standard breaker or supplied as an integral part of a circuit breaker.
Medium-voltage
Medium-voltage circuit breakers rated between 1 kV and 72 kV may be assembled into metal-enclosed switchgear line-ups for indoor use or may be individual components installed outdoors in a substation. Air-break circuit breakers replaced oil-filled units for indoor applications, but are now themselves being replaced by vacuum circuit breakers (up to about 40.5 kV). Like the high-voltage circuit breakers described below, these are also operated by current-sensing protective relays operated through current transformers. The characteristics of MV breakers are given by international standards such as IEC 62271. Medium-voltage circuit breakers nearly always use separate current sensors and protective relays instead of relying on built-in thermal or magnetic overcurrent sensors.
Medium-voltage circuit breakers can be classified by the medium used to extinguish the arc:
Vacuum circuit breakers—with rated current up to 6,300 A, and higher for generator circuit breaker applications (up to 16,000 A and 140 kA). These breakers interrupt the current by creating and extinguishing the arc in a vacuum container, also known as a "bottle". Long-life bellows are designed to travel the 6–10 mm the contacts must part. These are generally applied for voltages up to about 40,500 V, which corresponds roughly to the medium-voltage range of power systems. Vacuum circuit breakers have a longer life expectancy between overhauls than other circuit breakers, and their global-warming potential is far lower than that of SF6 circuit breakers.
Air circuit breakers—rated current up to 6,300 A, and higher for generator circuit breakers. Trip characteristics are often fully adjustable, including configurable trip thresholds and delays. Usually electronically controlled, though some models are microprocessor controlled via an integral electronic trip unit. Often used for main power distribution in large industrial plants, where the breakers are arranged in draw-out enclosures for ease of maintenance.
SF6 circuit breakers extinguish the arc in a chamber filled with sulfur hexafluoride gas.
Medium-voltage circuit breakers may be connected into the circuit by bolted connections to bus bars or wires, especially in outdoor switchyards. Medium-voltage circuit breakers in switchgear line-ups are often built with draw-out construction, allowing breaker removal without disturbing power circuit connections, using a motor-operated or hand-cranked mechanism to separate the breaker from its enclosure.
High-voltage
Electrical power transmission networks are protected and controlled by high-voltage breakers. The definition of high voltage varies, but in power transmission work it is usually taken to be 72.5 kV or higher, according to a definition by the International Electrotechnical Commission (IEC). High-voltage breakers are nearly always solenoid-operated, with current-sensing protective relays operated through current transformers. In substations the protective relay scheme can be complex, protecting equipment and buses from various types of overload or ground/earth fault.
High-voltage breakers are broadly classified by the medium used to extinguish the arc:
Bulk oil
Minimum oil
Air blast
Vacuum
SF6
CO2
Due to environmental and cost concerns over insulating oil spills, most new breakers use SF6 gas to quench the arc.
Circuit breakers can be classified as live tank, where the enclosure that contains the breaking mechanism is at line potential, or dead tank with the enclosure at earth potential. High-voltage AC circuit breakers are routinely available with ratings up to 765 kV. 1,200 kV breakers were launched by Siemens in November 2011, followed by ABB in April the following year.
High-voltage circuit breakers used on transmission systems may be arranged to allow a single pole of a three-phase line to trip, instead of tripping all three poles; for some classes of faults this improves the system stability and availability.
High-voltage direct current circuit breakers are still a field of research as of 2015. Such breakers would be useful to interconnect HVDC transmission systems.
Sulfur hexafluoride (SF6) high-voltage
A sulfur hexafluoride circuit breaker uses contacts surrounded by sulfur hexafluoride gas to quench the arc. These breakers are most often used for transmission-level voltages and may be incorporated into compact gas-insulated switchgear. In cold climates, supplemental heating or different gas mixtures are used for high-voltage circuit breakers, due to liquefaction of the SF6 gas. In some northern power grids, gas mixtures of N2 and SF6, or CF4 and SF6, are installed in puffer-type high-voltage circuit breakers to quench the arc without any liquefaction of the gas. The minimum temperature ratings of these models are as low as −50 °C for some northern substations.
Disconnecting circuit breaker (DCB)
The disconnecting circuit breaker (DCB) was introduced in 2000 and is a high-voltage circuit breaker modeled after the SF6-breaker. It presents a technical solution where the disconnecting function is integrated in the breaking chamber, eliminating the need for separate disconnectors. This increases the availability, since open-air disconnecting switch main contacts need maintenance every 2–6 years, while modern circuit breakers have maintenance intervals of 15 years. Implementing a DCB solution also reduces the space requirements within the substation, and increases the reliability, due to the lack of separate disconnectors.
In order to further reduce the required space of the substation, as well as to simplify its design and engineering, a fiber-optic current sensor (FOCS) can be integrated with the DCB. A 420 kV DCB with integrated FOCS can reduce a substation's footprint by over 50% compared to a conventional solution of live-tank breakers with disconnectors and current transformers, due to reduced material and no additional insulation medium.
Carbon dioxide (CO2) high-voltage
In 2012, ABB presented a 75 kV high-voltage breaker that uses carbon dioxide as the medium to extinguish the arc. The carbon dioxide breaker works on the same principles as an SF6 breaker and can also be produced as a disconnecting circuit breaker. By switching from SF6 to CO2, it is possible to reduce the CO2 emissions by 10 tons during the product's life cycle.
"Smart" circuit breakers
Several firms have looked at adding monitoring for appliances via electronics or using a digital circuit breaker to monitor the breakers remotely. Utility companies in the United States have been reviewing use of the technology to turn appliances on and off, as well as potentially turning off charging of electric cars during periods of high electrical grid load. These devices under research and testing would have wireless capability to monitor the electrical usage in a house via a smartphone app or other means.
Other breakers
The following types are described in separate articles.
Breakers for protections against earth faults too small to trip an over-current device:
Residual-current device (RCD), or residual-current circuit breaker (RCCB) — detects current imbalance, but does not provide over-current protection. In the United States and Canada, these are called ground fault circuit interrupters (GFCI).
Residual-current circuit breaker with overcurrent protection (RCBO) — combines the functions of a RCD and a MCB in one package. In the United States and Canada, these are called GFCI breakers.
Earth leakage circuit breaker (ELCB) — detects current in the earth wire directly rather than detecting imbalance; also called a voltage-operated ELCB (VOELCB) in the UK. They are no longer seen in new installations, as they cannot detect a dangerous condition in which the current returns to earth by another route, such as via a person on the ground or via plumbing.
Arc-fault circuit interrupter (AFCI) or arc-fault detection device (AFDD) — detects electric arcs from the likes of loose wires.
Recloser — A type of circuit breaker that closes automatically after a delay. These are used on overhead electric power distribution systems, to prevent short duration faults from causing sustained outages.
Polyswitch (polyfuse) — A small device commonly described as an automatically resetting fuse rather than a circuit breaker.
DNA polymerase

A DNA polymerase is a member of a family of enzymes that catalyze the synthesis of DNA molecules from nucleoside triphosphates, the molecular precursors of DNA. These enzymes are essential for DNA replication and usually work in groups to create two identical DNA duplexes from a single original DNA duplex. During this process, DNA polymerase "reads" the existing DNA strands to create two new strands that match the existing ones.
These enzymes catalyze the chemical reaction
deoxynucleoside triphosphate + DNAn → pyrophosphate + DNAn+1.
DNA polymerase adds nucleotides to the three prime (3')-end of a DNA strand, one nucleotide at a time. Every time a cell divides, DNA polymerases are required to duplicate the cell's DNA, so that a copy of the original DNA molecule can be passed to each daughter cell. In this way, genetic information is passed down from generation to generation.
Before replication can take place, an enzyme called helicase unwinds the DNA molecule from its tightly woven form, in the process breaking the hydrogen bonds between the nucleotide bases. This opens up or "unzips" the double-stranded DNA to give two single strands of DNA that can be used as templates for replication in the above reaction.
History
In 1956, Arthur Kornberg and colleagues discovered DNA polymerase I (Pol I), in Escherichia coli. They described the DNA replication process by which DNA polymerase copies the base sequence of a template DNA strand. Kornberg was later awarded the Nobel Prize in Physiology or Medicine in 1959 for this work. DNA polymerase II was discovered by Thomas Kornberg (the son of Arthur Kornberg) and Malcolm E. Gefter in 1970 while further elucidating the role of Pol I in E. coli DNA replication. Three more DNA polymerases have been found in E. coli, including DNA polymerase III (discovered in the 1970s) and DNA polymerases IV and V (discovered in 1999). From 1983 on, DNA polymerases have been used in the polymerase chain reaction (PCR), and from 1988 thermostable DNA polymerases were used instead, as they do not need to be added in every cycle of a PCR.
Function
The main function of DNA polymerase is to synthesize DNA from deoxyribonucleotides, the building blocks of DNA. The DNA copies are created by the pairing of nucleotides to bases present on each strand of the original DNA molecule. This pairing always occurs in specific combinations: cytosine pairs with guanine, and thymine pairs with adenine. By contrast, RNA polymerases synthesize RNA from ribonucleotides, using either RNA or DNA as the template.
When synthesizing new DNA, DNA polymerase can add free nucleotides only to the 3' end of the newly forming strand. This results in elongation of the newly forming strand in a 5'–3' direction.
It is important to note that the directionality of the newly forming strand (the daughter strand) is opposite to the direction in which DNA polymerase moves along the template strand. Since DNA polymerase requires a free 3' OH group for initiation of synthesis, it can synthesize in only one direction by extending the 3' end of the preexisting nucleotide chain. Hence, DNA polymerase moves along the template strand in a 3'–5' direction, and the daughter strand is formed in a 5'–3' direction. This difference enables the resultant double-strand DNA formed to be composed of two DNA strands that are antiparallel to each other.
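The templated, antiparallel synthesis described above can be sketched directly. The function name is illustrative; the logic is exactly the text's: the polymerase reads the template 3'→5' and extends the daughter strand's 3' end, so the daughter is written 5'→3' as the Watson–Crick complement of the template:

```python
# Watson-Crick base pairing.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize(template_3to5: str) -> str:
    """Return the daughter strand (5'->3') for a template read 3'->5'."""
    daughter = []
    for base in template_3to5:       # polymerase moves 3'->5' on template
        daughter.append(PAIR[base])  # each addition extends the 3' end
    return "".join(daughter)         # daughter is written 5'->3'

print(synthesize("TACG"))  # ATGC
```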
The function of DNA polymerase is not quite perfect: the enzyme makes about one mistake for every billion base pairs copied. Error correction is a property of some, but not all, DNA polymerases. This process corrects mistakes in newly synthesized DNA. When an incorrect base pair is recognized, DNA polymerase moves backwards by one base pair of DNA. The 3'–5' exonuclease activity of the enzyme allows the incorrect base pair to be excised (this activity is known as proofreading). Following base excision, the polymerase can re-insert the correct base and replication can continue forwards. This preserves the integrity of the original DNA strand that is passed on to the daughter cells.
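The effect of proofreading on fidelity can be illustrated with a toy Monte Carlo sketch. The rates below are illustrative assumptions, not measured values; the point is only the qualitative mechanism from the text, that excising most misincorporated bases multiplies down the final error rate:

```python
import random

# Toy model: each base has a small chance of being misincorporated;
# proofreading then excises a fraction of those errors before synthesis
# continues. All probabilities are illustrative, not biological values.
def replicate(n_bases: int, error_rate: float,
              proofread_catch: float, rng: random.Random) -> int:
    """Return the number of errors surviving in the daughter strand."""
    errors = 0
    for _ in range(n_bases):
        if rng.random() < error_rate:            # wrong base incorporated
            if rng.random() >= proofread_catch:  # proofreading missed it
                errors += 1
    return errors

rng = random.Random(0)
raw = replicate(1_000_000, 1e-4, 0.0, rng)       # no proofreading
checked = replicate(1_000_000, 1e-4, 0.99, rng)  # 99% of errors excised
print(raw, checked)  # proofreading leaves far fewer surviving errors
```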
Fidelity is very important in DNA replication. Mismatches in DNA base pairing can potentially result in dysfunctional proteins and could lead to cancer. Many DNA polymerases contain an exonuclease domain, which detects base pair mismatches and removes the incorrect nucleotide so that it can be replaced by the correct one. The shape of, and the interactions accommodating, the Watson–Crick base pair are what primarily contribute to the detection of errors. Hydrogen bonds play a key role in base pair binding and interaction. The loss of an interaction, which occurs at a mismatch, is said to shift the balance of template–primer binding from the polymerase domain to the exonuclease domain. In addition, incorporation of a wrong nucleotide slows DNA polymerization. This delay gives time for the DNA to be switched from the polymerase site to the exonuclease site. Different conformational changes and losses of interaction occur at different mismatches. In a purine:pyrimidine mismatch, the pyrimidine is displaced towards the major groove and the purine towards the minor groove. Relative to the shape of DNA polymerase's binding pocket, steric clashes occur between the purine and residues in the minor groove, and important van der Waals and electrostatic interactions are lost by the pyrimidine. Pyrimidine:pyrimidine and purine:purine mismatches present less notable changes, since the bases are displaced towards the major groove and less steric hindrance is experienced. Although the different mismatches result in different steric properties, DNA polymerase is still able to detect and differentiate them uniformly and maintain fidelity in DNA replication. DNA polymerization is also critical for many mutagenesis processes and is widely employed in biotechnologies.
Structure
The known DNA polymerases have a highly conserved structure, which means that their overall catalytic subunits vary very little from species to species, independent of their domain structures. Conserved structures usually indicate important, irreplaceable cellular functions, the maintenance of which provides evolutionary advantages. The shape can be described as resembling a right hand with thumb, finger, and palm domains. The palm domain appears to catalyze the transfer of phosphoryl groups in the phosphoryl transfer reaction. DNA is bound to the palm when the enzyme is active. This reaction is believed to be catalyzed by a two-metal-ion mechanism. The finger domain functions to bind the nucleoside triphosphates with the template base. The thumb domain plays a potential role in the processivity, translocation, and positioning of the DNA.
Processivity
DNA polymerase's rapid catalysis is due to its processive nature. Processivity is a characteristic of enzymes that function on polymeric substrates. In the case of DNA polymerase, the degree of processivity refers to the average number of nucleotides added each time the enzyme binds a template. The average DNA polymerase requires about one second to locate and bind a primer/template junction. Once it is bound, a nonprocessive DNA polymerase adds nucleotides at a rate of one nucleotide per second. Processive DNA polymerases, however, add multiple nucleotides per second, drastically increasing the rate of DNA synthesis. The degree of processivity is directly proportional to the rate of DNA synthesis. The rate of DNA synthesis in a living cell was first determined as the rate of phage T4 DNA elongation in phage-infected E. coli: during the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second.
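The 749 nucleotides-per-second figure quoted above allows a back-of-the-envelope estimate of replication time. The genome size and the two-fork assumption below are illustrative, not from the article.

```python
# Back-of-the-envelope use of the elongation rate quoted above
# (749 nt/s for phage T4 DNA in E. coli at 37 deg C).
rate_nt_per_s = 749
genome_bp = 4_600_000   # approximate E. coli chromosome size (assumed)
forks = 2               # bidirectional replication from a single origin (assumed)

seconds = genome_bp / (rate_nt_per_s * forks)
print(f"~{seconds / 60:.0f} minutes to copy the chromosome")  # ~51 minutes
```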
DNA polymerase's ability to slide along the DNA template allows increased processivity. There is a dramatic increase in processivity at the replication fork. This increase is facilitated by the DNA polymerase's association with proteins known as the sliding DNA clamp. The clamps are multiple protein subunits associated in the shape of a ring. Using the hydrolysis of ATP, a class of proteins known as the sliding clamp loading proteins open up the ring structure of the sliding DNA clamps allowing binding to and release from the DNA strand. Protein–protein interaction with the clamp prevents DNA polymerase from diffusing from the DNA template, thereby ensuring that the enzyme binds the same primer/template junction and continues replication. DNA polymerase changes conformation, increasing affinity to the clamp when associated with it and decreasing affinity when it completes the replication of a stretch of DNA to allow release from the clamp.
DNA polymerase processivity has been studied with in vitro single-molecule experiments (namely, optical tweezers and magnetic tweezers), which have revealed synergies between DNA polymerases, other molecules of the replisome (helicases and SSBs), and the DNA replication fork. These results have led to synergetic kinetic models of DNA replication that describe the resulting increase in DNA polymerase processivity.
Variation across species
Based on sequence homology, DNA polymerases can be further subdivided into seven different families: A, B, C, D, X, Y, and RT.
Some viruses also encode special DNA polymerases, such as Hepatitis B virus DNA polymerase. These may selectively replicate viral DNA through a variety of mechanisms. Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp). It polymerizes DNA from a template of RNA.
Prokaryotic polymerase
Prokaryotic polymerases exist in two forms: core polymerase and holoenzyme. Core polymerase synthesizes DNA from the DNA template but it cannot initiate the synthesis alone or accurately. Holoenzyme accurately initiates synthesis.
Pol I
Prokaryotic family A polymerases include the DNA polymerase I (Pol I) enzyme, which is encoded by the polA gene and ubiquitous among prokaryotes. This repair polymerase is involved in excision repair, with both 3'–5' and 5'–3' exonuclease activity, and in processing of Okazaki fragments generated during lagging strand synthesis. Pol I is the most abundant polymerase, accounting for >95% of polymerase activity in E. coli; yet cells lacking Pol I have been found, suggesting that Pol I activity can be replaced by the other four polymerases. Pol I adds ~15–20 nucleotides per second, thus showing poor processivity. Pol I starts adding nucleotides at the RNA primer:template junction known as the origin of replication (ori). Approximately 400 bp downstream from the origin, the Pol III holoenzyme is assembled and takes over replication in a highly processive manner.
Taq polymerase is a heat-stable enzyme of this family that lacks proofreading ability.
Pol II
DNA polymerase II is a family B polymerase encoded by the polB gene. Pol II has 3'–5' exonuclease activity and participates in DNA repair and in replication restart to bypass lesions; its presence in the cell can jump from ~30–50 copies to ~200–300 during SOS induction. Pol II is also thought to be a backup to Pol III, as it can interact with holoenzyme proteins and assume a high level of processivity. The main role of Pol II is thought to be the ability to direct polymerase activity at the replication fork and help stalled Pol III bypass terminal mismatches.
Pfu DNA polymerase is a heat-stable enzyme of this family found in the hyperthermophilic archaeon Pyrococcus furiosus. Detailed classification divides family B in archaea into B1, B2, and B3, in which B2 is a group of pseudoenzymes; Pfu belongs to family B3. Other PolBs found in archaea are part of "Casposons", Cas1-dependent transposons. Some viruses (including the Φ29 bacteriophage) and mitochondrial plasmids carry polB as well.
Pol III
DNA polymerase III holoenzyme is the primary enzyme involved in DNA replication in E. coli and belongs to family C polymerases. It consists of three assemblies: the pol III core, the beta sliding clamp processivity factor, and the clamp-loading complex. The core consists of three subunits: α, the polymerase activity hub, ɛ, exonucleolytic proofreader, and θ, which may act as a stabilizer for ɛ. The beta sliding clamp processivity factor is also present in duplicate, one for each core, to create a clamp that encloses DNA allowing for high processivity. The third assembly is a seven-subunit (τ2γδδχψ) clamp loader complex.
The old textbook "trombone model" depicts an elongation complex with two equivalents of the core enzyme at each replication fork (RF), one for each strand, the lagging and leading. However, recent evidence from single-molecule studies indicates an average of three stoichiometric equivalents of core enzyme at each RF for both Pol III and its counterpart in B. subtilis, PolC. In-cell fluorescent microscopy has revealed that leading strand synthesis may not be completely continuous, and Pol III* (i.e., the holoenzyme α, ε, τ, δ and χ subunits without the β2 sliding clamp) has a high frequency of dissociation from active RFs. In these studies, the replication fork turnover rate was about 10 s for Pol III*, 47 s for the β2 sliding clamp, and 15 min for the DnaB helicase. This suggests that the DnaB helicase may remain stably associated at RFs and serve as a nucleation point for the competent holoenzyme. In vitro single-molecule studies have shown that Pol III* has a high rate of RF turnover when in excess, but remains stably associated with replication forks when concentration is limiting. Another single-molecule study showed that DnaB helicase activity and strand elongation can proceed with decoupled, stochastic kinetics.
Pol IV
In E. coli, DNA polymerase IV (Pol IV) is an error-prone DNA polymerase involved in non-targeted mutagenesis. Pol IV is a Family Y polymerase expressed by the dinB gene that is switched on via SOS induction caused by stalled polymerases at the replication fork. During SOS induction, Pol IV production is increased tenfold, and one of its functions during this time is to interfere with Pol III holoenzyme processivity. This creates a checkpoint, stops replication, and allows time to repair DNA lesions via the appropriate repair pathway. Another function of Pol IV is to perform translesion synthesis at the stalled replication fork, for example bypassing N2-deoxyguanine adducts at a faster rate than traversing undamaged DNA. Cells lacking the dinB gene have a higher rate of mutagenesis caused by DNA damaging agents.
Pol V
DNA polymerase V (Pol V) is a Y-family DNA polymerase that is involved in the SOS response and translesion synthesis DNA repair mechanisms. Transcription of Pol V via the umuDC genes is highly regulated, so that Pol V is produced only when damaged DNA in the cell generates an SOS response. Stalled polymerases cause RecA to bind to ssDNA, which causes the LexA protein to autodigest. LexA then loses its ability to repress the transcription of the umuDC operon. The same RecA-ssDNA nucleoprotein posttranslationally modifies the UmuD protein into the UmuD' protein. UmuD and UmuD' form a heterodimer that interacts with UmuC, which in turn activates UmuC's polymerase catalytic activity on damaged DNA. In E. coli, a polymerase "tool belt" model for switching Pol III with Pol IV at a stalled replication fork, where both polymerases bind simultaneously to the β-clamp, has been proposed. However, the involvement of more than one TLS polymerase working in succession to bypass a lesion has not yet been shown in E. coli. Moreover, Pol IV can catalyze both insertion and extension with high efficiency, whereas Pol V is considered the major SOS TLS polymerase. One example is the bypass of an intra-strand guanine-thymine cross-link, where it was shown, on the basis of differences in the mutational signatures of the two polymerases, that Pol IV and Pol V compete for TLS of the intra-strand crosslink.
Family D
In 1998, family D DNA polymerase was discovered in Pyrococcus furiosus and Methanococcus jannaschii. The PolD complex is a heterodimer of two chains, encoded by DP1 (small, proofreading) and DP2 (large, catalytic). Unlike other DNA polymerases, the structure and mechanism of the DP2 catalytic core resemble those of multi-subunit RNA polymerases. The DP1–DP2 interface resembles that of the eukaryotic Class B polymerase zinc finger and its small subunit. DP1, a Mre11-like exonuclease, is likely the precursor of the small subunit of Pol α and ε, providing proofreading capabilities now lost in eukaryotes. Its N-terminal HSH domain is similar in structure to AAA proteins, especially Pol III subunit δ and RuvB. DP2 has a Class II KH domain. Pyrococcus abyssi PolD is more heat-stable and more accurate than Taq polymerase, but has not yet been commercialized. It has been proposed that family D DNA polymerase was the first to evolve in cellular organisms and that the replicative polymerase of the Last Universal Cellular Ancestor (LUCA) belonged to family D.
Eukaryotic DNA polymerase
Polymerases β, λ, σ, μ (beta, lambda, sigma, mu) and TdT
Family X polymerases contain the well-known eukaryotic polymerase Pol β (beta), as well as other eukaryotic polymerases such as Pol σ (sigma), Pol λ (lambda), Pol μ (mu), and terminal deoxynucleotidyl transferase (TdT). Family X polymerases are found mainly in vertebrates, and a few are found in plants and fungi. These polymerases have highly conserved regions that include two helix-hairpin-helix motifs that are imperative in DNA–polymerase interactions. One motif is located in the 8 kDa domain that interacts with downstream DNA, and one motif is located in the thumb domain that interacts with the primer strand. Pol β, encoded by the POLB gene, is required for short-patch base excision repair, a DNA repair pathway that is essential for repairing alkylated or oxidized bases as well as abasic sites. Pol λ and Pol μ, encoded by the POLL and POLM genes respectively, are involved in non-homologous end-joining, a mechanism for rejoining DNA double-strand breaks caused by hydrogen peroxide and ionizing radiation, respectively. TdT is expressed only in lymphoid tissue, and adds "n nucleotides" to double-strand breaks formed during V(D)J recombination to promote immunological diversity.
Polymerases α, δ and ε (alpha, delta, and epsilon)
Pol α (alpha), Pol δ (delta), and Pol ε (epsilon) are members of Family B polymerases and are the main polymerases involved in nuclear DNA replication. The Pol α complex (pol α-DNA primase complex) consists of four subunits: the catalytic subunit POLA1, the regulatory subunit POLA2, and the small and large primase subunits PRIM1 and PRIM2 respectively. Once primase has created the RNA primer, Pol α starts replication, elongating the primer with ~20 nucleotides. Due to its high processivity, Pol δ takes over the leading and lagging strand synthesis from Pol α. Pol δ is expressed from the genes POLD1, creating the catalytic subunit, and POLD2, POLD3, and POLD4, creating the other subunits that interact with Proliferating Cell Nuclear Antigen (PCNA), a DNA clamp that gives Pol δ its processivity. Pol ε is encoded by the genes POLE1 (the catalytic subunit), POLE2, and POLE3. It has been reported that the function of Pol ε is to extend the leading strand during replication, while Pol δ primarily replicates the lagging strand; however, recent evidence suggests that Pol δ might have a role in replicating the leading strand of DNA as well. Pol ε's C-terminus "polymerase relic" region, despite being unnecessary for polymerase activity, is thought to be essential to cell vitality. The C-terminus region is thought to provide a checkpoint before entering anaphase, provide stability to the holoenzyme, and add proteins to the holoenzyme necessary for initiation of replication. Pol ε has a larger "palm" domain that provides high processivity independently of PCNA.
Compared to other Family B polymerases, the DEDD exonuclease family responsible for proofreading is inactivated in Pol α. Pol ε is unique in that it has two zinc finger domains and an inactive copy of another family B polymerase in its C-terminal. The presence of this zinc finger has implications in the origins of Eukaryota, which in this case is placed into the Asgard group with archaeal B3 polymerase.
Polymerases η, ι and κ (eta, iota, and kappa)
Pol η (eta), Pol ι (iota), and Pol κ (kappa) are Family Y DNA polymerases involved in DNA repair by translesion synthesis, encoded by the genes POLH, POLI, and POLK respectively. Members of Family Y have five common motifs to aid in binding the substrate and primer terminus, and they all include the typical right-hand thumb, palm and finger domains, with added domains such as little finger (LF), polymerase-associated domain (PAD), or wrist. The active site, however, differs between family members due to the different lesions being repaired. Polymerases in Family Y are low-fidelity polymerases, but have been proven to do more good than harm, as mutations that affect these polymerases can cause various diseases, such as skin cancer and Xeroderma Pigmentosum Variant (XPV). The importance of these polymerases is evidenced by the fact that the gene encoding DNA polymerase η is referred to as XPV, because loss of this gene results in the disease Xeroderma Pigmentosum Variant. Pol η is particularly important for allowing accurate translesion synthesis of DNA damage resulting from ultraviolet radiation. The functionality of Pol κ is not completely understood, but researchers have found two probable functions: Pol κ is thought to act as an extender or an inserter of a specific base at certain DNA lesions. All three translesion synthesis polymerases, along with Rev1, are recruited to damaged lesions via stalled replicative DNA polymerases. There are two pathways of damage repair, leading researchers to conclude that the chosen pathway depends on which strand contains the damage, the leading or lagging strand.
Polymerases Rev1 and ζ (zeta)
Pol ζ, another B-family polymerase, is made of two subunits: Rev3, the catalytic subunit, and Rev7 (MAD2L2), which increases the catalytic function of the polymerase; it is involved in translesion synthesis. Pol ζ lacks 3'–5' exonuclease activity and is unique in that it can extend primers with terminal mismatches. Rev1 has three regions of interest: the BRCT domain, the ubiquitin-binding domain, and the C-terminal domain. It has dCMP transferase activity, which adds deoxycytidine opposite lesions that would stall the replicative polymerases Pol δ and Pol ε. These stalled polymerases activate ubiquitin complexes, which in turn dissociate the replication polymerases and recruit Pol ζ and Rev1. Rev1 adds deoxycytidine and Pol ζ extends past the lesion. Through a yet undetermined process, Pol ζ dissociates and the replication polymerases reassociate and continue replication. Pol ζ and Rev1 are not required for replication, but loss of the REV3 gene in budding yeast can cause increased sensitivity to DNA-damaging agents due to collapse of replication forks where replication polymerases have stalled.
Telomerase
Telomerase is a ribonucleoprotein which functions to replicate the ends of linear chromosomes, since normal DNA polymerase cannot replicate the ends, or telomeres. The single-strand 3' overhang of the double-strand chromosome with the sequence 5'-TTAGGG-3' recruits telomerase. Telomerase acts like other DNA polymerases by extending the 3' end, but, unlike other DNA polymerases, telomerase does not require an external template. The TERT subunit, an example of a reverse transcriptase, uses the RNA subunit to form the primer–template junction that allows telomerase to extend the 3' end of chromosome ends. The gradual decrease in size of telomeres, as the result of many replications over a lifetime, is thought to be associated with the effects of aging.
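The repeat-addition behavior described above can be sketched as follows. This is a toy model of the outcome (tandem TTAGGG repeats appended to the 3' overhang), not of telomerase's actual template-translocation mechanism; the function name is invented.

```python
# Toy model of telomerase output: tandem TTAGGG repeats appended to the
# 3' end of the chromosomal overhang (mechanism not modeled).
TELOMERE_REPEAT = "TTAGGG"

def extend_overhang(overhang_5to3: str, repeats: int) -> str:
    """Append `repeats` copies of the telomeric repeat to the 3' end."""
    return overhang_5to3 + TELOMERE_REPEAT * repeats

print(extend_overhang("TTAGGG", 2))  # TTAGGGTTAGGGTTAGGG
```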
Polymerases γ, θ and ν (gamma, theta and nu)
Pol γ (gamma), Pol θ (theta), and Pol ν (nu) are Family A polymerases. Pol γ, encoded by the POLG gene, was long thought to be the only mitochondrial polymerase. However, recent research shows that at least Pol β (beta), a Family X polymerase, is also present in mitochondria. Any mutation that leads to limited or non-functioning Pol γ has a significant effect on mtDNA and is the most common cause of autosomal inherited mitochondrial disorders. Pol γ contains a C-terminus polymerase domain and an N-terminus 3'–5' exonuclease domain that are connected via the linker region, which binds the accessory subunit. The accessory subunit binds DNA and is required for the processivity of Pol γ. The point mutation A467T in the linker region is responsible for more than one-third of all Pol γ-associated mitochondrial disorders. While many homologs of Pol θ, encoded by the POLQ gene, are found in eukaryotes, its function is not clearly understood. The sequence of amino acids in the C-terminus is what classifies Pol θ as a Family A polymerase, although its error rate is more closely related to Family Y polymerases. Pol θ extends mismatched primer termini and can bypass abasic sites by adding a nucleotide. It also has deoxyribophosphodiesterase (dRPase) activity in the polymerase domain and can show ATPase activity in close proximity to ssDNA. Pol ν (nu) is considered to be the least effective of the polymerase enzymes. However, DNA polymerase ν plays an active role in homology repair during cellular responses to crosslinks, fulfilling its role in a complex with helicase.
Plants use two Family A polymerases to copy both the mitochondrial and plastid genomes. They are more similar to bacterial Pol I than they are to mammalian Pol γ.
Reverse transcriptase
Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp) that synthesizes DNA from a template of RNA. The reverse transcriptase family contains both DNA polymerase functionality and RNase H functionality, which degrades RNA base-paired to DNA. An example of a retrovirus is HIV. Reverse transcriptase is commonly employed in the amplification of RNA for research purposes: using an RNA template, reverse transcriptase creates a complementary DNA template, which can then be used for typical PCR amplification. The products of such an experiment are thus amplified PCR products derived from RNA.
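The RNA-to-DNA copying step can be sketched as a simple base-complement mapping, where uracil in the RNA pairs with adenine in the new DNA strand. This is an illustrative model of the base-pairing logic only, not of the enzyme's mechanism; the function name and example are invented.

```python
# Illustrative sketch of reverse transcription's pairing logic:
# DNA bases are paired against an RNA template (U pairs with A).
RNA_TO_CDNA = {"A": "T", "U": "A", "C": "G", "G": "C"}

def reverse_transcribe(rna_3to5: str) -> str:
    """Return the cDNA (5'->3') synthesized against an RNA template read 3'->5'."""
    return "".join(RNA_TO_CDNA[b] for b in rna_3to5)

print(reverse_transcribe("AUGC"))  # TACG
```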
Each HIV retrovirus particle contains two RNA genomes, but, after an infection, each virus generates only one provirus. After infection, reverse transcription is accompanied by template switching between the two genome copies (copy choice recombination). From 5 to 14 recombination events per genome occur at each replication cycle. Template switching (recombination) appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes.
Bacteriophage T4 DNA polymerase
Bacteriophage (phage) T4 encodes a DNA polymerase that catalyzes DNA synthesis in a 5' to 3' direction. The phage polymerase also has an exonuclease activity that acts in a 3' to 5' direction, and this activity is employed in the proofreading and editing of newly inserted bases. A phage mutant with a temperature-sensitive DNA polymerase, when grown at permissive temperatures, was observed to undergo recombination at frequencies about two-fold higher than that of wild-type phage.
It was proposed that a mutational alteration in the phage DNA polymerase can stimulate template strand switching (copy choice recombination) during replication.
Hydrogenation
Hydrogenation is a chemical reaction between molecular hydrogen (H2) and another compound or element, usually in the presence of a catalyst such as nickel, palladium or platinum. The process is commonly employed to reduce or saturate organic compounds. Hydrogenation typically constitutes the addition of pairs of hydrogen atoms to a molecule, often an alkene. Catalysts are required for the reaction to be usable; non-catalytic hydrogenation takes place only at very high temperatures. Hydrogenation reduces double and triple bonds in hydrocarbons.
Process
Hydrogenation has three components, the unsaturated substrate, the hydrogen (or hydrogen source) and, invariably, a catalyst. The reduction reaction is carried out at different temperatures and pressures depending upon the substrate and the activity of the catalyst.
Related or competing reactions
The same catalysts and conditions that are used for hydrogenation reactions can also lead to isomerization of the alkenes from cis to trans. This process is of great interest because hydrogenation technology generates most of the trans fat in foods. A reaction where bonds are broken while hydrogen is added is called hydrogenolysis, a reaction that may occur to carbon-carbon and carbon-heteroatom (oxygen, nitrogen or halogen) bonds. Some hydrogenations of polar bonds are accompanied by hydrogenolysis.
Hydrogen sources
For hydrogenation, the obvious source of hydrogen is H2 gas itself, which is typically available commercially in pressurized cylinders. The hydrogenation process often uses greater than 1 atmosphere of H2, usually conveyed from the cylinders and sometimes augmented by "booster pumps". Gaseous hydrogen is produced industrially from hydrocarbons by the process known as steam reforming. For many applications, hydrogen is transferred from donor molecules such as formic acid, isopropanol, and dihydroanthracene. These hydrogen donors undergo dehydrogenation to, respectively, carbon dioxide, acetone, and anthracene. These processes are called transfer hydrogenations.
Substrates
An important characteristic of alkene and alkyne hydrogenations, both the homogeneously and heterogeneously catalyzed versions, is that hydrogen addition occurs with "syn addition", with hydrogen entering from the least hindered side. This reaction can be performed on a variety of different functional groups.
Catalysts
With rare exceptions, H2 is unreactive toward organic compounds in the absence of metal catalysts. The unsaturated substrate is chemisorbed onto the catalyst, with most sites covered by the substrate. In heterogeneous catalysts, hydrogen forms surface hydrides (M-H) from which hydrogens can be transferred to the chemisorbed substrate. Platinum, palladium, rhodium, and ruthenium form highly active catalysts, which operate at lower temperatures and lower pressures of H2. Non-precious metal catalysts, especially those based on nickel (such as Raney nickel and Urushibara nickel), have also been developed as economical alternatives, but they are often slower or require higher temperatures. The trade-off is activity (speed of reaction) vs. cost of the catalyst and cost of the apparatus required for the use of high pressures; Raney-nickel-catalysed hydrogenations, notably, require high pressures.
Catalysts are usually classified into two broad classes: homogeneous and heterogeneous. Homogeneous catalysts dissolve in the solvent that contains the unsaturated substrate. Heterogeneous catalysts are solids that are suspended in the same solvent with the substrate or are treated with gaseous substrate.
Homogeneous catalysts
Some well-known homogeneous catalysts are indicated below. These are coordination complexes that activate both the unsaturated substrate and the H2. Most typically, these complexes contain platinum group metals, especially Rh and Ir.
Homogeneous catalysts are also used in asymmetric synthesis by the hydrogenation of prochiral substrates. An early demonstration of this approach was the Rh-catalyzed hydrogenation of enamides as precursors to the drug L-DOPA. To achieve asymmetric reduction, these catalysts are made chiral by use of chiral diphosphine ligands. Rhodium-catalyzed hydrogenation has also been used in the production of the herbicide S-metolachlor, which uses a Josiphos-type ligand (called Xyliphos). In principle, asymmetric hydrogenation can be catalyzed by chiral heterogeneous catalysts, but this approach remains more of a curiosity than a useful technology.
Heterogeneous catalysts
Heterogeneous catalysts for hydrogenation are more common industrially. In industry, precious metal hydrogenation catalysts are deposited from solution as a fine powder on the support, which is a cheap, bulky, porous, usually granular material, such as activated carbon, alumina, calcium carbonate or barium sulfate. For example, platinum on carbon is produced by reduction of chloroplatinic acid in situ in carbon. Examples of these catalysts are 5% ruthenium on activated carbon, or 1% platinum on alumina. Base metal catalysts, such as Raney nickel, are typically much cheaper and do not need a support. Also, in the laboratory, unsupported (massive) precious metal catalysts such as platinum black are still used, despite the cost.
As in homogeneous catalysts, the activity is adjusted through changes in the environment around the metal, i.e. the coordination sphere. Different faces of a crystalline heterogeneous catalyst display distinct activities, for example. This can be modified by mixing metals or using different preparation techniques. Similarly, heterogeneous catalysts are affected by their supports.
In many cases, highly empirical modifications involve selective "poisons". Thus, a carefully chosen catalyst can be used to hydrogenate some functional groups without affecting others, such as the hydrogenation of alkenes without touching aromatic rings, or the selective hydrogenation of alkynes to alkenes using Lindlar's catalyst. For example, when the catalyst palladium is placed on barium sulfate and then treated with quinoline, the resulting catalyst reduces alkynes only as far as alkenes. The Lindlar catalyst has been applied to the conversion of phenylacetylene to styrene.
Transfer hydrogenation
Transfer hydrogenation uses hydrogen-donor molecules other than molecular H2. These "sacrificial" hydrogen donors, which can also serve as solvents for the reaction, include hydrazine, formic acid, and alcohols such as isopropanol.
In organic synthesis, transfer hydrogenation is useful for the asymmetric hydrogenation of polar unsaturated substrates, such as ketones, aldehydes and imines, by employing chiral catalysts.
Electrolytic hydrogenation
Polar substrates such as nitriles can be hydrogenated electrochemically, using protic solvents and reducing equivalents as the source of hydrogen.
Thermodynamics and mechanism
The addition of hydrogen to double or triple bonds in hydrocarbons is a type of redox reaction that can be thermodynamically favorable. For example, the addition of hydrogen to ethene has a Gibbs free energy change of −101 kJ·mol⁻¹, making it strongly exergonic; the reaction is also highly exothermic. In the hydrogenation of vegetable oils and fatty acids, for example, the heat released, about 25 kcal per mole (105 kJ/mol), is sufficient to raise the temperature of the oil by 1.6–1.7 °C per iodine number drop.
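The temperature-rise figure above can be checked by a short calculation. A drop of one iodine-number unit corresponds to 1 g of I2 absorbed per 100 g of oil; the oil heat capacity below is an assumed value for process temperatures, not from the article.

```python
# Checking the figure quoted above: ~105 kJ per mole of double bonds
# hydrogenated should raise the oil temperature ~1.6-1.7 deg C per unit
# drop in iodine number.
M_I2 = 253.8     # g/mol, molar mass of iodine (I2)
dH = 105_000     # J per mol of double bonds hydrogenated (from the text)
cp_oil = 2.5     # J/(g*K), assumed heat capacity of hot oil

# Iodine number drop of 1 = 1 g I2 absorbed per 100 g oil.
mol_double_bonds = 1.0 / M_I2          # per 100 g oil
heat = mol_double_bonds * dH           # J released per 100 g oil
delta_T = heat / (100 * cp_oil)
print(f"{delta_T:.2f} deg C per iodine-number unit")  # ~1.65
```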
However, the reaction rate for most hydrogenation reactions is negligible in the absence of catalysts. The mechanism of metal-catalyzed hydrogenation of alkenes and alkynes has been extensively studied. First of all, isotope labeling using deuterium confirms the regiochemistry of the addition:
RCH=CH2 + D2 → RCHDCH2D
Heterogeneous catalysis
On solids, the accepted mechanism is the Horiuti-Polanyi mechanism:
Binding of the unsaturated bond
Dissociation of H2 on the catalyst
Addition of one atom of hydrogen; this step is reversible
Addition of the second atom; effectively irreversible.
In the third step, the alkyl group can revert to alkene, which can detach from the catalyst. Consequently, contact with a hydrogenation catalyst allows cis-trans-isomerization. The trans-alkene can reassociate to the surface and undergo hydrogenation. These details are revealed in part using D2 (deuterium), because recovered alkenes often contain deuterium.
For aromatic substrates, the first hydrogenation is slowest. The product of this step is a cyclohexadiene, which hydrogenates rapidly and is rarely detected. Similarly, the cyclohexene intermediate is ordinarily reduced to cyclohexane.
Homogeneous catalysis
In many homogeneous hydrogenation processes, the metal binds to both components to give an intermediate alkene-metal(H)2 complex. The general sequence of reactions is assumed to be as follows or a related sequence of steps:
binding of the hydrogen to give a dihydride complex via oxidative addition (preceding the oxidative addition of H2 is the formation of a dihydrogen complex):
binding of alkene:
transfer of one hydrogen atom from the metal to carbon (migratory insertion):
transfer of the second hydrogen atom from the metal to the alkyl group with simultaneous dissociation of the alkane ("reductive elimination")
Alkene isomerization often accompanies hydrogenation. This important side reaction proceeds by beta-hydride elimination of the alkyl hydride intermediate:
Often the released olefin is trans.
Inorganic substrates
The hydrogenation of nitrogen to give ammonia is conducted on a vast scale by the Haber–Bosch process, consuming an estimated 1% of the world's energy supply.
N2 + 3 H2 -> 2 NH3 (Fe catalyst, 350–550 °C, hydrogen at 200 atm)
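The Haber–Bosch stoichiometry (N2 + 3 H2 -> 2 NH3) fixes the hydrogen demand of ammonia synthesis; a minimal sketch using standard molar masses and assuming no process losses:

```python
# N2 + 3 H2 -> 2 NH3: hydrogen consumed per unit mass of ammonia.
M_H2 = 2.016    # g/mol
M_NH3 = 17.031  # g/mol

h2_per_nh3 = (3 / 2) * M_H2 / M_NH3  # kg H2 per kg NH3
print(f"{h2_per_nh3 * 1000:.0f} kg of H2 per tonne of NH3")  # ~178 kg
```

The sheer hydrogen demand per tonne of ammonia is consistent with the process consuming a noticeable fraction of global energy.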
Oxygen can be partially hydrogenated to give hydrogen peroxide, although this process has not been commercialized. One difficulty is preventing the catalysts from triggering decomposition of the hydrogen peroxide to form water.
Industrial applications
Catalytic hydrogenation has diverse industrial uses. Most frequently, industrial hydrogenation relies on heterogeneous catalysts.
Food industry
The food industry hydrogenates vegetable oils to convert them into solid or semi-solid fats for use in spreads, candies, baked goods, and products such as margarine. Vegetable oils are rich in polyunsaturated fatty acids (those having more than one carbon-carbon double bond); hydrogenation eliminates some of these double bonds.
Petrochemical industry
In petrochemical processes, hydrogenation is used to convert alkenes and aromatics into saturated alkanes (paraffins) and cycloalkanes (naphthenes), which are less toxic and less reactive. Relevant to liquid fuels that are sometimes stored for long periods in air, saturated hydrocarbons exhibit superior storage properties; alkenes, by contrast, tend to form hydroperoxides, which can form gums that interfere with fuel-handling equipment. For example, mineral turpentine is usually hydrogenated. Hydrocracking of heavy residues into diesel is another application. In isomerization and catalytic reforming processes, some hydrogen pressure is maintained to hydrogenolyze coke formed on the catalyst and prevent its accumulation.
Organic chemistry
Hydrogenation is a useful means for converting unsaturated compounds into saturated derivatives. Substrates include not only alkenes and alkynes, but also aldehydes, imines, and nitriles, which are converted into the corresponding saturated compounds, i.e. alcohols and amines. Thus, alkyl aldehydes, which can be synthesized with the oxo process from carbon monoxide and an alkene, can be converted to alcohols. E.g. 1-propanol is produced from propionaldehyde, produced from ethene and carbon monoxide. Xylitol, a polyol, is produced by hydrogenation of the sugar xylose, an aldehyde. Primary amines can be synthesized by hydrogenation of nitriles, while nitriles are readily synthesized from cyanide and a suitable electrophile. For example, isophorone diamine, a precursor to the polyurethane monomer isophorone diisocyanate, is produced from isophorone nitrile by a tandem nitrile hydrogenation/reductive amination by ammonia, wherein hydrogenation converts both the nitrile into an amine and the imine formed from the aldehyde and ammonia into another amine.
Hydrogenation of coal
History
Heterogeneous catalytic hydrogenation
The earliest hydrogenation was the platinum-catalyzed addition of hydrogen to oxygen in the Döbereiner's lamp, a device commercialized as early as 1823. The French chemist Paul Sabatier is considered the father of the hydrogenation process. In 1897, building on the earlier work of James Boyce, an American chemist working in the manufacture of soap products, he discovered that traces of nickel catalyzed the addition of hydrogen to molecules of gaseous hydrocarbons in what is now known as the Sabatier process. For this work, Sabatier shared the 1912 Nobel Prize in Chemistry. Wilhelm Normann was awarded a patent in Germany in 1902 and in Britain in 1903 for the hydrogenation of liquid oils, which was the beginning of what is now a worldwide industry. The commercially important Haber–Bosch process, first described in 1905, involves hydrogenation of nitrogen. In the Fischer–Tropsch process, reported in 1922, carbon monoxide, which is easily derived from coal, is hydrogenated to liquid fuels.
In 1922, Voorhees and Adams described an apparatus for performing hydrogenation under pressures above one atmosphere. The Parr shaker, the first product to allow hydrogenation using elevated pressures and temperatures, was commercialized in 1926 based on Voorhees and Adams' research and remains in widespread use. In 1924 Murray Raney developed a finely powdered form of nickel, which is widely used to catalyze hydrogenation reactions such as conversion of nitriles to amines or the production of margarine.
Homogeneous catalytic hydrogenation
In the 1930s, Calvin discovered that copper(II) complexes oxidized H2. The 1960s witnessed the development of well-defined homogeneous catalysts using transition metal complexes, e.g., Wilkinson's catalyst (RhCl(PPh3)3). Soon thereafter, cationic Rh and Ir complexes were found to catalyze the hydrogenation of alkenes and carbonyls. In the 1970s, asymmetric hydrogenation was demonstrated in drug synthesis, and the 1990s saw the invention of Noyori asymmetric hydrogenation. The development of homogeneous hydrogenation was influenced by work started in the 1930s and 1940s on the oxo process and Ziegler–Natta polymerization.
Metal-free hydrogenation
For most practical purposes, hydrogenation requires a metal catalyst. Hydrogenation can, however, proceed with some hydrogen donors in the absence of catalysts; illustrative hydrogen donors are diimide and aluminium isopropoxide, the latter exemplified by the Meerwein–Ponndorf–Verley reduction. Some metal-free catalytic systems have been investigated in academic research. One such system for the reduction of ketones consists of tert-butanol and potassium tert-butoxide at very high temperatures; an example is the hydrogenation of benzophenone.
A chemical kinetics study found this reaction to be first-order in all three reactants, suggesting a cyclic six-membered transition state.
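Being first-order in each of the three reactants corresponds to an overall third-order rate law, rate = k[ketone][H2][alkoxide]. The sketch below uses purely illustrative concentrations and rate constant, not measured values:

```python
# Illustrative third-order rate law: rate = k [ketone][H2][KOtBu].
def rate(k, ketone, h2, butoxide):
    return k * ketone * h2 * butoxide

r1 = rate(1.0e-3, 0.10, 0.01, 0.05)
r2 = rate(1.0e-3, 0.20, 0.01, 0.05)  # doubling any one reactant doubles the rate
print(r2 / r1)  # -> 2.0
```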
Another system for metal-free hydrogenation is based on the phosphine-borane, compound 1, which has been called a frustrated Lewis pair. It reversibly accepts dihydrogen at relatively low temperatures to form the phosphonium borate 2 which can reduce simple hindered imines.
The reduction of nitrobenzene to aniline has been reported to be catalysed by fullerene, its mono-anion, atmospheric hydrogen and UV light.
Equipment used for hydrogenation
Today's bench chemist has three main choices of hydrogenation equipment:
Batch hydrogenation under atmospheric conditions
Batch hydrogenation at elevated temperature and/or pressure
Flow hydrogenation
Batch hydrogenation under atmospheric conditions
The original and still a commonly practised form of hydrogenation in teaching laboratories, this process is usually effected by adding solid catalyst to a round-bottom flask of dissolved reactant that has been purged with nitrogen or argon gas, then sealing the mixture with a penetrable rubber seal. Hydrogen gas is then supplied from an H2-filled balloon. The resulting three-phase mixture is agitated to promote mixing. Hydrogen uptake can be monitored to follow the progress of the hydrogenation, either with a graduated tube containing a coloured liquid (usually aqueous copper sulfate) or with gauges on each reaction vessel.
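The uptake read from a balloon or graduated tube can be converted to moles of H2 with the ideal gas law, n = PV/RT; the volume below is an illustrative assumption:

```python
# Sketch: moles of H2 consumed, from the gas volume taken up at ~1 atm.
R = 8.314      # J/(mol*K)
P = 101325.0   # Pa, atmospheric pressure
T = 298.15     # K, room temperature
V = 250e-6     # m^3, i.e. 250 mL of H2 taken up (illustrative)

n_H2 = P * V / (R * T)
print(f"{n_H2 * 1000:.2f} mmol H2")  # ~10 mmol, enough for ~10 mmol of C=C
```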
Batch hydrogenation at elevated temperature and/or pressure
Since many hydrogenation reactions, such as hydrogenolysis of protecting groups and the reduction of aromatic systems, proceed extremely sluggishly at ambient temperature and pressure, pressurised systems are popular. In these cases, catalyst is added to a solution of reactant under an inert atmosphere in a pressure vessel. Hydrogen is added directly from a cylinder or a built-in laboratory hydrogen source, and the pressurized slurry is mechanically rocked to provide agitation, or a spinning basket is used. Recent advances in electrolysis technology have led to the development of high-pressure hydrogen generators, which generate hydrogen up to 1,400 psi (100 bar) from water. Heat may also be used, as the pressure compensates for the associated reduction in gas solubility.
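The pressure/solubility trade-off follows Henry's law, c = P/kH: dissolved hydrogen scales linearly with applied pressure. The constant below is an approximate value for H2 in water, used only for illustration:

```python
# Sketch of Henry's law: raising the H2 pressure offsets the loss of gas
# solubility on heating. kH ~ 1282 L*atm/mol for H2 in water near 25 C
# (approximate, illustrative value).
kH = 1282.0  # L*atm/mol

c_1atm = 1.0 / kH    # mol/L dissolved at 1 atm
c_50atm = 50.0 / kH  # mol/L dissolved at 50 atm
print(f"{c_50atm / c_1atm:.0f}x more dissolved H2 at 50 atm")
```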
Flow hydrogenation
Flow hydrogenation has become a popular technique at the bench and, increasingly, at the process scale. This technique involves continuously flowing a dilute stream of dissolved reactant over a fixed-bed catalyst in the presence of hydrogen. Using established high-performance liquid chromatography technology, this technique allows the application of pressures from atmospheric upwards. Elevated temperatures may also be used. At the bench scale, systems use a range of pre-packed catalysts, which eliminates the need to weigh and filter pyrophoric catalysts.
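For a fixed-bed flow system, a back-of-the-envelope residence time is the bed's void volume divided by the volumetric flow rate; all numbers below are assumptions for illustration:

```python
# Sketch: residence time of reactant solution in a packed catalyst bed,
# tau = void volume / flow rate (illustrative cartridge and pump values).
bed_volume = 2.0     # mL, packed catalyst cartridge (assumed)
void_fraction = 0.4  # interstitial volume fraction (assumed)
flow_rate = 1.0      # mL/min (assumed)

tau = bed_volume * void_fraction / flow_rate  # minutes
print(f"residence time ~ {tau:.1f} min")
```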
Industrial reactors
Catalytic hydrogenation is done in a tubular plug-flow reactor packed with a supported catalyst. The pressures and temperatures are typically high, although this depends on the catalyst. Catalyst loading is typically much lower than in laboratory batch hydrogenation, and various promoters are added to the metal, or mixed metals are used, to improve activity, selectivity and catalyst stability. The use of nickel is common despite its low activity, due to its low cost compared to precious metals.
Gas liquid induction reactors (hydrogenator) are also used for carrying out catalytic hydrogenation.
Sarcopterygii

Sarcopterygii, sometimes considered synonymous with Crossopterygii, is a clade (traditionally a class or subclass) of vertebrate animals which includes a group of bony fish commonly referred to as lobe-finned fish. These vertebrates are characterised by prominent muscular limb buds (lobes) within their fins, which are supported by articulated appendicular skeletons. This is in contrast to the other clade of bony fish, the Actinopterygii, which have only skin-covered bony spines supporting the fins.
The tetrapods, a mostly terrestrial superclass of vertebrates, are now recognized as having evolved from sarcopterygian ancestors and are most closely related to lungfishes. Their paired pectoral and pelvic fins evolved into limbs, and their foregut diverticulum eventually evolved into air-breathing lungs. Cladistically, this would make the tetrapods a subgroup within Sarcopterygii and thus sarcopterygians themselves. As a result, the phrase "lobe-finned fish" normally refers to not the entire clade but only aquatic members that are not tetrapods, i.e. a paraphyletic group.
Non-tetrapod sarcopterygians were once the dominant predators of freshwater ecosystems during the Carboniferous and Permian periods, but suffered significant decline after the Great Dying. The only known extant non-tetrapod sarcopterygians are the two species of coelacanths and six species of lungfishes.
Characteristics
Early lobe-finned fishes are bony fish with fleshy, lobed, paired fins, which are joined to the body by a single bone. The fins of lobe-finned fishes differ from those of all other fish in that each is borne on a fleshy, lobelike, scaly stalk extending from the body that resembles a limb bud. The scales of sarcopterygians are true scaloids, consisting of lamellar bone surrounded by layers of vascular bone, cosmine (similar to dentin), and external keratin. The physical structure of tetrapodomorphs, fish bearing resemblance to tetrapods, provides valuable insights into the evolutionary shift from aquatic to terrestrial existence. Pectoral and pelvic fins have articulations resembling those of tetrapod limbs. The first tetrapod land vertebrates, basal amphibian organisms, possessed legs derived from these fins. Sarcopterygians also possess two dorsal fins with separate bases, as opposed to the single dorsal fin in ray-finned fish. The braincase of sarcopterygians primitively has a hinge line, but this is lost in tetrapods and lungfish. Early sarcopterygians commonly exhibit a symmetrical tail, while all sarcopterygians possess teeth that are coated with genuine enamel.
Most species of lobe-finned fishes are extinct. The largest known lobe-finned fish was Rhizodus hibberti from the Carboniferous period of Scotland, which may have exceeded 7 meters in length. Among the two groups of living species, the coelacanths and the lungfishes, the largest species is the West Indian Ocean coelacanth. The largest lungfish is the marbled lungfish, which can reach 2 m (6.6 ft) in length.
Classification
Taxonomists who adhere to the cladistic approach include Tetrapoda within this classification, encompassing all species of vertebrates with four limbs. The fin-limbs found in lobe-finned fishes like the coelacanths display a strong resemblance to the presumed ancestral form of tetrapod limbs. Lobe-finned fishes seemingly underwent two distinct evolutionary paths, leading to their classification into two subclasses: the Rhipidistia (comprising the Dipnoi, or lungfish, and the Tetrapodomorpha, which includes the Tetrapoda) and the Actinistia (represented by coelacanths).
Taxonomy
The classification below follows Benton (2004), and uses a synthesis of rank-based Linnaean taxonomy and also reflects evolutionary relationships. Benton included the superclass Tetrapoda in the subclass Sarcopterygii in order to reflect the direct descent of tetrapods from lobe-finned fish, despite the former being assigned a higher taxonomic rank.
Evolution
Lobe-finned fishes and their sister group, the ray-finned fishes, make up the superclass Osteichthyes, characterized by the presence of swim bladders (which share ancestry with lungs) as well as the evolution of an ossified endoskeleton, in contrast to the cartilaginous skeletons of acanthodians, chondrichthyans and most placoderms. There are otherwise vast differences in fin, respiratory and circulatory structures between the Sarcopterygii and the Actinopterygii, such as the presence of cosmoid layers in the scales of sarcopterygians. The earliest sarcopterygian fossils were found in the uppermost Silurian, about 418 Ma. They closely resembled the acanthodians (the "spiny fish", a taxon that became extinct at the end of the Paleozoic). In the early–middle Devonian (416–385 Ma), while the predatory placoderms dominated the seas, some sarcopterygians moved into freshwater habitats.
In the Early Devonian (416–397 Ma), the sarcopterygians, or lobe-finned fishes, split into two main lineages: the coelacanths and the rhipidistians. Coelacanths never left the oceans and their heyday was the late Devonian and Carboniferous, from 385 to 299 Ma, as they were more common during those periods than in any other period in the Phanerozoic.
Actinistians, a group within the lobe-finned fish, have been around for almost 380 million years. Over time, researchers have identified 121 species spread across 47 genera. Some species are well-documented in their evolutionary placement, while others are harder to track. The greatest boom in actinistian diversity happened during the Early Triassic, just after the Great Dying.
Coelacanths of the genus Latimeria still live today in the open oceans and retained many primordial features of ancient sarcopterygians, earning them a reputation as living fossils.
The rhipidistians, whose ancestors probably lived in the oceans near river mouths and estuaries, left the marine world and migrated into freshwater habitats. They then split into two major groups: the lungfish and the tetrapodomorphs, both of which evolved their swim bladders into air-breathing lungs. The lungfish evolved the first proto-lungs and proto-limbs by the middle Devonian (397–385 Ma), adapting to life outside a submerged water environment; they radiated into their greatest diversity during the Triassic period, and today fewer than a dozen genera remain. The tetrapodomorphs, on the other hand, evolved into the fully-limbed stegocephalians and later the fully terrestrial tetrapods during the Late Devonian, when the Late Devonian extinction bottlenecked and selected against the more aquatically adapted groups among stem-tetrapods. The surviving tetrapods then underwent adaptive radiation on dry land and became the dominant terrestrial animals during the Carboniferous and Permian periods.
Hypotheses for means of pre-adaptation
There are three major hypotheses as to how lungfish evolved their stubby fins (proto-limbs).
Shrinking waterhole
The first, traditional explanation is the "shrinking waterhole hypothesis", or "desert hypothesis", posited by the American paleontologist Alfred Romer, who believed that limbs and lungs may have evolved from the necessity of having to find new bodies of water as old waterholes dried up.
Inter-tidal adaptation
Niedźwiedzki, Szrek, Narkiewicz, et al. (2010) proposed a second, the "inter-tidal hypothesis": sarcopterygians may have first emerged onto land from intertidal zones rather than inland bodies of water, based on the discovery of the 395-million-year-old Zachełmie tracks, the oldest known fossil evidence of tetrapods.
Woodland swamp adaptation
Retallack (2011) proposed a third hypothesis, dubbed the "woodland hypothesis": limbs may have developed in shallow bodies of water in woodlands, as a means of navigating environments filled with roots and vegetation. He based this conclusion on evidence that transitional tetrapod fossils are consistently found in habitats that were formerly humid, wooded floodplains.
Habitual escape onto land
A fourth, minority hypothesis posits that advancing onto land offered more safety from predators, less competition for prey, and certain environmental advantages not found in water, such as oxygen concentration and temperature control, implying that organisms developing limbs were also adapting to spending some of their time out of water. However, studies have found that sarcopterygians developed tetrapod-like limbs suitable for walking well before venturing onto land. This suggests they adapted to walking on the ground-bed under water before they advanced onto dry land.
History through to the end-Permian extinction
The first tetrapodomorphs, which included the gigantic rhizodonts, had the same general anatomy as the lungfish, who were their closest kin, but they appear not to have left their water habitat until the late Devonian epoch (385–359 Ma), with the appearance of tetrapods (four-legged vertebrates). Tetrapods and megalichthyids are the only tetrapodomorphs which survived after the Devonian, with the latter group disappearing during the Permian.
Non-tetrapod sarcopterygians continued until near the end of the Paleozoic era, suffering heavy losses during the Permian–Triassic extinction event (251 Ma).
Phylogeny
The cladogram presented below is based on studies compiled by Janvier et al. (1997) for the Tree of Life Web Project, Mikko's Phylogeny Archive and Swartz (2012).
Sarcopterygii incertae sedis
†Guiyu oneiros Zhu et al., 2009
†Diabolepis speratus (Chang & Yu, 1984)
†Langdenia campylognatha Janvier & Phuong, 1999
†Ligulalepis Schultze, 1968
†Meemannia eos Zhu, Yu, Wang, Zhao & Jia, 2006
†Psarolepis romeri Yu 1998 sensu Zhu, Yu, Wang, Zhao & Jia, 2006
†Megamastax ambylodus Choo, Zhu, Zhao, Jia, & Zhu, 2014
†Sparalepis tingi Choo, Zhu, Qu, Yu, Jia & Zhaoh, 2017
paraphyletic Osteolepida incertae sedis
†Bogdanovia orientalis Obrucheva 1955 [has been treated as Coelacanthinimorph sarcopterygian]
†Canningius groenlandicus Säve-Söderbergh, 1937
†Chrysolepis
†Geiserolepis
†Latvius
†L. grewingki (Gross, 1933)
†L. porosus Jarvik, 1948
†L. obrutus Vorobyeva, 1977
†Lohsania utahensis Vaughn, 1962
†Megadonichthys kurikae Vorobyeva, 1962
†Platyethmoidia antarctica Young, Long & Ritchie, 1992
†Shirolepis ananjevi Vorobeva, 1977
†Sterropterygion brandei Thomson, 1972
†Thaumatolepis edelsteini Obruchev, 1941
†Thysanolepis micans Vorobyeva, 1977
†Vorobjevaia dolonodon Young, Long & Ritchie, 1992
paraphyletic Elpistostegalia/Panderichthyida incertae sedis
†Parapanderichthys stolbovi (Vorobyeva, 1960) Vorobyeva, 1992
†Howittichthys warrenae Long & Holland, 2008
†Livoniana multidentata Ahlberg, Luksevic & Mark-Kurik, 2000
Stegocephalia incertae sedis
†Antlerpeton clarkii Thomson, Shubin & Poole, 1998
†Austrobrachyops jenseni Colbert & Cosgriff, 1974
†Broilisaurus raniceps (Goldenberg, 1873) Kuhn, 1938
†Densignathus rowei Daeschler, 2000
†Doragnathus woodi Smithson, 1980
†Jakubsonia livnensis Lebedev, 2004
†Limnerpeton dubium Fritsch, 1901 (nomen dubium)
†Limnosceloides Romer, 1952
†L. dunkardensis Romer, 1952 (Type)
†L. brahycoles Langston, 1966
†Occidens portlocki Clack & Ahlberg, 2004
†Ossinodus puerorum emend Warren & Turner, 2004
†Romeriscus periallus Baird & Carroll, 1968
†Sigournea multidentata Bolt & Lombard, 2006
†Sinostega pani Zhu et al., 2002
†Ymeria denticulata Clack et al., 2012
Bichir

Bichirs and the reedfish comprise Polypteridae, a family of archaic ray-finned fishes and the only family in the order Polypteriformes.
All the species occur in freshwater habitats in tropical Africa and the Nile River system, mainly swampy, shallow floodplains and estuaries.
Cladistia, polypterids and their fossil relatives, are considered the sister group to all other extant ray-finned fishes (Actinopteri). They likely diverged from Actinopteri at least 330 million years ago. A closely related group, the Scanilepiformes, are known from the later Permian to the Triassic, and are likely ancestral to polypterids. The oldest polypterids are around 100 million years old, from the early Late Cretaceous of South America and Africa.
Anatomy
Polypterids are elongated fish with a unique series of dorsal finlets which vary in number from seven to 18, instead of a single dorsal fin. Each of the dorsal finlets has bifid (double-edged) tips, and are the only fins with spines; the rest of the fins are composed of soft rays. The body is covered in thick, bonelike, and rhombic (ganoid) scales. Their jaw structure more closely resembles that of the tetrapods than that of the teleost fishes. Bichirs have a number of other primitive characteristics, including fleshy pectoral fins superficially similar to those of lobe-finned fishes. They also have a pair of slit-like spiracles on the top of their heads that are used to breathe air, two gular plates, and paired ventral lungs. Both lungs are unchambered sacs. The larger right lung reaches the whole length of the body cavity, while the smaller left lung extends to the stomach. A slit-like opening called the glottis located on the ventral side of the oesophagus leads to the right lung, and a separate opening on the right lung leads to the left lung. Four pairs of gill arches are present.
Polypterids have a maximum body length that varies widely with species and morphology.
Diet and traits
Polypterids are nocturnal and feed on small vertebrates, crustaceans, and insects. Their common aquarium diet includes bloodworms (Chironomidae larvae). Polypterids are known to have extraordinary olfactory ability. Polypterid reproduction consists of the female laying anywhere from 100 to 300 eggs over the span of a few days, and subsequent fertilization by the male.
Air breathing
Polypterids possess paired lungs which connect to the esophagus via a glottis. They are facultative air-breathers, accessing surface air to breathe when the water they inhabit is poorly oxygenated. Their lungs are highly vascularized to facilitate gas exchange. Deoxygenated arterial blood is brought to the lungs by paired pulmonary arteries, which branch from the fourth efferent branchial arteries (artery from the fourth gill arch), and oxygenated blood leaves the lungs in pulmonary veins. Unlike most lungfish and tetrapods, their lungs are smooth sacs instead of alveolated tissue. Polypterids are unique in that they breathe using recoil aspiration. Polypterids appear to prefer breathing air via their spiracles when undisturbed or in extremely shallow waters where they are unable to incline their body enough to breathe air through their mouth.
Polypterids as aquarium specimens
Polypterids are popular subjects of public and large hobby aquaria. They are sometimes called dragon bichir or dragon fin in pet shops, a more appealing name owing to their dragon-like appearance. Though predatory, they are otherwise peaceful, preferring to lie on the bottom (they tend to swim when many large plants are present), and make good tankmates with other species large enough not to be prey but small enough not to eat them. Some aquarists note that pleco catfish eat the slime coat off of polypterids. Polypterids in captivity have life expectancies of 10–30+ years. They do well in heavily planted tanks, as these mimic their natural habitat.
Classification
In addition to the extinct genus Bawitius, the two living genera, Polypterus and Erpetoichthys, have 14 extant species:
Order Polypteriformes
Suborder Polypterioidei
Clade Salamandrophysida
Family Polypteridae
Genus †Bawitius Grandstaff et al. 2012
†Bawitius bartheli (Schaal 1984) Grandstaff et al. 2012 - Late Cretaceous (Cenomanian) of Egypt
Genus †Serenoichthys Dutheil 1999a
†Serenoichthys kemkemensis Dutheil 1999a
Genus Erpetoichthys J. A. Smith, 1865
Erpetoichthys calabaricus J. A. Smith, 1865 (reedfish)
Genus Polypterus Lacépède, 1803
†Polypterus dageti Gayet & Meunier 1996
†Polypterus faraou Otero et al., 2006 — late Miocene
†Polypterus sudanensis Werner & Gayet 1997
Retropinnis group
Polypterus retropinnis Vaillant, 1899 (West African bichir)
Bichir group
Polypterus ansorgii Boulenger, 1910 (Guinean bichir)
Polypterus bichir Lacépède, 1803 (Nile bichir)
P. b. bichir Lacepède, 1803
P. b. lapradei Steindachner, 1869
P. b. ornatus Arambourg 1948
Polypterus congicus Boulenger, 1898 (Congo bichir)
Polypterus endlicherii Heckel, 1847 (saddled bichir)
Weeksii group
Polypterus mokelembembe Schliewen & Schäfer, 2006 (Mokèlé-mbèmbé bichir)
Polypterus ornatipinnis Boulenger, 1902 (ornate bichir)
Polypterus weeksii Boulenger, 1898 (mottled bichir)
Senegalus group
Polypterus delhezi Boulenger, 1899 (barred bichir)
Polypterus polli J. P. Gosse, 1988
Polypterus palmas Ayres, 1850 (shortfin bichir)
P. p. buettikoferi Steindachner, 1891
P. p. palmas Ayres, 1850
Polypterus senegalus Cuvier, 1829 (gray bichir)
P. s. meridionalis Poll, 1941 (most likely a variant of P. s. senegalus)
P. s. senegalus Cuvier, 1829
Polypterus teugelsi Britz, 2004 (Teugelsi bichir)
Elliptic geometry

Elliptic geometry is an example of a geometry in which Euclid's parallel postulate does not hold. Instead, as in spherical geometry, there are no parallel lines since any two lines must intersect. However, unlike in spherical geometry, two lines are usually assumed to intersect at a single point (rather than two). Because of this, the elliptic geometry described in this article is sometimes referred to as single elliptic geometry whereas spherical geometry is sometimes referred to as double elliptic geometry.
The appearance of this geometry in the nineteenth century stimulated the development of non-Euclidean geometry generally, including hyperbolic geometry.
Elliptic geometry has a variety of properties that differ from those of classical Euclidean plane geometry. For example, the sum of the interior angles of any triangle is always greater than 180°.
Definitions
Elliptic geometry may be derived from spherical geometry by identifying antipodal points of the sphere to a single elliptic point. The elliptic lines correspond to great circles reduced by the identification of antipodal points. As any two great circles intersect, there are no parallel lines in elliptic geometry.
In elliptic geometry, two lines perpendicular to a given line must intersect. In fact, all perpendiculars to a given line intersect at a single point called the absolute pole of that line.
Every point corresponds to an absolute polar line of which it is the absolute pole. Any point on this polar line forms an absolute conjugate pair with the pole. Such a pair of points is orthogonal, and the distance between them is a quadrant.
The distance between a pair of points is proportional to the angle between their absolute polars.
As explained by H. S. M. Coxeter:
The name "elliptic" is possibly misleading. It does not imply any direct connection with the curve called an ellipse, but only a rather far-fetched analogy. A central conic is called an ellipse or a hyperbola according as it has no asymptote or two asymptotes. Analogously, a non-Euclidean plane is said to be elliptic or hyperbolic according as each of its lines contains no point at infinity or two points at infinity.
Two dimensions
Elliptic plane
The elliptic plane is the real projective plane provided with a metric. Kepler and Desargues used the gnomonic projection to relate a plane σ to points on a hemisphere tangent to it. With O the center of the hemisphere, a point P in σ determines a line OP intersecting the hemisphere, and any line L ⊂ σ determines a plane OL which intersects the hemisphere in half of a great circle. The hemisphere is bounded by a plane through O and parallel to σ. No ordinary line of σ corresponds to this plane; instead a line at infinity is appended to σ. As any line in this extension of σ corresponds to a plane through O, and since any pair of such planes intersects in a line through O, one can conclude that any pair of lines in the extension intersect: the point of intersection lies where the plane intersection meets σ or the line at infinity. Thus the axiom of projective geometry, requiring all pairs of lines in a plane to intersect, is confirmed.
Given P and Q in σ, the elliptic distance between them is the measure of the angle POQ, usually taken in radians. Arthur Cayley initiated the study of elliptic geometry when he wrote "On the definition of distance". This venture into abstraction in geometry was followed by Felix Klein and Bernhard Riemann leading to non-Euclidean geometry and Riemannian geometry.
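This definition is easy to compute in the spherical model: represent each elliptic point by a unit vector (with u and -u identified) and take the folded angle between representatives. A sketch:

```python
import math

# Sketch: points of the elliptic plane as antipodal pairs {u, -u} on the
# unit sphere; the distance is the angle POQ, folded so u and -u coincide.
def elliptic_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(min(1.0, abs(dot)))  # |dot| identifies v with -v

print(elliptic_distance((1, 0, 0), (0, 1, 0)))   # pi/2, a quadrant
print(elliptic_distance((1, 0, 0), (-1, 0, 0)))  # 0.0: same elliptic point
```

Note that taking the absolute value of the dot product caps the distance at a quadrant (pi/2), reflecting the identification of antipodal points.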
Comparison with Euclidean geometry
In Euclidean geometry, a figure can be scaled up or scaled down indefinitely, and the resulting figures are similar, i.e., they have the same angles and the same internal proportions. In elliptic geometry, this is not the case. For example, in the spherical model we can see that the distance between any two points must be strictly less than half the circumference of the sphere (because antipodal points are identified). A line segment therefore cannot be scaled up indefinitely.
A great deal of Euclidean geometry carries over directly to elliptic geometry. For example, the first and fourth of Euclid's postulates, that there is a unique line between any two points and that all right angles are equal, hold in elliptic geometry. Postulate 3, that one can construct a circle with any given center and radius, fails if "any radius" is taken to mean "any real number", but holds if it is taken to mean "the length of any given line segment". Therefore any result in Euclidean geometry that follows from these three postulates will hold in elliptic geometry, such as proposition 1 from book I of the Elements, which states that given any line segment, an equilateral triangle can be constructed with the segment as its base.
Elliptic geometry is also like Euclidean geometry in that space is continuous, homogeneous, isotropic, and without boundaries. Isotropy is guaranteed by the fourth postulate, that all right angles are equal. For an example of homogeneity, note that Euclid's proposition I.1 implies that the same equilateral triangle can be constructed at any location, not just in locations that are special in some way. The lack of boundaries follows from the second postulate, extensibility of a line segment.
One way in which elliptic geometry differs from Euclidean geometry is that the sum of the interior angles of a triangle is greater than 180 degrees. In the spherical model, for example, a triangle can be constructed with vertices at the locations where the three positive Cartesian coordinate axes intersect the sphere, and all three of its internal angles are 90 degrees, summing to 270 degrees. For sufficiently small triangles, the excess over 180 degrees can be made arbitrarily small.
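The 270-degree octant triangle can be verified numerically. In this NumPy sketch (the helper name is ours), the interior angle at each vertex is measured as the angle between the tangent directions of the two sides meeting there:

```python
import numpy as np

def vertex_angle(a, b, c):
    """Interior angle at vertex a of the spherical triangle abc:
    the angle between the tangent directions at a of the arcs
    a->b and a->c (vertices are unit vectors)."""
    tb = b - (a @ b) * a            # component of b tangent at a
    tc = c - (a @ c) * a
    cosang = (tb @ tc) / (np.linalg.norm(tb) * np.linalg.norm(tc))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

e1, e2, e3 = np.eye(3)              # the octant triangle's vertices
total = sum(np.degrees(vertex_angle(*t))
            for t in [(e1, e2, e3), (e2, e3, e1), (e3, e1, e2)])
# total is 270 degrees, an excess of 90 over the Euclidean 180
```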
The Pythagorean theorem fails in elliptic geometry. In the 90°–90°–90° triangle described above, all three sides have the same length, and consequently do not satisfy a² + b² = c². The Pythagorean result is recovered in the limit of small triangles.
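Numerically, this can be seen with the spherical law of cosines for a right triangle, cos c = cos a · cos b (a standard fact, not derived above); a small NumPy sketch:

```python
import numpy as np

def hypotenuse(a, b):
    """Hypotenuse of a spherical right triangle with legs a and b,
    from the spherical law of cosines cos c = cos a * cos b."""
    return float(np.arccos(np.cos(a) * np.cos(b)))

c = hypotenuse(np.pi / 2, np.pi / 2)   # the 90-90-90 triangle: c = pi/2
small = hypotenuse(1e-4, 1e-4)         # nearly sqrt(2)*1e-4, the
                                       # Euclidean Pythagorean value
```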
The ratio of a circle's circumference to its area is smaller than in Euclidean geometry. In general, area and volume do not scale as the second and third powers of linear dimensions.
Elliptic space (the 3D case)
Note: This section uses the term "elliptic space" to refer specifically to 3-dimensional elliptic geometry. This is in contrast to the previous section, which was about 2-dimensional elliptic geometry. The quaternions are used to elucidate this space.
Elliptic space can be constructed in a way similar to the construction of three-dimensional vector space: with equivalence classes. One uses directed arcs on great circles of the sphere. As directed line segments are equipollent when they are parallel, of the same length, and similarly oriented, so directed arcs found on great circles are equipollent when they are of the same length, orientation, and great circle. These relations of equipollence produce 3D vector space and elliptic space, respectively.
Access to elliptic space structure is provided through the vector algebra of William Rowan Hamilton: he envisioned a sphere as a domain of square roots of minus one. Then Euler's formula exp(θr) = cos θ + r sin θ (where r is on the sphere) represents the great circle in the plane containing 1 and r. Opposite points r and –r correspond to oppositely directed circles. An arc between θ and φ is equipollent with one between 0 and φ – θ. In elliptic space, arc length is less than π, so arcs may be parametrized with θ in [0, π) or (–π/2, π/2].
For z = exp(θr), with conjugate z∗ = exp(–θr), the product zz∗ = 1; it is said that the modulus or norm of z is one (Hamilton called it the tensor of z). But since r ranges over a sphere in 3-space, exp(θr) ranges over a sphere in 4-space, now called the 3-sphere, as its surface has three dimensions. Hamilton called his algebra quaternions, and it quickly became a useful and celebrated tool of mathematics. Its space of four dimensions is evolved in polar co-ordinates q = t exp(θr), with t in the positive real numbers.
When doing trigonometry on Earth or the celestial sphere, the sides of the triangles are great circle arcs. The first success of quaternions was a rendering of spherical trigonometry to algebra. Hamilton called a quaternion of norm one a versor, and these are the points of elliptic space.
With r fixed, the versors
 exp(ar), 0 ≤ a < π,
form an elliptic line. The distance from exp(ar) to 1 is a. For an arbitrary versor u, the distance from u to 1 will be that θ for which cos θ = (u + u∗)/2, since this is the formula for the scalar part of any quaternion.
An elliptic motion is described by the quaternion mapping
 q ↦ uqv,
where u and v are fixed versors.
Distances between points are the same as between image points of an elliptic motion. In the case that u and v are quaternion conjugates of one another, the motion is a spatial rotation, and their common vector part is the axis of rotation. In the case u = 1 the elliptic motion is called a right Clifford translation, or a parataxy. The case v = 1 corresponds to left Clifford translation.
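The versor distance and the invariance of distance under an elliptic motion q ↦ uqv can be checked numerically. The sketch below is illustrative (the helper names are ours, not a standard API), representing quaternions as 4-vectors (w, x, y, z) and extending the scalar-part formula from the text to a pair of versors:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def conj(q):
    """Quaternion conjugate q* (negate the vector part)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def versor(theta, r):
    """exp(theta r) = cos(theta) + r sin(theta), for r a unit vector."""
    r = np.asarray(r, float) / np.linalg.norm(r)
    return np.concatenate(([np.cos(theta)], np.sin(theta) * r))

def dist(p, q):
    """Elliptic distance between versors: the angle whose cosine
    is the scalar part of p q*."""
    return float(np.arccos(np.clip(qmul(p, conj(q))[0], -1.0, 1.0)))

u = versor(0.4, [1, 0, 0])
v = versor(1.1, [0, 1, 1])
p = versor(0.7, [0, 0, 1])
q = versor(0.2, [1, 1, 0])

d_id = dist(u, np.array([1.0, 0.0, 0.0, 0.0]))   # distance to 1 is 0.4
d_before = dist(p, q)
# the elliptic motion x -> u x v preserves distances between versors
d_after = dist(qmul(qmul(u, p), v), qmul(qmul(u, q), v))
```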
Elliptic lines through versor u may be of the form
 { u exp(ar) : 0 ≤ a < π } or { exp(ar) u : 0 ≤ a < π }
for a fixed r.
They are the right and left Clifford translations of u along an elliptic line through 1.
The elliptic space is formed from the 3-sphere by identifying antipodal points.
Elliptic space has special structures called Clifford parallels and Clifford surfaces.
The versor points of elliptic space are mapped by the Cayley transform to R3 for an alternative representation of the space.
Higher-dimensional spaces
Hyperspherical model
The hyperspherical model is the generalization of the spherical model to higher dimensions. The points of n-dimensional elliptic space are the pairs of opposite unit vectors (x, −x) in Rn+1, that is, pairs of antipodal points on the surface of the unit ball in (n + 1)-dimensional space (the n-dimensional hypersphere). Lines in this model are great circles, i.e., intersections of the hypersphere with flat two-dimensional subspaces passing through the origin.
Projective elliptic geometry
In the projective model of elliptic geometry, the points of n-dimensional real projective space are used as points of the model. This models an abstract elliptic geometry that is also known as projective geometry.
The points of n-dimensional projective space can be identified with lines through the origin in (n + 1)-dimensional space, and can be represented non-uniquely by nonzero vectors in Rn+1, with the understanding that u and λu, for any non-zero scalar λ, represent the same point. Distance is defined using the metric
 d(u, v) = arccos( |u · v| / (‖u‖ ‖v‖) );
that is, the distance between two points is the angle between their corresponding lines in Rn+1. The distance formula is homogeneous in each variable, with d(λu, μv) = d(u, v) if λ and μ are non-zero scalars, so it does define a distance on the points of projective space.
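A minimal NumPy sketch of this metric (the helper name is ours) shows the homogeneity: rescaling either representative vector leaves the distance unchanged:

```python
import numpy as np

def proj_dist(u, v):
    """Distance between projective points represented by nonzero
    vectors u, v: the acute angle between the lines they span."""
    c = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, 0.0, 1.0)))

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
d1 = proj_dist(u, v)                 # pi/4
d2 = proj_dist(-3.0 * u, 2.0 * v)    # rescaled representatives:
                                     # same points, same distance
```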
A notable property of the projective elliptic geometry is that for even dimensions, such as the plane, the geometry is non-orientable. It erases the distinction between clockwise and counterclockwise rotation by identifying them.
Stereographic model
A model representing the same space as the hyperspherical model can be obtained by means of stereographic projection. Let En represent Rn ∪ {∞}, that is, n-dimensional real space extended by a single point at infinity. We may define a metric, the chordal metric, on En by
 δ(u, v) = 2‖u − v‖ / √((1 + ‖u‖²)(1 + ‖v‖²)),
where u and v are any two vectors in Rn and ‖·‖ is the usual Euclidean norm. We also define
 δ(u, ∞) = 2 / √(1 + ‖u‖²).
The result is a metric space on En, which represents the distance along a chord of the corresponding points on the hyperspherical model, to which it maps bijectively by stereographic projection. We obtain a model of spherical geometry if we use the metric
 d(u, v) = 2 arcsin(δ(u, v)/2).
Elliptic geometry is obtained from this by identifying the antipodal points u and −u/‖u‖², and taking the distance from v to this pair to be the minimum of the distances from v to each of these two points.
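The chordal metric can be checked against the hyperspherical model directly: under inverse stereographic projection onto the unit sphere, δ(u, v) is exactly the straight-line chord between the image points. A NumPy sketch (function names ours):

```python
import numpy as np

def chordal(u, v):
    """delta(u, v) = 2|u - v| / sqrt((1 + |u|^2)(1 + |v|^2))."""
    return 2 * np.linalg.norm(u - v) / np.sqrt((1 + u @ u) * (1 + v @ v))

def to_sphere(u):
    """Inverse stereographic projection of u in R^n onto the unit
    n-sphere in R^(n+1)."""
    s = u @ u
    return np.append(2 * u, s - 1) / (s + 1)

u = np.array([0.5, -1.0])
v = np.array([2.0, 0.25])
lhs = chordal(u, v)
rhs = np.linalg.norm(to_sphere(u) - to_sphere(v))   # the chord itself
```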
Self-consistency
Because spherical elliptic geometry can be modeled as, for example, a spherical subspace of a Euclidean space, it follows that if Euclidean geometry is self-consistent, so is spherical elliptic geometry. Therefore it is not possible to prove the parallel postulate based on the other four postulates of Euclidean geometry.
Tarski proved that elementary Euclidean geometry is complete: there is an algorithm which, for every proposition, can show it to be either true or false. (This does not violate Gödel's theorem, because Euclidean geometry cannot describe a sufficient amount of arithmetic for the theorem to apply.) It therefore follows that elementary elliptic geometry is also self-consistent and complete.
Nucleophilic substitution
In chemistry, a nucleophilic substitution (SN) is a class of chemical reactions in which an electron-rich chemical species (known as a nucleophile) replaces a functional group within another electron-deficient molecule (known as the electrophile). The molecule that contains the electrophile and the leaving functional group is called the substrate.<ref>R. A. Rossi, R. H. de Rossi, Aromatic Substitution by the SRN1 Mechanism, ACS Monograph Series No. 178, American Chemical Society, 1983.</ref>
The most general form of the reaction may be given as the following:
 Nuc: + R−LG → R−Nuc + LG:
The electron pair (:) from the nucleophile (Nuc) attacks the substrate (R−LG) and bonds with it. Simultaneously, the leaving group (LG) departs with an electron pair. The principal product in this case is R−Nuc. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged.
An example of nucleophilic substitution is the hydrolysis of an alkyl bromide, R−Br, under basic conditions, where the attacking nucleophile is hydroxide (OH−) and the leaving group is bromide (Br−).
OH- + R-Br -> R-OH + Br-
Nucleophilic substitution reactions are common in organic chemistry. Nucleophiles often attack a saturated aliphatic carbon. Less often, they may attack an aromatic or unsaturated carbon.
Saturated carbon centres
SN1 and SN2 reactions
In 1935, Edward D. Hughes and Sir Christopher Ingold studied nucleophilic substitution reactions of alkyl halides and related compounds. They proposed that there were two main mechanisms at work, both of them competing with each other. The two main mechanisms were the SN1 reaction and the SN2 reaction, where S stands for substitution, N stands for nucleophilic, and the number represents the kinetic order of the reaction.
In the SN2 reaction, the addition of the nucleophile and the elimination of leaving group take place simultaneously (i.e. a concerted reaction). SN2 occurs when the central carbon atom is easily accessible to the nucleophile.
In SN2 reactions, there are a few conditions that affect the rate of the reaction. First of all, the 2 in SN2 implies that there are two concentrations of substances that affect the rate of reaction: substrate (Sub) and nucleophile. The rate equation for this reaction would be Rate = k[Sub][Nuc]. For an SN2 reaction, an aprotic solvent is best, such as acetone, DMF, or DMSO. Aprotic solvents do not add protons (H+ ions) into solution; if protons were present in SN2 reactions, they would react with the nucleophile and severely limit the reaction rate. Since this reaction occurs in one step, steric effects drive the reaction speed. In the transition state, the nucleophile is 180 degrees from the leaving group, and the stereochemistry is inverted as the nucleophile bonds to make the product. Also, because the transition state is partially bonded to both the nucleophile and the leaving group, there is no time for the substrate to rearrange itself: the nucleophile will bond to the same carbon that the leaving group was attached to. A final factor that affects reaction rate is nucleophilicity; the nucleophile must attack an atom other than a hydrogen.
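The second-order rate law above can be sketched numerically; the rate constant here is an assumed, purely illustrative value:

```python
# Second-order SN2 kinetics: Rate = k[Sub][Nuc]. The rate constant is
# an assumed, purely illustrative value.
k = 2.0e-3                      # L/(mol*s), hypothetical

def sn2_rate(sub, nuc):
    """Rate = k [Sub][Nuc] for substrate and nucleophile in mol/L."""
    return k * sub * nuc

r1 = sn2_rate(0.10, 0.10)
r2 = sn2_rate(0.20, 0.10)       # doubling [Sub] doubles the rate
r3 = sn2_rate(0.20, 0.20)       # doubling both quadruples it
```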
By contrast the SN1 reaction involves two steps. SN1 reactions tend to be important when the central carbon atom of the substrate is surrounded by bulky groups, both because such groups interfere sterically with the SN2 reaction (discussed above) and because a highly substituted carbon forms a stable carbocation.
As with SN2 reactions, quite a few factors affect the reaction rate of SN1 reactions. Instead of two concentrations that affect the reaction rate, there is only one: substrate. The rate equation for this would be Rate = k[Sub]. Since the rate of a reaction is determined only by its slowest step, the rate at which the leaving group "leaves" determines the speed of the reaction. This means that the better the leaving group, the faster the reaction rate. A general rule for what makes a good leaving group is: the weaker the conjugate base, the better the leaving group. In this case, halides are going to be the best leaving groups, while groups such as amines, hydrogen, and alkanes are going to be quite poor leaving groups. As SN2 reactions are affected by sterics, SN1 reactions are determined by the bulky groups attached to the carbocation. Since there is an intermediate that actually contains a positive charge, bulky groups attached help stabilize the charge on the carbocation through hyperconjugation and distribution of charge. In this case, a tertiary carbocation will react faster than a secondary, which will react much faster than a primary. It is also due to this carbocation intermediate that the product does not have to have inversion: the nucleophile can attack from the top or the bottom, and therefore create a racemic product. It is important to use a protic solvent, such as water or an alcohol, since an aprotic solvent could attack the intermediate and give an unwanted product. It does not matter if the hydrogens from the protic solvent react with the nucleophile, since the nucleophile is not involved in the rate-determining step.
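Because the nucleophile never enters the rate law, integrating Rate = k[Sub] gives the standard first-order decay [Sub](t) = [Sub]0·e^(−kt) (a textbook consequence, not stated above). A sketch with an assumed rate constant:

```python
import numpy as np

# First-order SN1 kinetics: Rate = k[Sub], with no dependence on the
# nucleophile. k is an assumed, purely illustrative rate constant.
k = 5.0e-4                      # 1/s, hypothetical

def substrate_left(sub0, t):
    """[Sub](t) = [Sub]0 * exp(-k t), the integrated first-order law."""
    return sub0 * np.exp(-k * t)

half_life = np.log(2) / k       # independent of the initial concentration
```

A concentration-independent half-life is one experimental signature that distinguishes first-order (SN1) from second-order (SN2) kinetics.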
Reactions
There are many reactions in organic chemistry involving this type of mechanism. Common examples include:
Organic reductions with hydrides, for example
 R−X → R−H using LiAlH4 (SN2)
Hydrolysis reactions such as
 R−Br + OH− → R−OH + Br− (SN2) or
 R−Br + H2O → R−OH + HBr (SN1)
Williamson ether synthesis
 R−Br + OR′− → R−OR′ + Br− (SN2)
The Wenker synthesis, a ring-closing reaction of aminoalcohols.
The Finkelstein reaction, a halide exchange reaction. Phosphorus nucleophiles appear in the Perkow reaction and the Michaelis–Arbuzov reaction.
The Kolbe nitrile synthesis, the reaction of alkyl halides with cyanides.
Borderline mechanism
An example of a substitution reaction taking place by a so-called borderline mechanism as originally studied by Hughes and Ingold is the reaction of 1-phenylethyl chloride with sodium methoxide in methanol.
The reaction rate is found to be the sum of SN1 and SN2 components, with 61% (3.5 M, 70 °C) taking place by the latter.
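The borderline behaviour can be sketched as the sum of a first-order and a second-order term, rate = k1[Sub] + k2[Sub][Nuc]. The rate constants below are invented for illustration (chosen only so that the SN2 fraction comes out near the 61% quoted above; they are not measured values):

```python
# Borderline kinetics sketched as the sum of an SN1 and an SN2 term:
#   rate = k1*[Sub] + k2*[Sub]*[Nuc]
# k1 and k2 are hypothetical, chosen to land near a 61% SN2 fraction;
# they are NOT measured values for 1-phenylethyl chloride.
k1 = 1.0e-5                     # 1/s, hypothetical
k2 = 4.5e-6                     # L/(mol*s), hypothetical

def fraction_sn2(nuc):
    """Share of the total rate carried by the second-order term."""
    return k2 * nuc / (k1 + k2 * nuc)

f = fraction_sn2(3.5)           # at 3.5 M nucleophile: about 0.61
```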
Other mechanisms
Besides SN1 and SN2, other mechanisms are known, although they are less common. The SNi mechanism is observed in reactions of thionyl chloride with alcohols, and it is similar to SN1 except that the nucleophile is delivered from the same side as the leaving group.
Nucleophilic substitutions can be accompanied by an allylic rearrangement, as seen in reactions such as the Ferrier rearrangement. This type of mechanism is called an SN1′ or SN2′ reaction (depending on the kinetics). With allylic halides or sulphonates, for example, the nucleophile may attack at the γ unsaturated carbon in place of the carbon bearing the leaving group. This may be seen in the reaction of 1-chloro-2-butene with sodium hydroxide to give a mixture of 2-buten-1-ol and 3-buten-2-ol:
CH3CH=CH-CH2-Cl -> CH3CH=CH-CH2-OH + CH3CH(OH)-CH=CH2
The SN1CB mechanism appears in inorganic chemistry. Competing mechanisms exist.
In organometallic chemistry the nucleophilic abstraction reaction occurs with a nucleophilic substitution mechanism.
Unsaturated carbon centres
Nucleophilic substitution via the SN1 or SN2 mechanism does not generally occur with vinyl or aryl halides or related compounds. Under certain conditions nucleophilic substitutions may occur, via other mechanisms such as those described in the nucleophilic aromatic substitution article.
Substitution can also occur at carbonyl groups, such as in acyl chlorides and esters.
Sleep paralysis
Sleep paralysis is a state, during waking up or falling asleep, in which a person is conscious but in a state of full-body paralysis. During an episode, the person may hallucinate (hear, feel, or see things that are not there), which often results in fear. Episodes generally last no more than a few minutes. It can recur multiple times or occur as a single episode.
The condition may occur in those who are otherwise healthy or those with narcolepsy, or it may run in families as a result of specific genetic changes. The condition can be triggered by sleep deprivation, psychological stress, or abnormal sleep cycles. The underlying mechanism is believed to involve a dysfunction in REM sleep. Diagnosis is based on a person's description. Other conditions that can present similarly include narcolepsy, atonic seizure, and hypokalemic periodic paralysis.
Treatment options for sleep paralysis have been poorly studied. It is recommended that people be reassured that the condition is common and generally not serious. Other efforts that may be tried include sleep hygiene, cognitive behavioral therapy, and antidepressants.
Between 8% and 50% of people experience sleep paralysis at some point during their lives. About 5% of people have regular episodes. Males and females are affected equally. Sleep paralysis has been described throughout history. It is believed to have played a role in the creation of stories about alien abduction and other paranormal events.
Symptoms and signs
The main symptom of sleep paralysis is being unable to move or speak during awakening.
Imagined sounds such as humming, hissing, static, zapping, and buzzing noises are reported during sleep paralysis. Other sounds such as voices, whispers, and roars are also experienced. People may also feel pressure on the chest and intense pain in the head during an episode. These symptoms are usually accompanied by intense emotions such as fear and panic. People also have sensations of being dragged out of bed or of flying, numbness, and feelings of electric tingles or vibrations running through their body.
Sleep paralysis may include hallucinations, such as an intruding presence or dark figure in the room. These are commonly known as sleep paralysis demons. It may also include suffocating or the individual feeling a sense of terror, accompanied by a feeling of pressure on one's chest and difficulty breathing.
Pathophysiology
The pathophysiology of sleep paralysis has not been concretely identified, although there are several theories about its cause. The first of these stems from the understanding that sleep paralysis is a parasomnia resulting from dysfunctional overlap of the REM and waking stages of sleep. Polysomnographic studies found that individuals who experience sleep paralysis have shorter REM sleep latencies than normal along with shortened NREM and REM sleep cycles, and fragmentation of REM sleep. This study supports the observation that disturbance of regular sleeping patterns can precipitate an episode of sleep paralysis, because fragmentation of REM sleep commonly occurs when sleep patterns are disrupted and has now been seen in combination with sleep paralysis.
Another major theory is that the neural functions that regulate sleep are out of balance, causing different sleep states to overlap. In this case, cholinergic sleep "on" neural populations are hyperactivated and the serotonergic sleep "off" neural populations are under-activated. As a result, the cells capable of sending the signals that would allow for complete arousal from the sleep state, the serotonergic neural populations, have difficulty in overcoming the signals sent by the cells that keep the brain in the sleep state. During normal REM sleep, the threshold for a stimulus to cause arousal is greatly elevated. Under normal conditions, medial and vestibular nuclei, cortical, thalamic, and cerebellar centers coordinate things such as head and eye movement, and orientation in space.
In individuals reporting sleep paralysis, there is almost no blocking of exogenous stimuli, which means it is much easier for a stimulus to arouse the individual. The vestibular nuclei in particular have been identified as being closely related to dreaming during the REM stage of sleep. According to this hypothesis, vestibular-motor disorientation, unlike hallucinations, arises from completely endogenous sources of stimuli.
If the effects of sleep "on" neural populations cannot be counteracted, characteristics of REM sleep are retained upon awakening. Common consequences of sleep paralysis include headaches, muscle pains or weakness, and paranoia. As the correlation with REM sleep suggests, the paralysis is not complete: use of EOG traces shows that eye movement is still possible during such episodes; however, the individual experiencing sleep paralysis is unable to speak.
Research has found a genetic component in sleep paralysis. The characteristic fragmentation of REM sleep, hypnopompic, and hypnagogic hallucinations have a heritable component in other parasomnias, which lends credence to the idea that sleep paralysis is also genetic. Twin studies have shown that if one twin of a monozygotic pair (identical twins) experiences sleep paralysis that other twin is very likely to experience it as well. The identification of a genetic component means that there is some sort of disruption of a function at the physiological level. Further studies must be conducted to determine whether there is a mistake in the signaling pathway for arousal as suggested by the first theory presented, or whether the regulation of melatonin or the neural populations themselves have been disrupted.
Hallucinations
Several types of hallucinations have been linked to sleep paralysis: the belief that there is an intruder in the room, the feeling of a presence, and the sensation of floating. One common hallucination is the presence of an incubus. A neurological hypothesis is that in sleep paralysis the cerebellum, which usually coordinates body movement and provides information on body position, experiences a brief myoclonic spike in brain activity inducing a floating sensation.
The intruder and incubus hallucinations highly correlate with one another, and moderately correlated with the third hallucination, vestibular-motor disorientation, also known as out-of-body experiences, which differ from the other two in not involving the threat-activated vigilance system.
Threat hyper-vigilance
A hyper-vigilant state created in the midbrain may further contribute to hallucinations. More specifically, the emergency response is activated in the brain when individuals wake up paralyzed and feel vulnerable to attack. This helplessness can intensify the effects of the threat response well above the level typical of normal dreams, which could explain why such visions during sleep paralysis are so vivid. The threat-activated vigilance system is a protective mechanism that differentiates between dangerous situations and determines whether the fear response is appropriate.
The hyper-vigilance response can lead to the creation of endogenous stimuli that contribute to the perceived threat. A similar process may explain hallucinations, with slight variations, in which an evil presence is perceived by the subject to be attempting to suffocate them, either by pressing heavily on the chest or by strangulation. A neurological explanation holds that this results from a combination of the threat vigilance activation system and the muscle paralysis associated with sleep paralysis that removes voluntary control of breathing. Several features of REM breathing patterns exacerbate the feeling of suffocation. These include shallow rapid breathing, hypercapnia, and slight blockage of the airway, which is a symptom prevalent in sleep apnea patients.
According to this account, the subjects attempt to breathe deeply and find themselves unable to do so, creating a sensation of resistance, which the threat-activated vigilance system interprets as an unearthly being sitting on their chest, threatening suffocation. The sensation of entrapment causes a feedback loop when the fear of suffocation increases as a result of continued helplessness, causing the subjects to struggle to end the SP episode.
Diagnosis
Sleep paralysis is mainly diagnosed via clinical interview and ruling out other potential sleep disorders that could account for the feelings of paralysis. Several measures are available to reliably diagnose or screen (Munich Parasomnia Screening) for recurrent isolated sleep paralysis.
Classification
Episodes of sleep paralysis can occur in the context of several medical conditions (e.g., narcolepsy, hypokalemia). When episodes occur independent of these conditions or substance use, it is termed "isolated sleep paralysis" (ISP). When ISP episodes are more frequent and cause clinically significant distress or interference, it is classified as "recurrent isolated sleep paralysis" (RISP). Episodes of sleep paralysis, regardless of classification, are generally short (1–6 minutes), but longer episodes have been documented.
It can be difficult to differentiate between cataplexy brought on by narcolepsy and true sleep paralysis, because the two phenomena are physically indistinguishable. The best way to differentiate between the two is to note when the attacks occur most often. Narcolepsy attacks are more common when the individual is falling asleep; ISP and RISP attacks are more common upon awakening.
Differential diagnosis
Similar conditions include:
Exploding head syndrome (EHS): a potentially frightening parasomnia, but the hallucinations are usually briefer, always loud or jarring, and there is no paralysis during EHS.
Nightmare disorder (ND): also a REM-based parasomnia.
Sleep terrors (STs): a potentially frightening parasomnia, but not REM-based; there is a lack of awareness of surroundings, and screams are characteristic during STs.
Nocturnal panic attacks (NPAs): involve fear and acute distress, but lack paralysis and dream imagery.
Post-traumatic stress disorder (PTSD): often includes frightening imagery and anxiety, but is not limited to sleep-wake transitions.
Prevention
Several circumstances have been identified that are associated with an increased risk of sleep paralysis. These include insomnia, sleep deprivation, an erratic sleep schedule, stress, and physical fatigue. It is also believed that there may be a genetic component in the development of RISP, because there is a high concurrent incidence of sleep paralysis in monozygotic twins. Sleeping in the supine position has been found an especially prominent instigator of sleep paralysis.
Sleeping in the supine position is believed to make the sleeper more vulnerable to episodes of sleep paralysis because in this sleeping position it is possible for the soft palate to collapse and obstruct the airway. This is a possibility regardless of whether the individual has been diagnosed with sleep apnea or not. There may also be a greater rate of microarousals while sleeping in the supine position because there is a greater amount of pressure being exerted on the lungs by gravity.
While many factors can increase the risk for ISP or RISP, they can be avoided with minor lifestyle changes.
Treatment
Medical treatment starts with education about sleep stages and the inability to move muscles during REM sleep. People should be evaluated for narcolepsy if symptoms persist. The safest treatment for sleep paralysis is for people to adopt healthier sleeping habits. However, in more serious cases tricyclic antidepressants or selective serotonin reuptake inhibitors (SSRIs) may be used. Although these treatments are prescribed, there is currently no drug that has been found to completely interrupt episodes of sleep paralysis a majority of the time.
Medications
Though no large trials have taken place which focus on the treatment of sleep paralysis, several drugs have promise in case studies. Two trials of GHB for people with narcolepsy demonstrated reductions in sleep paralysis episodes.
Pimavanserin has been proposed as a possible candidate for future studies in treating sleep paralysis.
Cognitive-behavior therapy
Some of the earliest work in treating sleep paralysis was done using a cognitive-behavior therapy called CA-CBT. The work focuses on psycho-education and modifying catastrophic cognitions about the sleep paralysis attack. This approach has previously been used to treat sleep paralysis in Egypt, although clinical trials are lacking.
The first published psychosocial treatment for recurrent isolated sleep paralysis was cognitive-behavior therapy for isolated sleep paralysis (CBT-ISP). It begins with self-monitoring of symptoms, cognitive restructuring of maladaptive thoughts relevant to ISP (e.g., "the paralysis will be permanent"), and psychoeducation about the nature of sleep paralysis. Prevention techniques include ISP-specific sleep hygiene and the preparatory use of various relaxation techniques (e.g. diaphragmatic breathing, mindfulness, progressive muscle relaxation, meditation). Episode disruption techniques are first practiced in session and then applied during actual attacks. No controlled trial of CBT-ISP has yet been conducted to prove its effectiveness.
Epidemiology
Sleep paralysis is experienced equally in males and females. Lifetime prevalence rates derived from 35 aggregated studies indicate that approximately 8% of the general population, 28% of students, and 32% of psychiatric patients experience at least one episode of sleep paralysis at some point in their lives. Rates of recurrent sleep paralysis are not as well known, but 15–45% of those with a lifetime history of sleep paralysis may meet diagnostic criteria for Recurrent Isolated Sleep Paralysis. In surveys from Canada, China, England, Japan and Nigeria, 20% to 60% of individuals reported having experienced sleep paralysis at least once in their lifetime. In general, non-whites appear to experience sleep paralysis at higher rates than whites, but the magnitude of the difference is rather small. Approximately 36% of the general population that experiences isolated sleep paralysis develop it between 25 and 44 years of age.
Isolated sleep paralysis is commonly seen in patients that have been diagnosed with narcolepsy. Approximately 30–50% of people that have been diagnosed with narcolepsy have experienced sleep paralysis as an auxiliary symptom. A majority of the individuals who have experienced sleep paralysis have sporadic episodes that occur once a month to once a year. Only 3% of individuals experiencing sleep paralysis that is not associated with a neuromuscular disorder have nightly episodes.
Society and culture
Etymology
The original definition of sleep paralysis was codified by Samuel Johnson in his A Dictionary of the English Language as nightmare, a term that evolved into the modern definition. The term sleep paralysis was first used by the British neurologist S.A.K. Wilson in his 1928 dissertation, The Narcolepsies. Such sleep paralysis was widely considered the work of demons, and more specifically incubi, which were thought to sit on the chests of sleepers. In Old English the name for these beings was mare or mære (from a proto-Germanic *marōn, cf. Old Norse mara), whence the mare in the word nightmare. The word might be cognate to Greek Marōn (in the Odyssey) and Sanskrit Māra.
Cultural significance and priming
Although the core features of sleep paralysis (e.g., atonia, a clear sensorium, and frequent hallucinations) appear to be universal, the ways in which they are experienced vary according to time, place, and culture. Over 100 terms have been identified for these experiences. Some scientists have proposed sleep paralysis as an explanation for reports of paranormal and spiritual phenomena such as ghosts, alien visits, demons or demonic possession, alien abduction experiences, the night hag and shadow people haunting.
According to some scientists, culture may be a major factor in shaping sleep paralysis. When sleep paralysis is interpreted through a particular cultural filter, it may take on greater salience. For example, if sleep paralysis is feared in a certain culture, this fear could lead to conditioned fear, and thus worsen the experience, in turn leading to higher rates. Consistent with this idea, high rates and long durations of immobility during sleep paralysis have been found in Egypt, where there are elaborate beliefs about sleep paralysis, involving malevolent spirit-like creatures, the jinn.
Research has found that sleep paralysis is associated with great fear and fear of impending death in 50% of sufferers in Egypt. A study comparing rates and characteristics of sleep paralysis in Egypt and Denmark found that the phenomenon is three times more common in Egypt than Denmark. In Denmark, unlike Egypt, there are no elaborate supernatural beliefs about sleep paralysis, and the experience is often interpreted as an odd physiological event, with overall shorter sleep paralysis episodes and fewer people (17%) fearing that they could die from it.
Folklore
The night hag is a generic name for a folkloric creature found in cultures around the world which is used to explain the phenomenon of sleep paralysis. A common description is that the sleeper feels the presence of a supernatural, malevolent being that immobilizes them as if standing on their chest. The phenomenon goes by many names.
Albania
In Albanian folk belief, Mokthi is a male spirit with a golden fez who appears to women who are tired or suffering and stops them from moving. It is believed that if they can take his golden hat he will grant them a wish, but he will then visit them frequently, although he is harmless. Talismans can provide protection from Mokthi; one method is to put one's husband's hat near the pillow while sleeping. Mokthi or Makthi means "nightmare" in Albanian.
Bengal
In Bengali folklore, sleep paralysis is believed to be caused by a supernatural entity called Boba, who attacks by strangling a person sleeping in a supine position. In Bengal, the phenomenon is called Bobay Dhora.
Cambodia
Sleep paralysis among Cambodians is known as "the ghost pushes you down," and entails the belief in dangerous visitations from deceased relatives.
Egypt
In Egypt, sleep paralysis is conceptualized as a terrifying jinn attack.
Italy
In the different regions of Italy there are many examples of supernatural beings associated with sleep paralysis. In the regions of Marche and Abruzzo, it is referred to as a Pandafeche attack; the Pandafeche usually refers to an evil witch, sometimes a ghostlike spirit or a terrifying catlike creature, that mounts the chest of the victim and tries to harm them. The only way to avoid her is to keep a bag of sand or beans close to the bed, so that the witch will stop to count how many beans or sand-grains it contains. A similar tradition is present in Sardinian folklore, where the Ammuntadore is a creature that mounts on people's chests during their sleep to give them nightmares, and that can change its shape according to the person's fears. In Northern Italy, specifically in the Tyrol area, the Trud is a witch who sits on people's chests at night, making them unable to breathe; to chase her away, people should make the sign of the Cross, something that would require a great struggle in a state of paralysis. A similar folklore is present in the Sannio area, around the city of Benevento, where the witch is called Janara. In Southern Italy, sleep paralysis is usually explained by the presence of a sprite standing on the person's chest: if the person manages to catch the sprite (or steal his hat), then in exchange for his freedom (or to have his hat back) he will reveal the hiding place of a rich treasure. This sprite has different names in different regions: Monaciello in Campania, Monachicchio in Basilicata, Laurieddhu or Scazzamurill in Apulia, and Mazzmuredd in Molise.
Newfoundland
In Newfoundland, sleep paralysis is referred to as the Old Hag, and victims of a hagging are said to be hag-ridden upon awakening. Victims report being completely conscious, but unable to speak or move, and report a person or an animal which sits upon their chest. Despite the name, the attacker can be either male or female. Some suggested cures or preventions for the Old Hag include sleeping with a Bible under the pillow, calling the sleeper's name backwards or in an extreme example, sleeping with a shingle or board embedded with nails strapped to the chest. This object was called a Hag Board. The Old Hag is well-enough known in the province to be a pop culture figure, appearing in films and plays as well as in crafted objects.
Nigeria
Nigeria has myriad interpretations of the cause of sleep paralysis, due to numerous cultures and belief systems that exist there.
United States
Sleep paralysis is sometimes interpreted as space alien abduction in the United States.
Literature
Various forms of magic and spiritual possession were also advanced as causes in literature. In nineteenth century Europe, the vagaries of diet were thought to be responsible. For example, in Charles Dickens's A Christmas Carol, Ebenezer Scrooge attributes the ghost he sees to "... an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato..." In a similar vein, the Household Cyclopedia (1881) offers the following advice about nightmares:
Great attention is to be paid to regularity and choice of diet. Intemperance of every kind is hurtful, but nothing is more productive of this disease than drinking bad wine. Of eatables those which are most prejudicial are all fat and greasy meats and pastry... Moderate exercise contributes in a superior degree to promote the digestion of food and prevent flatulence; those, however, who are necessarily confined to a sedentary occupation, should particularly avoid applying themselves to study or bodily labor immediately after eating... Going to bed before the usual hour is a frequent cause of night-mare, as it either occasions the patient to sleep too long or to lie long awake in the night. Passing a whole night or part of a night without rest likewise gives birth to the disease, as it occasions the patient, on the succeeding night, to sleep too soundly. Indulging in sleep too late in the morning, is an almost certain method to bring on the paroxysm, and the more frequently it returns, the greater strength it acquires; the propensity to sleep at this time is almost irresistible.
J. M. Barrie, the author of the Peter Pan stories, may have had sleep paralysis. He said of himself "In my early boyhood it was a sheet that tried to choke me in the night." He also described several incidents in the Peter Pan stories that indicate that he was familiar with an awareness of a loss of muscle tone whilst in a dream-like state. For example, Maimie is asleep but calls out "What was that....It is coming nearer! It is feeling your bed with its horns-it is boring for [into] you", and when the Darling children were dreaming of flying, Barrie says "Nothing horrid was visible in the air, yet their progress had become slow and laboured, exactly as if they were pushing their way through hostile forces. Sometimes they hung in the air until Peter had beaten on it with his fists." Barrie describes many parasomnias and neurological symptoms in his books and uses them to explore the nature of consciousness from an experiential point of view.
Documentary films
The Nightmare is a 2015 documentary that discusses the causes of sleep paralysis as seen through extensive interviews with participants, and the experiences are re-enacted by professional actors. In synopsis, it proposes that such cultural phenomena as alien abduction, the near-death experience and shadow people can, in many cases, be attributed to sleep paralysis. The "real-life" horror film debuted at the Sundance Film Festival on January 26, 2015, and premiered in theatres on June 5, 2015.
Airborne early warning and control (https://en.wikipedia.org/wiki/Airborne%20early%20warning%20and%20control)

An airborne early warning and control (AEW&C) system is an airborne radar early warning system designed to detect aircraft, ships, vehicles, missiles and other incoming projectiles at long ranges, as well as performing command and control of the battlespace in aerial engagements by informing and directing friendly fighter and attack aircraft. AEW&C units are also used to carry out aerial surveillance over ground and maritime targets, and frequently perform battle management command and control (BMC2). When used at altitude, the radar system on AEW&C aircraft allows the operators to detect, track and prioritize targets and identify friendly aircraft from hostile ones in real-time and from much farther away than ground-based radars. Like ground-based radars, AEW&C systems can be detected and targeted by opposing forces, but due to aircraft mobility and extended sensor range, they are much less vulnerable to counter-attacks than ground systems.
AEW&C aircraft are used for both defensive and offensive air operations, and serve air forces in the same role as what the combat information center is to naval warships, in addition to being a highly mobile and powerful radar platform. So useful and advantageous is it to have such aircraft operating at a high altitude, that some navies also operate AEW&C aircraft for their warships at sea, either coastal- or carrier-based and on both fixed-wing and rotary-wing platforms. In the case of the United States Navy, the Northrop Grumman E-2 Hawkeye AEW&C aircraft is assigned to its supercarriers to protect them and augment their onboard command information centers (CICs). The designation "airborne early warning" (AEW) was used for earlier similar aircraft used in the less-demanding radar picket role, such as the Fairey Gannet AEW.3 and Lockheed EC-121 Warning Star, and continues to be used by the RAF for its Sentry AEW1, while AEW&C (airborne early warning and control) emphasizes the command and control capabilities that may not be present on smaller or simpler radar picket aircraft. AWACS (Airborne Warning and Control System) is the name of the specific system installed in the E-3 and Japanese Boeing E-767 AEW&C airframes, but is often used as a general synonym for AEW&C.
General characteristics
Modern AEW&C systems can detect aircraft at ranges well beyond those of most surface-to-air missiles. A single AEW&C aircraft flying at high altitude can cover a wide area, and three such aircraft in overlapping orbits can cover the whole of Central Europe. AEW&C systems indicate the range of threats and targets, extend the reach of friendly sensors, and make offensive aircraft harder to track by removing the need for them to keep their own radar active, which the enemy can detect. They also communicate with friendly aircraft, vectoring fighters towards hostile aircraft or any unidentified flying object.
History of development
After having developed Chain Home—the first ground-based early-warning radar detection system—in the 1930s, the British developed a radar set that could be carried on an aircraft for what they termed "Air Controlled Interception". The intention was to cover the North West approaches, where German long-range Focke-Wulf Fw 200 Condor aircraft were threatening shipping. A Vickers Wellington bomber (serial R1629) was fitted with a rotating antenna array. It was tested for use against aerial targets and then for possible use against German E-boats. Another radar-equipped Wellington with a different installation was used to direct Bristol Beaufighters toward Heinkel He 111s, which were air-launching V-1 flying bombs.
In February 1944, the US Navy ordered the development of a radar system that could be carried aloft in an aircraft under Project Cadillac. A prototype system was built and flown in August on a modified TBM Avenger torpedo bomber. Tests were successful, with the system able to detect low-flying formations at long range. The US Navy then ordered production of the TBM-3W, the first production AEW aircraft to enter service. TBM-3Ws fitted with the AN/APS-20 radar entered service in March 1945, with 27 eventually constructed. It was also recognised that a larger land-based aircraft would be attractive; thus, under the Cadillac II program, multiple Boeing B-17G Flying Fortress bombers were also outfitted with the same radar.
The Lockheed WV and EC-121 Warning Star, which first flew in 1949, served widely with the US Air Force and US Navy, providing the main AEW coverage for US forces during the Vietnam War. It remained operational until replaced by the E-3 AWACS. Developed roughly in parallel, N-class blimps were also used as AEW aircraft, filling gaps in radar coverage over the continental US; their tremendous endurance of over 200 hours was a major asset in an AEW aircraft. Following a crash, the US Navy opted to discontinue lighter-than-air operations in 1962.
In 1958, the Soviet Tupolev Design Bureau was ordered to design an AEW aircraft. After determining that the projected radar instrumentation would not fit in a Tupolev Tu-95 or a Tupolev Tu-116, the decision was made to use the more capacious Tupolev Tu-114 instead. This solved the problems with cooling and operator space that existed with the narrower Tu-95 and Tu-116 fuselage. To meet range requirements, production examples were fitted with an air-to-air refueling probe. The resulting system, the Tupolev Tu-126, entered service in 1965 with the Soviet Air Forces and remained in service until replaced by the Beriev A-50 in 1984.
During the Cold War, the United Kingdom deployed a substantial AEW capability, initially with American Douglas AD-4W Skyraiders, designated Skyraider AEW.1, which were in turn replaced by the Fairey Gannet AEW.3 using the same AN/APS-20 radar. With the retirement of conventional aircraft carriers, the Gannet was withdrawn and the Royal Air Force (RAF) installed the radars from the Gannets on Avro Shackleton MR.2 airframes, redesignated Shackleton AEW.2. To replace the Shackleton AEW.2, an AEW variant of the Hawker Siddeley Nimrod, known as the Nimrod AEW3, was ordered in 1974. After a protracted and problematic development, this was cancelled in 1986, and seven E-3Ds, designated Sentry AEW.1 in RAF service, were purchased instead.
Current systems
Many countries have developed their own AEW&C systems, although the Boeing E-3 Sentry, E-7A and Northrop Grumman E-2 Hawkeye and Gulfstream/IAI EL/W-2085 are the most common systems worldwide.
Airborne Warning and Control System (AWACS)
Boeing produces a specific system with a "rotodome" rotating radome that incorporates Westinghouse (now Northrop Grumman) radar. It is mounted on either the E-3 Sentry aircraft (Boeing 707) or more recently the Boeing E-767 (Boeing 767), the latter only being used by the Japan Air Self-Defense Force.
When AWACS first entered service it represented a major advance in capability, being the first AEW to use a pulse-Doppler radar, which allowed it to track targets normally lost in ground clutter. Previously, low-flying aircraft could only be readily tracked over water. The AWACS features a three-dimensional radar that measures azimuth, range, and elevation simultaneously; the unit installed upon the E-767 has superior surveillance capability over water compared to the AN/APY-1 system on the earlier E-3 models.
E-2 Hawkeye
The E-2 Hawkeye was a specially designed AEW aircraft. Upon its entry to service in 1965, it was initially plagued by technical issues, causing a (later reversed) cancellation. Procurement resumed after efforts to improve reliability, such as replacement of the original rotary drum computer used for processing radar information by a Litton L-304 digital computer. In addition to purchases by the US Navy, the E-2 Hawkeye has been sold to the armed forces of Egypt, France, Israel, Japan, Singapore and Taiwan.
The latest E-2 version is the E-2D Advanced Hawkeye, which features the new AN/APY-9 radar. The APY-9 has been speculated to be capable of detecting fighter-sized stealth aircraft, which are typically optimized against high frequencies such as the Ka, Ku, X, C and parts of the S bands. Historically, UHF radars had resolution and detection issues that made them ineffective for accurate targeting and fire control; Northrop Grumman and Lockheed claim that the APY-9 has overcome these shortcomings through advanced electronic scanning and high digital computing power via space-time adaptive processing.
Beriev A-50
The Russian Aerospace Forces are currently using approximately 3-5 Beriev A-50 and A-50U "Shmel" in the AEW role. The "Mainstay" is based on the Ilyushin Il-76 airframe, with a large non-rotating disk radome on the rear fuselage. These replaced the 12 Tupolev Tu-126 that filled the role previously. The A-50 and A-50U will eventually be replaced by the Beriev A-100, which features an AESA array in the radome and is based on the updated Il-476.
KJ-2000
In May 1997, Russia and Israel agreed to jointly fulfill an order from China to develop and deliver an early warning system. China reportedly ordered one Phalcon for $250 million, which entailed retrofitting a Russian-made Ilyushin Il-76 cargo plane [also incorrectly reported as a Beriev A-50 Mainstay] with advanced Elta electronic, computer, radar and communications systems. Beijing was expected to acquire several Phalcon AEW systems, and reportedly could buy at least three more [and possibly up to eight], with the prototype planned for testing beginning in 2000. In July 2000, the US pressured Israel to back out of the $1 billion agreement to sell China four Phalcon phased-array radar systems. Following the cancelled A-50I/Phalcon deal, China turned to indigenous solutions. The Phalcon radar and other electronic systems were removed from the unfinished Il-76, and the airframe was handed over to China via Russia in 2002. The resulting Chinese AWACS has a unique phased-array radar (PAR) carried in a round radome. Unlike US AWACS aircraft, which rotate their rotodomes to give 360-degree coverage, the radar antenna of the Chinese AWACS does not rotate; instead, three PAR antenna modules are placed in a triangular configuration inside the round radome to provide 360-degree coverage. Installation of the equipment in the Il-76 airframe began in late 2002 at Xian Aircraft Industry Co., and the KJ-2000 made its first flight in November 2003. All four aircraft were fitted with the system, the last entering service with the Chinese Air Force by the end of 2007. China is also developing a carrier-based AEW&C aircraft, the Xian KJ-600, via the Y-7-derived Xian JZY-01 testbed.
EL/W-2085 AEW&C
The EL/W-2085 is an airborne early warning and control (AEW&C) multi-band radar system developed by Israel Aerospace Industries (IAI) and its subsidiary Elta Systems of Israel. Its primary objective is to provide intelligence to maintain air superiority and conduct surveillance. The system is currently in service with Israel, Italy, and Singapore.
Instead of the rotodome or other mechanically rotating radar found on many AEW&C aircraft, the EL/W-2085 uses an active electronically scanned array (AESA) – an active phased-array radar. This radar consists of an array of transmit/receive (T/R) modules that allow a beam to be electronically steered, making a physically rotating rotodome unnecessary.
AESA radars operate on a pseudorandom set of frequencies and have very short scanning rates, which makes them difficult to detect and jam. Up to 1,000 targets can be tracked simultaneously to a range of 243 mi (450 km), while at the same time multiple air-to-air interceptions or air-to-surface (including maritime) attacks can be guided. The radar equipment of the Israeli AEW&C consists of an L-band radar on each side of the fuselage and an S-band antenna in the nose and tail. The phased array allows aircraft positions on operator screens to be updated every 2–4 seconds, rather than every 10 seconds as on rotodome-equipped AWACS aircraft.
ELTA was the first company to introduce an active electronically scanned array (AESA) airborne early warning aircraft and to implement advanced mission aircraft using efficient, high-performance business jet platforms.
Netra AEW&CS
In 2003, the Indian Air Force (IAF) and the Defence Research and Development Organisation (DRDO) began a study of requirements for developing an airborne early warning and control system. In 2015, DRDO delivered three such aircraft, called Netra, to the IAF, with an advanced Indian AESA radar system fitted on the Brazilian Embraer EMB-145 airframe. Netra gives 240-degree coverage of airspace, and the EMB-145 has air-to-air refuelling capability for longer surveillance time. The IAF also operates three Israeli EL/W-2090 systems, mounted on Ilyushin Il-76 airframes, the first of which arrived on 25 May 2009. The DRDO has proposed a more advanced AWACS with a longer range and 360-degree coverage akin to the Phalcon system, based on the Airbus A330 airframe, but given the costs involved there is also the possibility of converting used A320 airliners.
The IAF has plans to develop six more Netra AEW&C systems based on the Embraer EMB-145 platform and another six based on the Airbus A321 platform. These systems are expected to have enhanced performance, including greater range and azimuth coverage.
Boeing 737 AEW&C
The Royal Australian Air Force, Republic of Korea Air Force and the Turkish Air Force deploy Boeing 737 AEW&C aircraft. The Boeing 737 AEW&C has a fixed, active electronically scanned array radar antenna instead of a mechanically rotating one, and is capable of simultaneous air and sea search, fighter control and area search, with a maximum range of over 600 km (look-up mode). In addition, the radar antenna array also doubles as an ELINT array, with a maximum range of over 850 km at altitude.
Erieye/GlobalEye
The Swedish Air Force uses the S 100D Argus ASC890 as its AEW platform. The S 100D Argus is based on the Saab 340 fitted with an Ericsson Erieye PS-890 radar. Saab also offers the Bombardier Global 6000-based GlobalEye. In early 2006, the Pakistan Air Force ordered six Erieye-equipped Saab 2000s from Sweden. In December 2006, the Pakistan Navy requested three surplus P-3 Orion aircraft to be equipped with Hawkeye 2000 AEW systems. China and Pakistan have also signed a memorandum of understanding (MoU) for the joint development of AEW&C systems.
The Hellenic Air Force, Brazilian Air Force and Mexican Air Force use the Embraer R-99 with an Ericsson Erieye PS-890 radar, as on the S 100D.
Others
Israel has developed the IAI/Elta EL/M-2075 Phalcon system, which uses an AESA (active electronically scanned array) in lieu of a rotodome antenna. The system was the first such system to enter service. The original Phalcon was mounted on a Boeing 707 and developed for the Israeli Defense Force and for export. Israel uses IAI EL/W-2085 airborne early warning and control multi-band radar system on Gulfstream G550; this platform is considered to be both more capable and less expensive to operate than the older Boeing 707-based Phalcon fleet.
Helicopter AEW systems
On 3 June 1957, the first of two HR2S-1Ws, a derivative of the Sikorsky CH-37 Mojave, was delivered to the US Navy; it used the AN/APS-32 radar but proved unreliable due to vibration.
The British Sea King ASaC7 naval helicopter was operated from the Royal Navy's aircraft carriers and, later, its helicopter carrier. The creation of the Sea King ASaC7, and the earlier AEW.2 and AEW.5 models, came as a consequence of lessons learnt by the Royal Navy during the 1982 Falklands War, when the lack of AEW coverage for the task force was a major tactical handicap and rendered it vulnerable to low-level attack. The Sea King was determined to be both more practical and more responsive than the proposed alternative of relying on the RAF's land-based Shackleton AEW.2 fleet. The first examples were a pair of Sea King HAS2s that had the Thorn-EMI ARI 5980/3 Searchwater LAST radar attached to the fuselage on a swivel arm and protected by an inflatable dome. The improved Sea King ASaC7 featured the Searchwater 2000AEW radar, capable of simultaneously tracking up to 400 targets instead of the earlier limit of 250. The Spanish Navy fields the SH-3 Sea King in the same role, operated from its LPH.
The AgustaWestland EH-101A AEW of the Italian Navy is operated from its aircraft carriers. During the 2010s, the Royal Navy opted to replace its Sea Kings with a modular "Crowsnest" system that can be fitted to any of its Merlin HM2 fleet. The Crowsnest system was partially based upon the Sea King ASaC7's equipment; an unsuccessful bid by Lockheed Martin had proposed using a new multi-functional sensor for either the AW101 or another aircraft.
The Russian-built Kamov Ka-31 is deployed by the Indian Navy on its aircraft carriers and on some of its frigates. The Russian Navy has two Ka-31R variants, at least one of which was deployed on its aircraft carrier in 2016. The Ka-31 is fitted with the E-801M Oko (Eye) airborne early warning radar, which can track 20 targets simultaneously and detect aircraft and surface warships at long range.
Androgen (https://en.wikipedia.org/wiki/Androgen)

An androgen (from Greek andr-, the stem of the word meaning "man") is any natural or synthetic steroid hormone that regulates the development and maintenance of male characteristics in vertebrates by binding to androgen receptors. This includes the embryological development of the primary male sex organs, and the development of male secondary sex characteristics at puberty. Androgens are synthesized in the testes, the ovaries, and the adrenal glands.
Androgens increase in both males and females during puberty. The major androgen in males is testosterone. Dihydrotestosterone (DHT) and androstenedione are of equal importance in male development. DHT in utero causes differentiation of the penis, scrotum and prostate. In adulthood, DHT contributes to balding, prostate growth, and sebaceous gland activity.
Although androgens are commonly thought of only as male sex hormones, females also have them, but at lower levels: they function in libido and sexual arousal. Androgens are the precursors to estrogens in both men and women.
In addition to their role as natural hormones, androgens are used as medications; for information on androgens as medications, see the androgen replacement therapy and anabolic steroid articles.
Types and examples
The main subset of androgens, known as adrenal androgens, is composed of 19-carbon steroids synthesized in the zona reticularis, the innermost layer of the adrenal cortex. Adrenal androgens function as weak steroids (though some are precursors), and the subset includes dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEA-S), androstenedione (A4), and androstenediol (A5).
Besides testosterone, other androgens include:
Dehydroepiandrosterone (DHEA) is a steroid hormone produced in the adrenal cortex from cholesterol. It is the primary precursor of both the androgen and estrogen sex hormones. DHEA is also called dehydroisoandrosterone or dehydroandrosterone.
Androstenedione (A4) is an androgenic steroid produced by the testes, adrenal cortex, and ovaries. While androstenedione is converted metabolically to testosterone and other androgens, it is also the parent structure of estrone. Use of androstenedione as an athletic or bodybuilding supplement has been banned by the International Olympic Committee, as well as other sporting organizations.
Androstenediol (A5) is a steroid metabolite of DHEA and the precursor to sex hormones testosterone and estradiol.
Androsterone is a chemical byproduct created during the breakdown of androgens, or derived from progesterone, that also exerts minor masculinising effects, but with one-seventh the intensity of testosterone. It is found in approximately equal amounts in the plasma and urine of both males and females.
Dihydrotestosterone (DHT) is a metabolite of testosterone, and a more potent androgen than testosterone in that it binds more strongly to androgen receptors. It is produced in the skin and reproductive tissue.
A4 and testosterone can also carry an extra hydroxyl (−OH) or ketone (=O) group at position 11, giving 11-hydroxyandrostenedione, 11-ketoandrostenedione, 11-hydroxytestosterone, and 11-ketotestosterone. The last of these has the same biological activity as testosterone, so these 11-oxygenated androgens are also important in healthy individuals and in patients with conditions such as congenital adrenal hyperplasia, polycystic ovarian syndrome, or premature adrenarche.
Female ovarian and adrenal androgens
The ovaries and adrenal glands also produce androgens, but at much lower levels than the testes. Regarding the relative contributions of ovaries and adrenal glands to female androgen levels, in a study with six menstruating women the following observations have been made:
Adrenal contribution to peripheral T, DHT, A, DHEA and DHEA-S is relatively constant throughout the menstrual cycle.
Ovarian contribution of peripheral T, A and DHEA-S reaches maximum levels at mid-cycle, whereas ovarian contribution to peripheral DHT and DHEA does not seem to be influenced by the menstrual cycle.
Ovary and adrenal cortex contribute equally to peripheral T, DHT and A, with the exception that at mid-cycle ovarian contribution of peripheral A is twice that of the adrenal.
Peripheral DHEA and DHEA-S are produced mainly in the adrenal cortex which provides 80% of DHEA and over 90% of DHEA-S.
Biological function
Male prenatal development
Testes formation
During mammalian development, the gonads are at first capable of becoming either ovaries or testes. In humans, starting at about week 4, the gonadal rudiments are present within the intermediate mesoderm adjacent to the developing kidneys. At about week 6, epithelial sex cords develop within the forming testes and incorporate the germ cells as they migrate into the gonads. In males, certain Y chromosome genes, particularly SRY, control development of the male phenotype, including conversion of the early bipotential gonad into testes. In males, the sex cords fully invade the developing gonads.
Androgen production
The mesoderm-derived epithelial cells of the sex cords in developing testes become the Sertoli cells, which will function to support sperm cell formation. A minor population of nonepithelial cells appear between the tubules by week 8 of human fetal development. These are Leydig cells. Soon after they differentiate, Leydig cells begin to produce androgens.
Androgen effects
The androgens function as paracrine hormones required by the Sertoli cells to support sperm production. They are also required for the masculinization of the developing male fetus (including penis and scrotum formation). Under the influence of androgens, remnants of the mesonephron, the Wolffian ducts, develop into the epididymis, vas deferens and seminal vesicles. This action of androgens is supported by a hormone from Sertoli cells, Müllerian inhibitory hormone (MIH), which prevents the embryonic Müllerian ducts from developing into fallopian tubes and other female reproductive tract tissues in male embryos. MIH and androgens cooperate to allow for movement of testes into the scrotum.
Early regulation
Before the embryo begins producing the pituitary hormone luteinizing hormone (LH), at about weeks 11–12, human chorionic gonadotrophin (hCG) promotes the differentiation of Leydig cells and their production of androgens from week 8. Androgen action in target tissues often involves conversion of testosterone to 5α-dihydrotestosterone (DHT).
Male pubertal development
At the time of puberty, androgen levels increase dramatically in males, and androgens mediate the development of masculine secondary sexual characteristics as well as the activation of spermatogenesis and fertility and masculine behavioral changes such as increased sex drive. Masculine secondary sexual characteristics include androgenic hair, voice deepening, emergence of the Adam's apple, broadening of the shoulders, increased muscle mass, and penile growth.
Spermatogenesis
During puberty, androgen, LH and follicle stimulating hormone (FSH) production increase and the sex cords hollow out, forming the seminiferous tubules, and the germ cells start to differentiate into sperm. Throughout adulthood, androgens and FSH cooperatively act on Sertoli cells in the testes to support sperm production. Exogenous androgen supplements can be used as a male contraceptive. Elevated androgen levels caused by use of androgen supplements can inhibit production of LH and block production of endogenous androgens by Leydig cells. Without the locally high levels of androgens in testes due to androgen production by Leydig cells, the seminiferous tubules can degenerate, resulting in infertility. For this reason, many transdermal androgen patches are applied to the scrotum.
Fat deposition
Males typically have less body fat than females. Recent results indicate androgens inhibit the ability of some fat cells to store lipids by blocking a signal transduction pathway that normally supports adipocyte function. Also, androgens, but not estrogens, increase beta adrenergic receptors while decreasing alpha adrenergic receptors, which results in increased levels of epinephrine/norepinephrine due to lack of alpha-2 receptor negative feedback, and decreased fat accumulation due to epinephrine/norepinephrine then acting on lipolysis-inducing beta receptors.
Muscle mass
Males typically have more skeletal muscle mass than females. Androgens promote the enlargement of skeletal muscle cells in a coordinated manner by acting on several cell types in skeletal muscle tissue. One such cell type, the myoblast, carries androgen receptors involved in generating muscle. Fusion of myoblasts generates myotubes, in a process linked to androgen receptor levels. Higher androgen levels lead to increased expression of the androgen receptor.
Brain
Circulating levels of androgens can influence human behavior because some neurons are sensitive to steroid hormones. Androgen levels have been implicated in the regulation of human aggression and libido. Indeed, androgens are capable of altering the structure of the brain in several species, including mice, rats, and primates, producing sex differences. However, more recent studies of the general mood of transgender men who have undergone hormone replacement therapy (replacing estrogens with androgens) do not show any substantial long-term behavioral changes.
Numerous reports have shown androgens alone are capable of altering the structure of the brain, but identification of which alterations in neuroanatomy stem from androgens or estrogens is difficult, because of their potential for conversion.
Evidence from neurogenesis (formation of new neurons) studies on male rats has shown that the hippocampus is a useful brain region to examine when determining the effects of androgens on behavior. To examine neurogenesis, wild-type male rats were compared with male rats that had androgen insensitivity syndrome, a genetic difference resulting in complete or partial insensitivity to androgens and a lack of external male genitalia.
Neural injections of bromodeoxyuridine (BrdU) were applied to males of both groups to test for neurogenesis. Analysis showed that testosterone and dihydrotestosterone regulated adult hippocampal neurogenesis (AHN). Adult hippocampal neurogenesis was regulated through the androgen receptor in the wild-type male rats, but not in the androgen-insensitive male rats. To further test the role of activated androgen receptors on AHN, flutamide, an antiandrogen drug that competes with testosterone and dihydrotestosterone for androgen receptors, and dihydrotestosterone were administered to normal male rats. Dihydrotestosterone increased the number of BrdU-labeled cells, while flutamide inhibited these cells.
Moreover, estrogens had no effect. This research demonstrates how androgens can increase AHN.
Researchers also examined how mild exercise affects androgen synthesis, which in turn activates N-methyl-D-aspartate (NMDA) receptors and thereby promotes AHN.
NMDA receptor activation induces a calcium flux that allows for synaptic plasticity, which is crucial for AHN.
Researchers injected both orchidectomized (ORX) (castrated) and sham castrated male rats with BrdU to determine if the number of new cells was increased. They found that AHN in male rats is increased with mild exercise by boosting synthesis of dihydrotestosterone in the hippocampus.
Again it was noted that AHN was not increased via activation of the estrogen receptors.
Androgen regulation decreases the likelihood of depression in males. Male rats treated neonatally with flutamide developed more depression-like symptoms as preadolescents compared to control rats.
Again BrdU was injected into both groups of rats in order to see if cells were multiplying in the living tissue. These results demonstrate how the organization of androgens has a positive effect on preadolescent hippocampal neurogenesis that may be linked with lower depression-like symptoms.
Social isolation has a hindering effect on AHN, whereas normal regulation of androgens increases AHN. A study using male rats showed that testosterone may buffer against the effects of social isolation, allowing hippocampal neurogenesis to reach homeostasis (regulation that keeps internal conditions stable). A BrdU analysis showed that excess testosterone did not increase this buffering effect; that is, natural circulating levels of androgens suffice to cancel out the negative effects of social isolation on AHN.
Female-specific effects
Androgens have potential roles in relaxation of the myometrium via non-genomic, androgen receptor-independent pathways, preventing premature uterine contractions in pregnancy.
Androgen insensitivity
Reduced ability of an XY-karyotype fetus to respond to androgens can result in one of several conditions, including infertility and several forms of intersex conditions.
Miscellaneous
Yolk androgen levels in certain birds have been positively correlated with social dominance later in life; the American coot is one example.
Biological activity
Androgens bind to and activate androgen receptors (ARs) to mediate most of their biological effects.
Relative potency
Determined by consideration of all biological assay methods:
5α-Dihydrotestosterone (DHT) was 2.4 times more potent than testosterone at maintaining normal prostate weight and duct lumen mass (a measure of epithelial cell stimulation), whereas DHT was equally potent as testosterone at preventing prostate cell death after castration.
One of the 11-oxygenated androgens, namely 11-ketotestosterone, has the same potency as testosterone.
Non-genomic actions
Androgens have also been found to signal through membrane androgen receptors, which are distinct from the classical nuclear androgen receptor.
Biochemistry
Biosynthesis
Androgens are synthesized from cholesterol and are produced primarily in the gonads (testicles and ovaries) and also in the adrenal glands. The testicles produce a much higher quantity than the ovaries. Conversion of testosterone to the more potent DHT occurs in the prostate gland, liver, brain and skin.
Metabolism
Androgens are metabolized mainly in the liver.
Medical uses
A low testosterone level (hypogonadism) in men may be treated with testosterone administration. Prostate cancer may be treated by removing the major source of testosterone through testicle removal (orchiectomy), or with antiandrogens, agents which block androgens from accessing their receptor.
Ayurveda
Ayurveda is an alternative medicine system with historical roots in the Indian subcontinent. It is heavily practiced throughout India and Nepal, where as much as 80% of the population report using ayurveda. The theory and practice of ayurveda is pseudoscientific, and toxic metals such as lead are used as ingredients in many ayurvedic medicines.
Ayurveda therapies have varied and evolved over more than two millennia. Therapies include herbal medicines, special diets, meditation, yoga, massage, laxatives, enemas, and medical oils. Ayurvedic preparations are typically based on complex herbal compounds, minerals, and metal substances (perhaps under the influence of early Indian alchemy or rasashastra). Ancient ayurveda texts also taught surgical techniques, including rhinoplasty, lithotomy, sutures, cataract surgery, and the extraction of foreign objects.
Historical evidence for ayurvedic texts, terminology and concepts appears from the middle of the first millennium BCE onwards. The main classical ayurveda texts begin with accounts of the transmission of medical knowledge from the gods to sages, and then to human physicians. Printed editions of the Sushruta Samhita (Sushruta's Compendium), frame the work as the teachings of Dhanvantari, the Hindu deity of ayurveda, incarnated as King Divodāsa of Varanasi, to a group of physicians, including Sushruta. The oldest manuscripts of the work, however, omit this frame, ascribing the work directly to King Divodāsa.
In ayurveda texts, dosha balance is emphasized, and suppressing natural urges is considered unhealthy and claimed to lead to illness. Ayurveda treatises describe three elemental doshas: vāta, pitta and kapha, and state that balance (Skt. sāmyatva) of the doshas results in health, while imbalance (viṣamatva) results in disease. Ayurveda treatises divide medicine into eight canonical components. Ayurveda practitioners had developed various medicinal preparations and surgical procedures from at least the beginning of the common era.
Ayurveda has been adapted for Western consumption, notably by Baba Hari Dass in the 1970s and Maharishi ayurveda in the 1980s.
Although some Ayurvedic treatments can help relieve the symptoms of cancer, there is no good evidence that the disease can be treated or cured through ayurveda.
Some ayurvedic preparations have been found to contain lead, mercury, and arsenic, substances known to be harmful to humans. A 2008 study found the three substances in close to 21% of U.S. and Indian-manufactured patent ayurvedic medicines sold through the Internet. The public health implications of such metallic contaminants in India are unknown.
Etymology
The term āyurveda is composed of two words: āyus, "life" or "longevity", and veda, "knowledge", and is translated as "knowledge of longevity" or "knowledge of life and longevity".
Eight components
The earliest classical Sanskrit works on ayurveda describe medicine as being divided into eight components (Skt. aṅga). This characterization of the physician's art, "the medicine that has eight components" (), is first found in the Sanskrit epic the Mahābhārata, c. 4th century BCE. The components are:
Kāyachikitsā: general medicine, medicine of the body
Kaumāra-bhṛtya (Pediatrics): Discussions about prenatal and postnatal care of baby and mother; methods of conception; choosing the child's sex, intelligence, and constitution; childhood diseases; and midwifery
Śalyatantra: surgical techniques and the extraction of foreign objects
Śhālākyatantra: treatment of ailments affecting openings or cavities in the upper body: ears, eyes, nose, mouth, etc.
Bhūtavidyā: pacification of possessing spirits, and the people whose minds are affected by such possession
Agadatantra/Vishagara-vairodh Tantra (Toxicology): includes epidemics; toxins in animals, vegetables and minerals; and keys for recognizing those anomalies and their antidotes
Rasāyantantra: rejuvenation and tonics for increasing lifespan, intellect and strength
Vājīkaraṇatantra: aphrodisiacs; treatments for increasing the volume and viability of semen and sexual pleasure; infertility problems; and spiritual development (transmutation of sexual energy into spiritual energy)
Principles and terminology
The central theoretical ideas of ayurveda show parallels with Samkhya and Vaisheshika philosophies, as well as with Buddhism and Jainism. Balance is emphasized, and suppressing natural urges is considered unhealthy and claimed to lead to illness. For example, to suppress sneezing is said to potentially give rise to shoulder pain. However, people are also cautioned to stay within the limits of reasonable balance and measure when following nature's urges. For example, emphasis is placed on moderation of food intake, sleep, and sexual intercourse.
According to ayurveda, the human body is composed of tissues (dhatus), waste (malas), and humoral biomaterials (doshas). The seven dhatus are chyle (rasa), blood (rakta), muscles (māmsa), fat (meda), bone (asthi), marrow (majja), and semen (shukra). Like the medicine of classical antiquity, the classic treatises of ayurveda divided bodily substances into five classical elements (panchamahabhuta), viz. earth, water, fire, air and ether. There are also twenty gunas (qualities or characteristics) which are considered to be inherent in all matter. These are organized in ten pairs: heavy/light, cold/hot, unctuous/dry, dull/sharp, stable/mobile, soft/hard, non-slimy/slimy, smooth/coarse, minute/gross, and viscous/liquid.
The three postulated elemental bodily humours, the doshas or tridosha, are vata (air, which some modern authors equate with the nervous system), pitta (bile, fire, equated by some with enzymes), and kapha (phlegm, or earth and water, equated by some with mucus). Contemporary critics assert that doshas are not real, but are a fictional concept. The humours (doshas) may also affect mental health. Each dosha has particular attributes and roles within the body and mind; the natural predominance of one or more doshas thus explains a person's physical constitution (prakriti) and personality. Ayurvedic tradition holds that imbalance among the bodily and mental doshas is a major etiologic component of disease. One ayurvedic view is that the doshas are balanced when they are equal to each other, while another view is that each human possesses a unique combination of the doshas which define this person's temperament and characteristics. In either case, it says that each person should modulate their behavior or environment to increase or decrease the doshas and maintain their natural state. Practitioners of ayurveda must determine an individual's bodily and mental dosha makeup, as certain prakriti are said to predispose one to particular diseases. For example, a person who is thin, shy, excitable, has a pronounced Adam's apple, and enjoys esoteric knowledge is likely vata prakriti and therefore more susceptible to conditions such as flatulence, stuttering, and rheumatism. Deranged vata is also associated with certain mental disorders due to excited or excess vayu (gas), although the ayurvedic text Charaka Samhita also attributes "insanity" (unmada) to cold food and possession by the ghost of a sinful Brahman (brahmarakshasa).
Ama (a Sanskrit word meaning "uncooked" or "undigested") is used to refer to the concept of anything that exists in a state of incomplete transformation. With regards to oral hygiene, it is claimed to be a toxic byproduct generated by improper or incomplete digestion. The concept has no equivalent in standard medicine.
In medieval taxonomies of the Sanskrit knowledge systems, ayurveda is assigned a place as a subsidiary Veda (upaveda). Some medicinal plant names from the Atharvaveda and other Vedas can be found in subsequent ayurveda literature. Some other schools of thought consider ayurveda the "Fifth Veda". The earliest recorded theoretical statements about the canonical models of disease in ayurveda occur in the earliest Buddhist Canon.
Practice
Ayurvedic practitioners regard physical existence, mental existence, and personality as three separate elements of a whole person with each element being able to influence the others. This holistic approach used during diagnosis and healing is a fundamental aspect of ayurveda. Another part of ayurvedic treatment says that there are channels (srotas) which transport fluids, and that the channels can be opened up by massage treatment using oils and Swedana (fomentation). Unhealthy, or blocked, channels are thought to cause disease.
Diagnosis
Ayurveda has eight ways to diagnose illness, called nadi (pulse), mootra (urine), mala (stool), jihva (tongue), shabda (speech), sparsha (touch), druk (vision), and aakruti (appearance). Ayurvedic practitioners approach diagnosis by using the five senses. For example, hearing is used to observe the condition of breathing and speech. The study of vulnerable points, or marma, is particular to ayurvedic medicine.
Treatment and prevention
Two of the eight branches of classical ayurveda deal with surgery (Śalya-cikitsā and Śālākya-tantra), but contemporary ayurveda tends to stress attaining vitality by building a healthy metabolic system and maintaining good digestion and excretion. Ayurveda also focuses on exercise, yoga, and meditation. One type of prescription is a Sattvic diet.
Ayurveda follows the concept of Dinacharya, which says that natural cycles (waking, sleeping, working, meditation etc.) are important for health. Hygiene, including regular bathing, cleaning of teeth, oil pulling, tongue scraping, skin care, and eye washing, is also a central practice.
Substances used
The vast majority (90%) of ayurvedic remedies are plant based. Plant-based treatments in ayurveda may be derived from roots, leaves, fruits, bark, or seeds; some examples of plant-based substances include cardamom and cinnamon. In the 19th century, William Dymock and co-authors summarized hundreds of plant-derived medicines along with the uses, microscopic structure, chemical composition, toxicology, prevalent myths and stories, and relation to commerce in British India. Triphala, an herbal formulation of three fruits, Amalaki, Bibhitaki, and Haritaki, is one of the most commonly used Ayurvedic remedies. The herbs Withania somnifera (Ashwagandha) and Ocimum tenuiflorum (Tulsi) are also routinely used in ayurveda.
Animal products used in ayurveda include milk, bones, and gallstones. In addition, fats are prescribed both for consumption and for external use. Consumption of minerals, including sulphur, arsenic, lead, copper sulfate and gold, are also prescribed. The addition of minerals to herbal medicine is called rasashastra.
Ayurveda uses alcoholic beverages called Madya, which are said to adjust the doshas by increasing pitta and reducing vata and kapha. Madya are classified by the raw material and fermentation process, and the categories include: sugar-based, fruit-based, cereal-based, cereal-based with herbs, fermented with vinegar, and tonic wines. The intended outcomes can include causing purgation, improving digestion or taste, creating dryness, or loosening joints. Ayurvedic texts describe Madya as non-viscid and fast-acting, and say that it enters and cleans minute pores in the body.
Purified opium is used in eight ayurvedic preparations and is said to balance the vata and kapha doshas and increase the pitta dosha. It is prescribed for diarrhea and dysentery, for increasing sexual and muscular ability, and for affecting the brain. The sedative and pain-relieving properties of opium are recognized in ayurveda. The use of opium is found in the ancient ayurvedic texts, and is first mentioned in the Sarngadhara Samhita (1300–1400 CE), a book on pharmacy used in Rajasthan in Western India, as an ingredient of an aphrodisiac to delay male ejaculation. It is possible that opium was brought to India along with or before Muslim conquests. The book Yoga Ratnakara (1700–1800 CE, unknown author), which is popular in Maharashtra, uses opium in a herbal-mineral composition prescribed for diarrhea. In the Bhaisajya Ratnavali, opium and camphor are used for acute gastroenteritis. In this drug, the respiratory depressant action of opium is counteracted by the respiratory stimulant property of camphor. Later books have included its narcotic property for use as an analgesic.
Cannabis indica is also mentioned in the ancient ayurveda books, and is first mentioned in the Sarngadhara Samhita as a treatment for diarrhea. In the Bhaisajya Ratnavali it is named as an ingredient in an aphrodisiac.
Ayurveda says that both oil and tar can be used to stop bleeding, and that traumatic bleeding can be stopped by four different methods: ligation of the blood vessel, cauterisation by heat, use of preparations to facilitate clotting, and use of preparations to constrict the blood vessels.
Massage with oil is commonly prescribed by ayurvedic practitioners. Oils are used in a number of ways, including regular consumption, anointing, smearing, head massage, application to affected areas, and oil pulling. Liquids may also be poured on the patient's forehead, a technique called shirodhara.
Panchakarma
According to ayurveda, panchakarma are techniques to eliminate toxic elements from the body. Panchakarma refers to five actions, which are meant to be performed in a designated sequence with the stated aim of restoring balance in the body through a process of purgation.
Current status
Ayurveda is widely practiced in India and Nepal where public institutions offer formal study in the form of a Bachelor of Ayurvedic Medicine and Surgery (BAMS) degree. In certain parts of the world, the legal standing of practitioners is equivalent to that of conventional medicine. Several scholars have described the contemporary Indian application of ayurvedic practice as being "biomedicalized" relative to the more "spiritualized" emphasis to practice found in variants in the West.
Exposure to European developments in medicine from the nineteenth century onwards, through the European colonization of India and the subsequent institutionalized support for European forms of medicine amongst settlers of European heritage, challenged ayurveda and called its entire epistemology into question. From the twentieth century, ayurveda became politically, conceptually, and commercially dominated by modern biomedicine, resulting in "modern ayurveda" and "global ayurveda". Modern ayurveda is geographically located in the Indian subcontinent and tends towards secularization through minimization of the magic and mythic aspects of ayurveda. Global ayurveda encompasses multiple forms of practice that developed through dispersal to a wide geographical area outside of India. Smith and Wujastyk further delineate that global ayurveda includes those primarily interested in the ayurveda pharmacopeia, and also the practitioners of New Age ayurveda (which may link ayurveda to yoga and Indian spirituality and/or emphasize preventative practice, mind-body medicine, or Maharishi ayurveda).
Since the 1980s, ayurveda has also become the subject of interdisciplinary studies in ethnomedicine which seeks to integrate the biomedical sciences and humanities to improve the pharmacopeia of ayurveda. According to industry research, the global ayurveda market was worth US$4.5 billion in 2017.
The Indian subcontinent
India
It was reported in 2008 and again in 2018 that 80 percent of people in India used ayurveda exclusively or combined with conventional Western medicine. A 2014 national health survey found that, in general, forms of the Indian system of medicine or AYUSH (ayurveda, yoga and naturopathy, unani, siddha, and homeopathy) were used by about 3.5% of patients who were seeking outpatient care over a two-week reference period.
In 1970, the Parliament of India passed the Indian Medical Central Council Act which aimed to standardise qualifications for ayurveda practitioners and provide accredited institutions for its study and research. In 1971, the Central Council of Indian Medicine (CCIM) was established under the Department of Ayurveda, Yoga and Naturopathy, Unani, Siddha medicine and Homoeopathy (AYUSH), Ministry of Health and Family Welfare, to monitor higher education in ayurveda in India. The Indian government supports research and teaching in ayurveda through many channels at both the national and state levels, and helps institutionalise traditional medicine so that it can be studied in major towns and cities. The state-sponsored Central Council for Research in Ayurvedic Sciences (CCRAS) is designed to do research on ayurveda. Many clinics in urban and rural areas are run by professionals who qualify from these institutes. India had over 180 training centers that offered degrees in traditional ayurvedic medicine.
To fight biopiracy and unethical patents, the government of India set up the Traditional Knowledge Digital Library in 2001 to serve as a repository for formulations from systems of Indian medicine, such as ayurveda, unani and siddha medicine. The formulations come from over 100 traditional ayurveda books.
An Indian Academy of Sciences document quoting a 2003–04 report states that India had 432,625 registered medical practitioners, 13,925 dispensaries, 2,253 hospitals with a bed strength of 43,803, 209 undergraduate teaching institutions, and 16 postgraduate institutions. In 2012, it was reported that insurance companies covered expenses for ayurvedic treatments for conditions such as spinal cord disorders, bone disorders, arthritis and cancer. Such claims constituted 5–10 percent of the country's health insurance claims.
Maharashtra Andhashraddha Nirmoolan Samiti, an organisation dedicated to fighting superstition in India, considers ayurveda to be pseudoscience.
On 9 November 2014, India formed the Ministry of AYUSH. National Ayurveda Day is also observed in India on the birthday of Dhanvantari, which falls on Dhanteras.
In 2016, the World Health Organization (WHO) published a report titled "The Health Workforce in India" which found that 31 percent of those who claimed to be doctors in India in 2001 were educated only up to the secondary school level and 57 percent went without any medical qualification. The WHO study found that the situation was worse in rural India with only 18.8 percent of doctors holding a medical qualification. Overall, the study revealed that nationally the density of all doctors (mainstream, ayurvedic, homeopathic and unani) was 8 doctors per 10,000 people compared to 13 per 10,000 people in China.
Nepal
About 75% to 80% of the population of Nepal use ayurveda. As of 2009, ayurveda was considered to be the most common and popular form of medicine in Nepal.
Sri Lanka
The Sri Lankan tradition of ayurveda is similar to the Indian tradition. Practitioners of ayurveda in Sri Lanka refer to Sanskrit texts which are common to both countries. However, they do differ in some aspects, particularly in the herbs used.
In 1980, the Sri Lankan government established a Ministry of Indigenous Medicine to revive and regulate ayurveda. The Institute of Indigenous Medicine (affiliated to the University of Colombo) offers undergraduate, postgraduate, and MD degrees in ayurveda medicine and surgery, and similar degrees in unani medicine. In 2010, the public system had 62 ayurvedic hospitals and 208 central dispensaries, which served about 3 million people (about 11% of Sri Lanka's population). There are an estimated 20,000 registered practitioners of ayurveda in Sri Lanka.
According to the Mahavamsa, an ancient chronicle of Sinhalese royalty from the sixth century CE, King Pandukabhaya (reigned 437 BCE to 367 BCE) had lying-in-homes and ayurvedic hospitals (Sivikasotthi-Sala) built in various parts of the country. This is the earliest documented evidence available of institutions dedicated specifically to the care of the sick anywhere in the world. The hospital at Mihintale is the oldest in the world.
Outside the Indian subcontinent
Ayurveda is a system of traditional medicine developed during antiquity and the medieval period, and as such is comparable to pre-modern Chinese and European systems of medicine. In the 1960s, ayurveda began to be advertised as alternative medicine in the Western world. Due to different laws and medical regulations around the globe, the expanding practice and commercialisation of ayurveda raised ethical and legal issues. Ayurveda was adapted for Western consumption, particularly by Baba Hari Dass in the 1970s and by Maharishi Ayurveda in the 1980s. In some cases, this involved active fraud on the part of proponents of ayurveda in an attempt to falsely represent the system as equal to the standards of modern medical research.
United States
Baba Hari Dass was an early proponent who helped bring ayurveda to the United States in the early 1970s. His teachings led to the establishment of the Mount Madonna Institute. He invited several notable ayurvedic teachers, including Vasant Lad, Sarita Shrestha, and Ram Harsh Singh. The ayurvedic practitioner Michael Tierra wrote that the "history of Ayurveda in North America will always owe a debt to the selfless contributions of Baba Hari Dass".
In the United States, the practice of ayurveda is not licensed or regulated by any state. The National Center for Complementary and Integrative Health (NCCIH) stated that "Few well-designed clinical trials and systematic research reviews suggest that Ayurvedic approaches are effective". The NCCIH warned against the issue of heavy metal poisoning, and emphasised the use of conventional health providers first. As of 2018, the NCCIH reported that 240,000 Americans were using ayurvedic medicine.
Europe
The first ayurvedic clinic in Switzerland was opened in 1987 by Maharishi Mahesh Yogi. In 2015, the government of Switzerland introduced a federally recognized diploma in ayurveda.
Classification and efficacy
Ayurvedic medicine is considered pseudoscientific because its premises are not based on science. Both the lack of scientific soundness in the theoretical foundations of ayurveda and the quality of research have been criticized.
Although laboratory experiments suggest that some herbs and substances in ayurveda might be developed into effective treatments, there is no evidence that any are effective in themselves. There is no good evidence that ayurvedic medicine is effective to treat or cure cancer in people. Although ayurveda may help "improve quality of life" and Cancer Research UK also acknowledges that "researchers have found that some Ayurvedic treatments can help relieve cancer symptoms", the organization warns that some ayurvedic drugs contain toxic substances or may interact with legitimate cancer drugs in a harmful way.
Ethnologist Johannes Quack writes that although the rationalist movement Maharashtra Andhashraddha Nirmoolan Samiti officially labels ayurveda a pseudoscience akin to astrology, these practices are in fact embraced by many of the movement's members.
A review of the use of ayurveda for cardiovascular disease concluded that the evidence is not convincing for the use of any ayurvedic herbal treatment for heart disease or hypertension, but that many herbs used by ayurvedic practitioners could be appropriate for further research.
Research
In India, research in ayurveda is undertaken by the Ministry of AYUSH through a national network of research institutes.
In Nepal, the National Ayurvedic Training and Research Centre (NATRC) researches medicinal herbs in the country.
In Sri Lanka, the Ministry of Health, Nutrition and Indigenous Medicine looks after the research in ayurveda through various national research institutes.
Use of toxic metals
Rasashastra, the practice of adding metals, minerals or gems to herbal preparations, may include toxic heavy metals such as lead, mercury and arsenic. The public health implications of metals in rasashastra in India are unknown. Adverse reactions to herbs are described in traditional ayurvedic texts, but practitioners are reluctant to admit that herbs could be toxic, and reliable information on herbal toxicity is not readily available. There is a communication gap between practitioners of modern medicine and ayurveda.
Some traditional Indian herbal medicinal products contain harmful levels of heavy metals, including lead. For example, ghasard, a product commonly given to infants for digestive issues, has been found to have up to 1.6% lead concentration by weight, leading to lead encephalopathy. A 1990 study on ayurvedic medicines in India found that 41% of the products tested contained arsenic, and that 64% contained lead and mercury. A 2004 study found toxic levels of heavy metals in 20% of ayurvedic preparations made in South Asia and sold in the Boston area, and concluded that ayurvedic products posed serious health risks and should be tested for heavy-metal contamination. A 2008 study of more than 230 products found that approximately 20% of remedies (and 40% of rasashastra medicines) purchased over the Internet from U.S. and Indian suppliers contained lead, mercury or arsenic. A 2015 study of users in the United States found elevated blood lead levels in 40% of those tested, leading physician and former U.S. Air Force flight surgeon Harriet Hall to say that "Ayurveda is basically superstition mixed with a soupçon of practical health advice. And it can be dangerous." A 2022 study found that ayurvedic preparations purchased over-the-counter in Chandigarh, India, had levels of zinc, mercury, arsenic and lead over the limits set by the Food and Agriculture Organisation / World Health Organisation. 83% exceeded the limit for zinc, 69% for mercury, 14% for arsenic and 5% for lead.
Heavy metals are thought of as active ingredients by advocates of Indian herbal medicinal products. According to ancient ayurvedic texts, certain physico-chemical purification processes such as samskaras or shodhanas (for metals) 'detoxify' the heavy metals used. These are similar to the Chinese pao zhi, although the ayurvedic techniques are more complex and may involve physical pharmacy techniques as well as mantras. However, these products have nonetheless caused severe lead poisoning and other toxic effects. Between 1978 and 2008, "more than 80 cases of lead poisoning associated with Ayurvedic medicine use [were] reported worldwide". In 2012, the U.S. Centers for Disease Control and Prevention (CDC) linked ayurvedic drugs to lead poisoning, based on cases where toxic materials were found in the blood of pregnant women who had taken ayurvedic drugs.
Ayurvedic practitioners attribute the toxicity of bhasmas (ash products) to improper manufacturing processes, contaminants, improper use of ayurvedic medicine, and poor quality of raw materials, and argue that toxic end products and improper procedures are the work of charlatans.
In India, the government ruled that ayurvedic products must be labelled with their metallic content. However, in Current Science, a publication of the Indian Academy of Sciences, M. S. Valiathan said that "the absence of post-market surveillance and the paucity of test laboratory facilities [in India] make the quality control of Ayurvedic medicines exceedingly difficult at this time". In the United States, most ayurvedic products are marketed without having been reviewed or approved by the FDA. Since 2007, the FDA has placed an import alert on some ayurvedic products in order to prevent them from entering the United States. A 2012 toxicological review of mercury-based traditional herbo-metallic preparations concluded that the long-term pharmacotherapeutic and in-depth toxicity studies of these preparations are lacking.
History
Some scholars assert that the concepts of traditional ayurvedic medicine have existed since the times of the Indus Valley civilisation but since the Indus script has not been deciphered, such assertions are moot. The Atharvaveda contains hymns and prayers aimed at curing disease. There are various legendary accounts of the origin of ayurveda, such as that it was received by Dhanvantari (or Divodasa) from Brahma. Tradition also holds that the writings of ayurveda were influenced by a lost text by the sage Agnivesha.
Ayurveda is one of the few systems of medicine developed in ancient times that is still widely practised in modern times. As such, it is open to the criticism that its conceptual basis is obsolete and that its contemporary practitioners have not taken account of the developments in medicine. Responses to this situation led to an impassioned debate in India during the early decades of the twentieth century, between proponents of unchanging tradition (śuddha "pure" ayurveda) and those who thought ayurveda should modernize and syncretize (aśuddha "impure, tainted" ayurveda). The political debate about the place of ayurveda in contemporary India has continued to the present, both in the public arena and in government. Debate about the place of ayurvedic medicine in the contemporary internationalized world also continues today.
Main texts
Many ancient works on ayurvedic medicine are lost to posterity, but manuscripts of three principal early texts on ayurveda have survived to the present day. These works are the Charaka Samhita, the Sushruta Samhita and the Bhela Samhita. The dating of these works is historically complicated since they each internally present themselves as composite works compiled by several editors. All past scholarship on their dating has been evaluated by Meulenbeld in volumes IA and IB of his History of Indian Medical Literature. After considering the evidence and arguments concerning the Suśrutasaṃhitā, Meulenbeld stated (IA, 348): "The Suśrutasaṃhitā is most probably the work of an unknown author who drew much of the material he incorporated in his treatise from a multiplicity of earlier sources from various periods. This may explain that many scholars yield to the temptation to recognize a number of distinct layers and, consequently, try to identify elements belonging to them. As we have seen, the identification of features thought to belong to a particular stratum is in many cases determined by preconceived ideas on the age of the strata and their supposed authors." The dating of this work to 600 BCE was first proposed by Hoernle over a century ago, but has long since been overturned by subsequent historical research. The current consensus amongst medical historians of South Asia is that the Suśrutasaṃhitā was compiled over a period of time starting with a kernel of medical ideas from the century or two BCE and then being revised by several hands into its present form by about 500 CE. The view that the text was updated by the Buddhist scholar Nagarjuna in the 2nd century CE has been disproved, although the last chapter of the work, the Uttaratantra, was added by an unknown later author before 500 CE.
Similar arguments apply to the Charaka Samhita, written by Charaka, and the Bhela Samhita, attributed to Atreya Punarvasu. These works are also dated to the 6th century BCE by non-specialist scholars, but in their present form they are datable to a period between the second and fifth centuries CE. The Charaka Samhita was also updated by Dridhabala during the early centuries of the Common Era.
The Bower Manuscript (dated to the early 6th century CE) includes excerpts from the Bheda Samhita and descriptions of concepts in Central Asian Buddhism. In 1897, A. F. R. Hoernle identified the scribe of the medical portions of the manuscript as a native of India using a northern variant of the Gupta script, who had migrated and become a Buddhist monk in a monastery in Kucha. The Chinese pilgrim Fa Hsien (c. 337–422 CE) wrote about the healthcare system of the Gupta empire (320–550) and described the institutional approach of Indian medicine. This is also visible in the works of Charaka, who describes hospitals and how they should be equipped.
Some dictionaries of materia medica include Astanga nighantu (8th century) by Vagbhata, Paryaya ratnamala (9th century) by Madhava, Siddhasara nighantu (9th century) by Ravi Gupta, Dravyavali (10th century), and Dravyaguna sangraha (11th century) by Chakrapani Datta, among others.
Illnesses portrayed
Underwood and Rhodes state that the early forms of traditional Indian medicine identified fever, cough, consumption, diarrhea, dropsy, abscesses, seizures, tumours, and leprosy, and that treatments included plastic surgery, lithotomy, tonsillectomy, couching (a form of cataract surgery), puncturing to release fluids in the abdomen, extraction of foreign bodies, treatment of anal fistulas, treatment of fractures, amputations, cesarean sections, and stitching of wounds. The use of herbs and surgical instruments became widespread. During this period, treatments were also prescribed for complex ailments, including angina pectoris, diabetes, hypertension, and stones.
Further development and spread
Ayurveda flourished throughout the Indian Middle Ages. Dalhana (fl. 1200), Sarngadhara (fl. 1300) and Bhavamisra (fl. 1500) compiled works on Indian medicine. The medical works of both Sushruta and Charaka were translated into the Chinese language in the 5th century, and during the 8th century they were translated into Arabic and Persian. The 9th-century Persian physician Muhammad ibn Zakariya al-Razi was familiar with the text. The Arabic works derived from the ayurvedic texts eventually reached Europe by the 12th century. In Renaissance Italy, the Branca family of Sicily and Gaspare Tagliacozzi (Bologna) were influenced by the Arabic reception of Sushruta's surgical techniques.
British physicians traveled to India to observe rhinoplasty being performed using Indian methods, and reports on their rhinoplasty methods were published in the Gentleman's Magazine in 1794. Instruments described in the Sushruta Samhita were further modified in Europe. Joseph Constantine Carpue studied plastic surgery methods in India for 20 years and, in 1815, was able to perform the first major rhinoplasty surgery in the western world, using the "Indian" method of nose reconstruction. In 1840 Brett published an article about this technique.
The British had shown some interest in understanding local medicinal practices in the early nineteenth century. A Native Medical Institution was set up in 1822, where both indigenous and European medicine were taught. After the English Education Act 1835, their policy changed to champion European medicine and disparage local practices. After Indian independence, there was more focus on ayurveda and other traditional medical systems. Ayurveda became part of the Indian national healthcare system, with state hospitals for ayurveda established across the country. However, the treatments of traditional medicines were not always integrated with others.
Deinonychus is a genus of dromaeosaurid theropod dinosaur with one described species, Deinonychus antirrhopus. This species, which could grow up to long, lived during the early Cretaceous Period, about 115–108 million years ago (from the mid-Aptian to early Albian stages). Fossils have been recovered from the U.S. states of Montana, Utah, Wyoming, and Oklahoma, in rocks of the Cloverly Formation and Antlers Formation, though teeth that may belong to Deinonychus have been found much farther east in Maryland.
Paleontologist John Ostrom's study of Deinonychus in the late 1960s revolutionized the way scientists thought about dinosaurs, leading to the "dinosaur renaissance" and igniting the debate on whether dinosaurs were warm-blooded or cold-blooded. Before this, the popular conception of dinosaurs had been one of plodding, reptilian giants. Ostrom noted the small body, sleek, horizontal posture, ratite-like spine, and especially the enlarged raptorial claws on the feet, which suggested an active, agile predator.
"Terrible claw" refers to the unusually large, sickle-shaped talon on the second toe of each hind foot. The fossil YPM 5205 preserves a large, strongly curved ungual. In life, archosaurs have a horny sheath over this bone, which extends the length. Ostrom looked at crocodile and bird claws and reconstructed the claw for YPM 5205 as over long. The species name antirrhopus means "counter balance", which refers to Ostrom's idea about the function of the tail. As in other dromaeosaurids, the tail vertebrae have a series of ossified tendons and super-elongated bone processes. These features seemed to make the tail into a stiff counterbalance, but a fossil of the very closely related Velociraptor mongoliensis (IGM 100/986) has an articulated tail skeleton that is curved laterally in a long S-shape. This suggests that, in life, the tail could bend to the sides with a high degree of flexibility. In both the Cloverly and Antlers formations, Deinonychus remains have been found closely associated with those of the ornithopod Tenontosaurus. Teeth discovered associated with Tenontosaurus specimens imply they were hunted, or at least scavenged upon, by Deinonychus.
Discovery and naming
Fossilized remains of Deinonychus have been recovered from the Cloverly Formation of Montana and Wyoming and in the roughly contemporary Antlers Formation of Oklahoma, in North America. The Cloverly Formation has been dated to the late Aptian through early Albian stages of the early Cretaceous, about 115 to 108 Ma. Additionally, teeth found in the Arundel Clay Facies (mid-Aptian) of the Potomac Formation on the Atlantic Coastal Plain of Maryland may be assigned to the genus.
The first remains were uncovered in 1931 in southern Montana near the town of Billings. The team leader, paleontologist Barnum Brown, was primarily concerned with excavating and preparing the remains of the ornithopod dinosaur Tenontosaurus, but in his field report from the dig site to the American Museum of Natural History, he reported the discovery of a small carnivorous dinosaur close to a Tenontosaurus skeleton, "but encased in lime difficult to prepare." He informally called the animal "Daptosaurus agilis" and made preparations for describing it and having the skeleton, specimen AMNH 3015, put on display, but never finished this work. Brown brought back from the Cloverly Formation the skeleton of a smaller theropod with seemingly oversized teeth that he informally named "Megadontosaurus". John Ostrom, reviewing this material decades later, realized that the teeth came from Deinonychus, but the skeleton came from a completely different animal. He named this skeleton Microvenator.
A little more than thirty years later, in August 1964, paleontologist John Ostrom led an expedition from Yale's Peabody Museum of Natural History which discovered more skeletal material near Bridger. Expeditions during the following two summers uncovered more than 1,000 bones, among which were at least three individuals. Since the association between the various recovered bones was weak, the exact number of individual animals represented could not be determined properly, so the type specimen (YPM 5205) of Deinonychus was restricted to the complete left foot and partial right foot that definitely belonged to the same individual. The remaining specimens were catalogued in fifty separate entries at Yale's Peabody Museum, although they could have come from as few as three individuals.
Later study by Ostrom and Grant E. Meyer analyzed their own material as well as Brown's "Daptosaurus" in detail and found them to be the same species. Ostrom first published his findings in February 1969, giving all the referred remains the new name of Deinonychus antirrhopus. The specific name "antirrhopus", from Greek ἀντίρροπος, means "counterbalancing" and refers to the likely purpose of a stiffened tail. In July 1969, Ostrom published a very extensive monograph on Deinonychus.
Though a myriad of bones was available by 1969, many important ones were missing or hard to interpret. There were few postorbital skull elements, no femurs, no sacrum, no furcula or sternum, missing vertebrae, and (Ostrom thought) only a tiny fragment of a coracoid. Ostrom's skeletal reconstruction of Deinonychus included a very unusual pelvic bone—a pubis that was trapezoidal and flat, unlike that of other theropods, but which was the same length as the ischium and which was found right next to it.
Further findings
In 1974, Ostrom published another monograph on the shoulder of Deinonychus in which he realized that the pubis that he had described was actually a coracoid—a shoulder element. In that same year, another specimen of Deinonychus, MCZ 4371, was discovered and excavated in Montana by Steven Orzack during a Harvard University expedition headed by Farish Jenkins. This discovery added several new elements: well preserved femora, pubes, a sacrum, and better ilia, as well as elements of the pes and metatarsus. Ostrom described this specimen and revised his skeletal restoration of Deinonychus. This time it showed the very long pubes, and Ostrom began to suspect that they may even have been slightly retroverted, like those of birds.
A skeleton of Deinonychus, including bones from the original (and most complete) AMNH 3015 specimen, can be seen on display at the American Museum of Natural History, with another specimen (MCZ 4371) on display at the Museum of Comparative Zoology at Harvard University. The American Museum and Harvard specimens are from a different locality than the Yale specimens. Even these two skeletal mounts are lacking elements, including the sterna, sternal ribs, furcula, and gastralia.
Even after all Ostrom's work, several small blocks of lime-encased material remained unprepared in storage at the American Museum. These consisted mostly of isolated bones and bone fragments, including the original matrix, or surrounding rock in which the specimens were initially buried. An examination of these unprepared blocks by Gerald Grellet-Tinner and Peter Makovicky in 2000 revealed an interesting, overlooked feature. Several long, thin bones identified on the blocks as ossified tendons (structures that helped stiffen the tail of Deinonychus) turned out to actually represent gastralia (abdominal ribs). More significantly, a large number of previously unnoticed fossilized eggshells were discovered in the rock matrix that had surrounded the original Deinonychus specimen.
In a subsequent, more detailed report on the eggshells, Grellet-Tinner and Makovicky concluded that the egg almost certainly belonged to Deinonychus, representing the first dromaeosaurid egg to be identified. Moreover, the external surface of one eggshell was found in close contact with the gastralia, suggesting that Deinonychus might have brooded its eggs. This implies that Deinonychus used body heat transfer as a mechanism for egg incubation, and indicates an endothermy similar to modern birds. Further study by Gregory Erickson and colleagues found that this individual was 13 or 14 years old at death and its growth had plateaued. Unlike other theropods in their study of specimens found associated with eggs or nests, it had finished growing at the time of its death.
Implications
Ostrom's description of Deinonychus in 1969 has been described as the most important single discovery of dinosaur paleontology in the mid-20th century. The discovery of this clearly active, agile predator did much to change the scientific (and popular) conception of dinosaurs and opened the door to speculation that some dinosaurs may have been warm-blooded. This development has been termed the dinosaur renaissance. Several years later, Ostrom noted similarities between the forefeet of Deinonychus and that of birds, an observation which led him to revive the hypothesis that birds are descended from dinosaurs. Forty years later, this idea is almost universally accepted.
Because of its extremely bird-like anatomy and close relationship to other dromaeosaurids, paleontologists hypothesize that Deinonychus was probably covered in feathers. Clear fossil evidence of modern avian-style feathers exists for several related dromaeosaurids, including Velociraptor and Microraptor, though no direct evidence is yet known for Deinonychus itself. When conducting studies of such areas as the range of motion in the forelimbs, paleontologists like Phil Senter have taken the likely presence of wing feathers (as present in all known dromaeosaurs with skin impressions) into consideration.
Description
Based on the few fully mature specimens, Paul estimated that Deinonychus could reach in length, with a skull length of , a hip height of and a body mass of . Campione and his colleagues proposed a higher mass estimate of based on femur and humerus circumference. The skull was equipped with powerful jaws lined with around seventy curved, blade-like teeth. Studies of the skull have progressed a great deal over the decades. Ostrom reconstructed the partial, imperfectly preserved skulls that he had as triangular, broad, and fairly similar to Allosaurus. Additional Deinonychus skull material and closely related species found with good three-dimensional preservation show that the palate was more vaulted than Ostrom thought, making the snout far narrower, while the jugals flared broadly, giving greater stereoscopic vision. The skull of Deinonychus was different from that of Velociraptor, however, in that it had a more robust skull roof, like that of Dromaeosaurus, and did not have the depressed nasals of Velociraptor. Both the skull and the lower jaw had fenestrae (skull openings) which reduced the weight of the skull. In Deinonychus, the antorbital fenestra, a skull opening between the eye and nostril, was particularly large.
Deinonychus possessed large "hands" (manus) with three claws on each forelimb. The first digit was shortest and the second was longest. Each hind foot bore a sickle-shaped claw on the second digit, which was probably used during predation.
No skin impressions have ever been found in association with fossils of Deinonychus. Nonetheless, the evidence suggests that the Dromaeosauridae, including Deinonychus, had feathers. The genus Microraptor is both older geologically and more primitive phylogenetically than Deinonychus, and within the same family. Multiple fossils of Microraptor preserve pennaceous, vaned feathers like those of modern birds on the arms, legs, and tail, along with covert and contour feathers. Velociraptor is geologically younger than Deinonychus, but even more closely related. A specimen of Velociraptor has been found with quill knobs on the ulna. Quill knobs are where the follicular ligaments attached, and are a direct indicator of feathers of modern aspect.
Classification
Deinonychus antirrhopus is one of the best known dromaeosaurid species, and also a close relative of the smaller Velociraptor, which is found in younger, Late Cretaceous-age rock formations in Central Asia. The clade they form is called Velociraptorinae. The subfamily name Velociraptorinae was first coined by Rinchen Barsbold in 1983 and originally contained the single genus Velociraptor. Later, Phil Currie included most of the dromaeosaurids. Two Late Cretaceous genera, Tsaagan from Mongolia and the North American Saurornitholestes, may also be close relatives, but the latter is poorly known and hard to classify. Velociraptor and its allies are regarded as using their claws more than their skulls as killing tools, as opposed to dromaeosaurines like Dromaeosaurus, which have stockier skulls. Phylogenetically, the dromaeosaurids represent one of the non-avialan dinosaur groups most closely related to birds. The cladogram below follows a 2015 analysis by paleontologists Robert DePalma, David Burnham, Larry Martin, Peter Larson, and Robert Bakker, using updated data from the Theropod Working Group. This study currently classifies Deinonychus as a member of the Dromaeosaurinae.
A 2021 study of the dromaeosaurid Kansaignathus recovered Deinonychus as a velociraptorine rather than a dromaeosaurine, with Kansaignathus being an intermediate basal form more advanced than Deinonychus but more primitive than Velociraptor. The cladogram below showcases these newly described relationships:
A study in 2022, however, reclassified Deinonychus as a basal member of Dromaeosaurinae again.
Paleobiology
Predatory behavior
In 2009, Manning and colleagues interpreted dromaeosaur claw tips as functioning as a puncture and gripping element, whereas the expanded rear portion of the claw transferred load stress through the structure. They argue that the anatomy, form, and function of the foot's recurved digit II and hand claws of dromaeosaurs support a prey capture/grappling/climbing function. The team also suggest that a ratchet-like "locking" ligament might have provided an energy-efficient way for dromaeosaurs to hook their recurved digit II claw into prey. Shifting body weight locked the claws passively, allowing their jaws to dispatch prey. They conclude that the enhanced climbing abilities of dromaeosaur dinosaurs supported a scansorial (climbing) phase in the evolution of flight.
In 2011, Denver Fowler and colleagues suggested a new method by which Deinonychus and other dromaeosaurs may have captured and restrained prey. This model, known as the "raptor prey restraint" (RPR) model of predation, proposes that Deinonychus killed its prey in a manner very similar to extant accipitrid birds of prey: by leaping onto its quarry, pinning it under its body weight, and gripping it tightly with the large, sickle-shaped claws. Like accipitrids, the dromaeosaur would then begin to feed on the animal while still alive, until it eventually died from blood loss and organ failure. This proposal is based primarily on comparisons between the morphology and proportions of the feet and legs of dromaeosaurs to several groups of extant birds of prey with known predatory behaviors. Fowler found that the feet and legs of dromaeosaurs most closely resemble those of eagles and hawks, especially in terms of having an enlarged second claw and a similar range of grasping motion. However, the short metatarsus and foot strength would have been more similar to those of owls. The RPR method of predation would be consistent with other aspects of Deinonychus's anatomy, such as its unusual jaw and arm morphology. The arms were likely covered in long feathers, and may have been used as flapping stabilizers for balance while atop struggling prey, along with the stiff counterbalancing tail. Its jaws, thought to have had a comparatively weak bite force, may have been used for saw-motion bites, like those of the modern Komodo dragon, which also has a weak bite force, to finish off prey if its kicks were not powerful enough. In 2020, Mark Powers and colleagues analyzed the snout morphology of dromaeosaurids from North America and Asia; their findings suggest that the maxilla of Deinonychus was short and deep, resembling that of short-snouted canids, indicating that Deinonychus specialized on larger prey.
Bite force
Bite force estimates for Deinonychus were first produced in 2005, based on reconstructed jaw musculature. This study concluded that Deinonychus likely had a maximum bite force only 15% that of the modern American alligator. A 2010 study by Paul Gignac and colleagues attempted to estimate the bite force based directly on newly discovered Deinonychus tooth puncture marks in the bones of a Tenontosaurus. These puncture marks came from a large individual, and provided the first evidence that large Deinonychus could bite through bone. Using the tooth marks, Gignac's team were able to determine that the bite force of Deinonychus was significantly higher than earlier biomechanical studies alone had estimated. They found the bite force of Deinonychus to be between 4,100 and 8,200 newtons, greater than that of living carnivorous mammals including the hyena, and equivalent to that of a similarly-sized alligator.
However, this estimate has come into question, as it was based on bite marks rather than on a Deinonychus skull. A 2022 study used a Deinonychus skull for its estimate and calculated .
Gignac and colleagues also noted, however, that bone puncture marks from Deinonychus are relatively rare, and unlike larger theropods with many known puncture marks like Tyrannosaurus, Deinonychus probably did not frequently bite through or eat bone. Instead, they probably used their strong bite force for defense or to capture prey, rather than for feeding.
A 2024 study by Tse, Miller, Pittman and colleagues, focusing on the skull morphology and bite forces of various dromaeosaurids, found that Deinonychus, the largest taxon examined, had a skull well adapted to hunting large vertebrates and delivering powerful bites to prey, alongside Dromaeosaurus, to which it was compared. In this study, Deinonychus showed the most extreme specializations of the dromaeosaurids examined. The same study also revealed that the skull of Deinonychus was less resistant to bite forces than that of Velociraptor, which apparently engaged in more scavenging behavior, suggesting that high bite-force resistance was more common in dromaeosaurid taxa that obtained food through scavenging rather than through active predation. The findings also suggest that Deinonychus may have fed by using neck-driven pullback movements to dismember carcasses, akin to modern varanid lizards.
Limb function
Despite being the most distinctive feature of Deinonychus, the shape and curvature of the sickle claw varies between specimens. The type specimen described by Ostrom in 1969 has a strongly curved sickle claw, while a newer specimen described in 1976 had a claw with much weaker curvature, more similar in profile with the 'normal' claws on the remaining toes. Ostrom suggested that this difference in the size and shape of the sickle claws could be due to individual, sexual, or age-related variation, but admitted he could not be sure.
There is anatomical and trackway evidence that this talon was held up off the ground while the dinosaur walked on the third and fourth toes.
Ostrom suggested that Deinonychus could kick with the sickle claw to cut and slash at its prey. Some researchers even suggested that the talon was used to disembowel large ceratopsian dinosaurs. Other studies have suggested that the sickle claws were not used to slash but rather to deliver small stabs to the victim. In 2005, Manning and colleagues ran tests on a robotic replica that precisely matched the anatomy of Deinonychus and Velociraptor, and used hydraulic rams to make the robot strike a pig carcass. In these tests, the talons made only shallow punctures and could not cut or slash. The authors suggested that the talons would have been more effective in climbing than in dealing killing blows. In 2009, Manning and colleagues undertook additional analysis of dromaeosaur claw function, using a numerical modelling approach to generate a 3D finite element stress/strain map of a Velociraptor hand claw. They went on to quantitatively evaluate the mechanical behavior of dromaeosaur claws and their function, stating that dromaeosaur claws were well adapted for climbing, as they were resistant to forces acting in a single (longitudinal) plane due to gravity.
Ostrom compared Deinonychus to the ostrich and cassowary. He noted that these bird species can inflict serious injury with the large claw on the second toe. The cassowary has claws up to long. Ostrom cited Gilliard (1958) in saying that they can sever an arm or disembowel a man. Kofron (1999 and 2003) studied 241 documented cassowary attacks and found that one human and two dogs had been killed, but found no evidence that cassowaries can disembowel or dismember other animals. Cassowaries use their claws to defend themselves, to attack threatening animals, and in agonistic displays such as the Bowed Threat Display. The seriema also has an enlarged second toe claw, and uses it to tear apart small prey items for swallowing. In 2011, a study suggested that the sickle claw would likely have been used to pin down prey while biting it, rather than as a slashing weapon.
Biomechanical studies by Ken Carpenter in 2002 confirmed that the most likely function of the forelimbs in predation was grasping, as their great length would have permitted a longer reach than in most other theropods. The rather large and elongated coracoid, indicating powerful muscles in the forelimbs, further strengthened this interpretation. Carpenter's biomechanical studies using bone casts also showed that Deinonychus could not fold its arms against its body like a bird ("avian folding"), contrary to what was inferred from the earlier descriptions by Jacques Gauthier in 1985 and Gregory S. Paul in 1988.
Studies by Phil Senter in 2006 indicated that Deinonychus forelimbs could be used not only for grasping, but also for clutching objects towards the chest. If Deinonychus had feathered fingers and wings, the feathers would have limited the range of motion of the forelimbs to some degree. For example, when Deinonychus extended its arm forward, the 'palm' of the hand automatically rotated to an upward-facing position. This would have caused one wing to block the other if both forelimbs were extended at the same time, leading Senter to conclude that clutching objects to the chest would have only been accomplished with one arm at a time. The function of the fingers would also have been limited by feathers; for example, only the third digit of the hand could have been employed in activities such as probing crevices for small prey items, and only in a position perpendicular to the main wing. Alan Gishlick, in a 2001 study of Deinonychus forelimb mechanics, found that even if large wing feathers were present, the grasping ability of the hand would not have been significantly hindered; rather, grasping would have been accomplished perpendicular to the wing, and objects likely would have been held by both hands simultaneously in a "bear hug" fashion, findings which have been supported by the later forelimb studies by Carpenter and Senter. In a 2001 study conducted by Bruce Rothschild and other paleontologists, 43 hand bones and 52 foot bones referred to Deinonychus were examined for signs of stress fracture; none were found. The second phalanx of the second toe in the specimen YPM 5205 has a healed fracture.
Parsons and Parsons have shown that juvenile and sub-adult specimens of Deinonychus display some morphological differences from the adults. For instance, the arms of the younger specimens were proportionally longer than those of the adults, a possible indication of a behavioral difference between young and adults. Another example of this could be the function of the pedal claws. Parsons and Parsons suggested that claw curvature (which Ostrom [1976] had already shown differed between specimens) may have been greater in juvenile Deinonychus, helping them climb trees, and that the claws became straighter as the animal grew older and began to live solely on the ground. This was based on the hypothesis that some small dromaeosaurids used their pedal claws for climbing.
Locomotion
Dromaeosaurids, especially Deinonychus, are often depicted as unusually fast-running animals in the popular media, and Ostrom himself speculated that Deinonychus was fleet-footed in his original description. However, when first described, a complete leg of Deinonychus had not been found, and Ostrom's speculation about the length of the femur (upper leg bone) later proved to have been an overestimate. In a later study, Ostrom noted that the ratio of the femur to the tibia (lower leg bone) is not as important in determining speed as the relative length of the foot and lower leg. In modern fleet-footed birds, like the ostrich, the foot-to-tibia ratio is 0.95. In unusually fast-running dinosaurs, like Struthiomimus, the ratio is 0.68, but in Deinonychus the ratio is 0.48. Ostrom stated that the "only reasonable conclusion" is that Deinonychus, while far from slow-moving, was not particularly fast compared to other dinosaurs, and certainly not as fast as modern flightless birds.
The low foot to lower leg ratio in Deinonychus is due partly to an unusually short metatarsus (upper foot bones). The ratio is actually larger in smaller individuals than in larger ones. Ostrom suggested that the short metatarsus may be related to the function of the sickle claw, and used the fact that it appears to get shorter as individuals aged as support for this. He interpreted all these features—the short second toe with enlarged claw, short metatarsus, etc.—as support for the use of the hind leg as an offensive weapon, where the sickle claw would strike downwards and backwards, and the leg pulled back and down at the same time, slashing and tearing at the prey. Ostrom suggested that the short metatarsus reduced overall stress on the leg bones during such an attack, and interpreted the unusual arrangement of muscle attachments in the Deinonychus leg as support for his idea that a different set of muscles was used in the predatory stroke than in walking or running. Therefore, Ostrom concluded that the legs of Deinonychus represented a balance between running adaptations needed for an agile predator, and stress-reducing features to compensate for its unique foot weapon.
In his 1981 study of Canadian dinosaur footprints, Richard Kool produced rough walking speed estimates based on several trackways made by different species in the Gething Formation of British Columbia. Kool estimated one of these trackways, representing the ichnospecies Irenichnites gracilis (which may have been made by Deinonychus), to have a walking speed of 10.1 kilometers per hour (6 miles per hour).
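Trackway walking-speed estimates of this kind are typically derived from R. McNeill Alexander's (1976) relation between stride length and hip height. A minimal sketch of that relation follows; the stride and hip-height values used here are illustrative assumptions, not Kool's actual measurements:

```python
import math

def alexander_speed(stride_m: float, hip_height_m: float, g: float = 9.81) -> float:
    """Estimate locomotion speed (m/s) from a fossil trackway using
    Alexander's (1976) formula: v = 0.25 * g**0.5 * stride**1.67 * h**-1.17,
    where stride is the stride length and h the hip height, both in meters."""
    return 0.25 * math.sqrt(g) * stride_m**1.67 * hip_height_m**-1.17

# Hypothetical values for a small theropod trackmaker (assumed, not Kool's data):
v = alexander_speed(stride_m=1.0, hip_height_m=0.87)
print(f"{v:.2f} m/s = {v * 3.6:.1f} km/h")
```

Multiplying the result by 3.6 converts meters per second to kilometers per hour, allowing direct comparison with published trackway estimates.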
In a 2015 paper, it was reported after further analysis of immature fossils that the open and mobile nature of the shoulder joint might have meant that young Deinonychus were capable of some form of flight.
Eggs
The identification, in 2000, of a probable Deinonychus egg associated with one of the original specimens allowed comparison with other theropod dinosaurs in terms of egg structure, nesting, and reproduction. In their 2006 examination of the specimen, Grellet-Tinner and Makovicky examined the possibility that the dromaeosaurid had been feeding on the egg, or that the egg fragments had been associated with the Deinonychus skeleton by coincidence. They dismissed the idea that the egg had been a meal for the theropod, noting that the fragments were sandwiched between the belly ribs and forelimb bones, making it impossible that they represented contents of the animal's stomach. In addition, the manner in which the egg had been crushed and fragmented indicated that it had been intact at the time of burial, and was broken by the fossilization process. The idea that the egg was randomly associated with the dinosaur was also found to be unlikely; the bones surrounding the egg had not been scattered or disarticulated, but remained fairly intact relative to their positions in life, indicating that the area around and including the egg was not disturbed during preservation. The fact that these bones were belly ribs (gastralia), which are very rarely found articulated, supported this interpretation. All the evidence, according to Grellet-Tinner and Makovicky, indicates that the egg was intact beneath the body of the Deinonychus when it was buried. It is possible that this represents brooding or nesting behavior in Deinonychus similar to that seen in the related troodontids and oviraptorids, or that the egg was in fact inside the oviduct when the animal died.
Examination of the Deinonychus egg's microstructure confirms that it belonged to a theropod, since it shares characteristics with other known theropod eggs and shows dissimilarities with ornithischian and sauropod eggs. Compared to other maniraptoran theropods, the egg of Deinonychus is more similar to those of oviraptorids than to those of troodontids, despite studies that show the latter are more closely related to dromaeosaurids like Deinonychus. While the egg was too badly crushed to accurately determine its size, Grellet-Tinner and Makovicky estimated a diameter of about based on the width of the pelvic canal through which the egg had to have passed. This size is similar to the diameter of the largest Citipati (an oviraptorid) eggs; Citipati and Deinonychus also shared the same overall body size, supporting this estimate. Additionally, the thicknesses of Citipati and Deinonychus eggshells are almost identical, and since shell thickness correlates with egg volume, this further supports the idea that the eggs of these two animals were about the same size.
A study published in November 2018 by Norell, Yang and Wiemann et al. indicates that Deinonychus laid blue eggs, likely to camouflage them in the open nests it created. The study also suggests that Deinonychus and other dinosaurs that built open nests represent an origin of color in modern bird eggs, as an adaptation both for egg recognition and for camouflage against predators.
Social behavior
Whether Deinonychus engaged in cooperative pack hunting is still debated. Deinonychus teeth found in association with fossils of the ornithopod dinosaur Tenontosaurus are quite common in the Cloverly Formation. Two quarries have been discovered that preserve fairly complete Deinonychus fossils near Tenontosaurus fossils. The first, the Yale quarry in the Cloverly of Montana, includes numerous teeth, four adult Deinonychus and one juvenile. The association of this number of Deinonychus skeletons in a single quarry suggests that Deinonychus may have fed on Tenontosaurus, and perhaps hunted it. Ostrom and Maxwell have even used this information to speculate that Deinonychus might have lived and hunted in packs. The second such quarry is from the Antlers Formation of Oklahoma. The site contains six partial skeletons of Tenontosaurus of various sizes, along with one partial skeleton and many teeth of Deinonychus. One tenontosaur humerus even bears what might be Deinonychus tooth marks. Brinkman et al. (1998) point out that Deinonychus had an adult mass of , whereas adult tenontosaurs were 1–4 metric tons. A solitary Deinonychus could not kill an adult tenontosaur, suggesting that pack hunting is possible.
A 2007 study by Roach and Brinkman has called into question the cooperative pack hunting behavior of Deinonychus, based on what is known of modern carnivore hunting and the taphonomy of tenontosaur sites. Modern archosaurs (birds and crocodiles) and Komodo dragons typically display little cooperative hunting; instead, they are usually either solitary hunters, or are drawn to previously killed carcasses, where much conflict occurs between individuals of the same species. For example, in situations where groups of Komodo dragons are eating together, the largest individuals eat first and will attack smaller Komodos that attempt to feed; if the smaller animal is killed, it is cannibalized. When this information is applied to the tenontosaur sites, it appears that what is found is consistent with Deinonychus having a Komodo or crocodile-like feeding strategy. Deinonychus skeletal remains found at these sites are from subadults, with missing parts consistent with having been eaten by other Deinonychus.
On the other hand, a paper by Li et al. describes track sites with similar foot spacing and parallel trackways, implying gregarious pack behavior rather than uncoordinated feeding. And contrary to the claim that crocodilians do not hunt cooperatively, they have in fact been observed doing so, suggesting that infighting, competition for food, and cannibalism do not necessarily rule out cooperative feeding.
A 2020 study of carbon isotopes in Deinonychus teeth suggests precociality in the genus. The isotopes found in specimens of different ages indicate that adults and juveniles had different diets. The data suggest that Deinonychus had a more typical archosaurian set of life stages, with any parental feeding of young ending before the young were large enough to share the typical adult diet. The examinations were also stated to indicate a lack of mammal-like pack hunting. Despite this, the authors noted that this does not exclude the possibility that Deinonychus was gregarious and practiced ratite-like parental care, rather than having a completely agonistic relationship as seen in Komodo dragons, given the lack of spatial separation between juveniles and adults.
Paleoenvironment
Geological evidence suggests that Deinonychus inhabited a floodplain or swamplike habitat. The paleoenvironment of both the upper Cloverly Formation and the Antlers Formation, in which remains of Deinonychus have been found, consisted of tropical or sub-tropical forests, deltas and lagoons, perhaps similar to the environment of modern-day Louisiana.
Other animals Deinonychus shared its world with include herbivorous dinosaurs such as the nodosaurid Sauropelta and the ornithopods Zephyrosaurus and Tenontosaurus. In Oklahoma, the ecosystem of Deinonychus also included the large theropod Acrocanthosaurus, the huge sauropod Sauroposeidon, the crocodilians Goniopholis and Paluxysuchus, and the gar Lepisosteus. If the teeth found in Maryland are those of Deinonychus, then its contemporaries would include the sauropod Astrodon and the poorly-known nodosaur Priconodon. The middle portion of the Cloverly Formation ranges in age from 115 ± 10 Ma near the base to 108.5 ± 0.2 Ma near the top.
Cultural significance
Deinonychus was featured prominently in Harry Adam Knight's novel Carnosaur and its film adaptation, and in Michael Crichton's novels Jurassic Park and The Lost World and their film adaptations, directed by Steven Spielberg. Crichton ultimately chose to use the name Velociraptor for these dinosaurs rather than Deinonychus. Crichton had met with John Ostrom several times during the writing process to discuss details of the possible range of behaviors and life appearance of Deinonychus. At one point Crichton apologetically told Ostrom that he had decided to use the name Velociraptor in place of Deinonychus for his book, because he felt the former name was "more dramatic". Despite this, according to Ostrom, Crichton stated that the Velociraptor of the novel was based on Deinonychus in almost every detail, and that only the name had been changed.
The Jurassic Park filmmakers followed suit, designing the film's models based almost entirely on Deinonychus rather than the actual Velociraptor, and they reportedly requested all of Ostrom's published papers on Deinonychus during production. As a result, they portrayed the film's dinosaurs with the size, proportions, and snout shape of Deinonychus. Utahraptor is commonly considered to be a close match to the film's dinosaurs, which are much larger than either Deinonychus or Velociraptor were in life.
Rufous hummingbird
The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying during their migratory transits. It is one of nine species in the genus Selasphorus.
Taxonomy
The rufous hummingbird was formally described in 1788 by the German naturalist Johann Friedrich Gmelin in his revised and expanded edition of Carl Linnaeus's Systema Naturae. He placed it with all the other hummingbirds in the genus Trochilus and coined the binomial name Trochilus rufus. Gmelin based his description on the ruff-necked hummingbird described by John Latham in 1782 and the ruffed honeysucker described by Thomas Pennant in 1785.
The type locality given by Gmelin was Nootka Sound on the west coast of Vancouver Island in western Canada, although breeding was estimated to occur in northwestern North America and wintering in westcentral Mexico. The rufous hummingbird is now placed with eight other species in the genus Selasphorus that was introduced in 1832 by the English naturalist William Swainson. The genus name combines the Ancient Greek selas meaning "light" or "flame" with -phoros meaning "-carrying". The specific epithet rufus is the Latin word for "red". The species is considered as monotypic: no subspecies are recognized.
Description
The adult male has a white breast, rufous face, flanks and tail and an iridescent orange-red throat patch or gorget. Some males have some green on their back and/or crown. The female has green, white, and some iridescent orange feathers in the center of the throat, and a dark tail with white tips and rufous base.
The female is slightly larger than the male. Females and the rare green-backed males are extremely difficult to distinguish from Allen's hummingbird. The typical "notched" shape of the second rectrix (R2) is considered an important field mark for distinguishing the adult male rufous hummingbird from the adult male Allen's hummingbird. This is a typically sized hummingbird, and thus a very small bird: it weighs , measures long and spans across the wings.
Distribution and habitat
Western rufous hummingbirds migrate through the Rocky Mountains and nearby lowlands from May to September to take advantage of the wildflower season. They may stay in one local region for the entire summer, in which case the migrants (like breeding birds) often aggressively take over and defend feeding locations. Most individuals winter in wooded areas in the Mexican state of Guerrero, traveling over by an overland route from their nearest summer home – a prodigious journey for a bird weighing only .
Adult male rufous hummingbirds tend to migrate slightly earlier than females or young. Since juveniles and females are essentially indistinguishable from Allen's hummingbird, unless confirmed by close inspection, eastern rufous migrants may be classified as "rufous/Allen's hummingbirds".
Behavior and ecology
Food and feeding
They feed on nectar from flowers using a long extendable tongue or catch insects on the wing. These birds require frequent feeding while active during the day and become torpid at night to conserve energy. Because of their small size, they are vulnerable to insect-eating birds and animals.
Hovering and sexual dimorphism
A study that used digital imaging velocimetry to look at wing movements found that the rufous hummingbird supports its body weight during hovering primarily by wing downstrokes (75% of lift) rather than by upstrokes (25% of lift). When hovering during fasting, rufous hummingbirds oxidize fatty acids to support metabolism and food energy requirements, but can rapidly switch to carbohydrate metabolism (within 40 minutes) after feeding on flower nectar.
Both males and females are territorial; however, they defend different types of territories. The more aggressive males fight to defend areas with dense flowers, pushing females into areas with more sparsely populated flowers. Males generally have shorter wings than females, therefore their metabolic cost for hovering is higher. This allows males to beat their wings at high frequencies, giving them the ability to chase and attack other birds to defend their territory. The metabolic cost of short wings is compensated for by the fact that these males do not need to waste energy foraging for food, because their defended territory provides plenty of sustenance. Females on the other hand are not given access to the high concentration food sources, because the males fight them off. Therefore, females generally defend larger territories, where flowers are more sparsely populated, forcing them to fly farther between food sources. The metabolic cost of flying farther is compensated for with longer wings providing more efficient flight for females. The differences in wing length demonstrate a distinct sexual dimorphism, allowing each sex to best exploit resources in an area.
Breeding
Their primary breeding habitats are open areas, mountainsides and forest edges in western North America from southern Alaska through British Columbia and the Pacific Northwest to California, nesting further north (Alaska) than any other hummingbird. The female builds a nest in a protected location in a shrub or conifer. Males are promiscuous, mating with several females.
Conservation status
In 2018, the rufous hummingbird was uplisted from least concern to near threatened on the IUCN Red List, on the basis that due to its reliance on insect prey during the wintering season, it will be heavily affected by the global decline in insect populations due to pesticides and intensified agriculture. Due to climate change, many flowers that the rufous hummingbird feeds on during the breeding season have started blooming two weeks prior to the birds' arrival to their breeding locations, which may lead to rufous hummingbirds arriving too late to feed on them.
Wallaroo
Wallaroo is a common name for several species of moderately large macropods, intermediate in size between the kangaroos and the wallabies. The word "wallaroo" is from the Dharug walaru, with spelling influenced by the words "kangaroo" and "wallaby".
Description
Wallaroos are typically distinct species from kangaroos and wallabies. An exception is the antilopine wallaroo, which is commonly known as an antilopine kangaroo when large, an antilopine wallaby when small, or an antilopine wallaroo when of intermediate size.
Species
Wallaroo may refer to one of several species in the genus Osphranter:
The common wallaroo or wallaroo (Osphranter robustus) is the best-known species. There are four subspecies of the common wallaroo: the eastern wallaroo (O. r. robustus) and the euro (O. r. erubescens), which are both widespread, and two of more restricted range, one from Barrow Island (the Barrow Island wallaroo (O. r. isabellinus)), the other from the Kimberley region (the Kimberley wallaroo (O. r. woodwardi)).
The black wallaroo (O. bernardus) occupies an area of steep, rocky ground in Arnhem Land. At around in length (excluding tail) it is the smallest wallaroo and the most heavily built. Males weigh , females about . Because it is very wary and is found only in a small area of remote and very rugged country, it is little-known.
The antilopine wallaroo (O. antilopinus), also known as the antilopine kangaroo or the antilopine wallaby, is a creature of the grassy plains and woodlands and is gregarious, unlike other wallaroos which are solitary.
Amniote
Amniotes are tetrapod vertebrate animals belonging to the clade Amniota, a large group that comprises the vast majority of living terrestrial and semiaquatic vertebrates. Amniotes evolved from amphibious stem-tetrapod ancestors during the Carboniferous period. Amniota is defined as the smallest crown clade containing humans, the Greek tortoise, and the Nile crocodile.
Amniotes are distinguished from the other living tetrapod clade — the non-amniote lissamphibians (frogs/toads, salamanders/newts and caecilians) — by the development of three extraembryonic membranes (amnion for embryonic protection, chorion for gas exchange, and allantois for metabolic waste disposal or storage), thicker and keratinized skin, costal respiration (breathing by expanding/constricting the rib cage), the presence of adrenocortical and chromaffin tissues as a discrete pair of glands near their kidneys, more complex kidneys, the presence of an astragalus for better extremity range of motion, the diminished role of skin breathing, and the complete loss of metamorphosis, gills, and lateral lines.
The presence of an amniotic buffer, of a water-impermeable skin, and of a robust, air-breathing, respiratory system, allow amniotes to live on land as true terrestrial animals. Amniotes have the ability to procreate without water bodies. Because the amnion and the fluid it secretes shields the embryo from environmental fluctuations, amniotes can reproduce on dry land by either laying shelled eggs (reptiles, birds and monotremes) or nurturing fertilized eggs within the mother (marsupial and placental mammals). This distinguishes amniotes from anamniotes (fish and amphibians) that have to spawn in aquatic environments. Most amniotes still require regular access to drinking water for rehydration, like the semiaquatic amphibians do.
They have better homeostasis in drier environments, and more efficient non-aquatic gas exchange to power terrestrial locomotion, which is facilitated by their astragalus.
Basal amniotes resembled small lizards and evolved from semiaquatic reptiliomorphs during the Carboniferous period. After the Carboniferous rainforest collapse, amniotes spread around Earth's land and became the dominant land vertebrates.
They almost immediately diverged into two groups, namely the sauropsids (including all reptiles and birds) and synapsids (including mammals and extinct ancestors like "pelycosaurs" and therapsids). Among the earliest known crown group amniotes, the oldest known sauropsid is Hylonomus and the oldest known synapsid is Asaphestera, both of which are from Nova Scotia during the Bashkirian age of the Late Carboniferous around .
This basal divergence within Amniota has also been dated by molecular studies at 310–329 Ma, or 312–330 Ma, and by a fossilized birth–death process study at 322–340 Ma.
Etymology
The term amniote comes from the amnion, which derives from Greek ἀμνίον (amnion), which denoted the membrane that surrounds a fetus. The term originally described a bowl in which the blood of sacrificed animals was caught, and derived from ἀμνός (amnos), meaning "lamb".
Description
Zoologists characterize amniotes in part by embryonic development that includes the formation of several extensive membranes, the amnion, chorion, and allantois. Amniotes develop directly into a (typically) terrestrial form with limbs and a thick stratified epithelium (rather than first entering a feeding larval tadpole stage followed by metamorphosis, as amphibians do). In amniotes, the transition from a two-layered periderm to a cornified epithelium is triggered by thyroid hormone during embryonic development, rather than by metamorphosis. The unique embryonic features of amniotes may reflect specializations for eggs to survive drier environments; or the increase in size and yolk content of eggs may have permitted, and coevolved with, direct development of the embryo to a large size.
Adaptation for terrestrial living
Features of amniotes evolved for survival on land include a sturdy but porous leathery or hard eggshell and an allantois that facilitates respiration while providing a reservoir for disposal of wastes. Their kidneys (metanephros) and large intestines are also well-suited to water retention. Most mammals do not lay eggs, but corresponding structures develop inside the placenta.
The ancestors of true amniotes, such as Casineria kiddi, which lived about 340 million years ago, evolved from amphibian reptiliomorphs and resembled small lizards. At the late Devonian mass extinction (360 million years ago), all known tetrapods were essentially aquatic and fish-like. Because the reptiliomorphs were already established 20 million years later when all their fishlike relatives were extinct, it appears they separated from the other tetrapods somewhere during Romer's gap, when the adult tetrapods became fully terrestrial (some forms would later become secondarily aquatic). The modest-sized ancestors of the amniotes laid their eggs in moist places, such as depressions under fallen logs or other suitable places in the Carboniferous swamps and forests; and dry conditions probably do not account for the emergence of the soft shell. Indeed, many modern-day amniotes require moisture to keep their eggs from desiccating. Although some modern amphibians lay eggs on land, all amphibians lack advanced traits like an amnion.
The amniotic egg formed through a series of evolutionary steps. After internal fertilization and the habit of laying eggs in terrestrial environments became a reproductive strategy amongst the amniote ancestors, the next major breakthrough appears to have involved a gradual replacement of the gelatinous coating covering the amphibian egg with a fibrous shell membrane. This allowed the egg to increase both in size and in rate of gas exchange, permitting a larger, metabolically more active embryo to reach full development before hatching. Further developments, like extraembryonic membranes (amnion, chorion, and allantois) and a calcified shell, were not essential and probably evolved later. It has been suggested that shelled terrestrial eggs without extraembryonic membranes could still not have been more than about 1 cm (0.4 in) in diameter because of diffusion problems, like the inability to get rid of carbon dioxide if the egg were larger. The combination of small eggs and the absence of a larval stage, in which post-hatching growth occurs in anamniotic tetrapods before they turn into juveniles, would limit the size of the adults. This is supported by the fact that extant squamate species that lay eggs less than 1 cm in diameter have adults whose snout–vent length is less than 10 cm. The only way for the eggs to increase in size would be to develop new internal structures specialized for respiration and for waste products. As this happened, it would also affect how much the juveniles could grow before they reached adulthood.
A similar pattern can be seen in modern amphibians. Frogs that have evolved terrestrial reproduction and direct development have both smaller adults and fewer and larger eggs compared to their relatives that still reproduce in water.
The egg membranes
Fish and amphibian eggs have only one inner membrane, the embryonic membrane. Evolution of the amniote egg required increased exchange of gases and wastes between the embryo and the atmosphere. Structures to permit these traits allowed further adaptation that increased the feasible size of amniote eggs and enabled breeding in progressively drier habitats. The increased size of eggs permitted increase in size of offspring and consequently of adults. Further growth for the latter, however, was limited by their position in the terrestrial food chain, which was restricted to level three and below, with only invertebrates occupying level two. Amniotes would eventually experience adaptive radiations when some species evolved the ability to digest plants and new ecological niches opened up, permitting larger body size for herbivores, omnivores and predators.
Amniote traits
While the early amniotes resembled their amphibian ancestors in many respects, a key difference was the lack of an otic notch at the back margin of the skull roof. In their ancestors, this notch held a spiracle, an unnecessary structure in an animal without an aquatic larval stage. There are three main lines of amniotes, which may be distinguished by the structure of the skull and in particular the number of holes behind each eye. In anapsids, the ancestral condition, there are none; in synapsids (mammals and their extinct relatives) there is one; and in diapsids (including birds, crocodilians, squamates, and tuataras), there are two. Turtles have secondarily lost their fenestrae, and were traditionally classified as anapsids because of this. Molecular testing firmly places them in the diapsid line of descent.
Post-cranial remains of amniotes can be identified from their Labyrinthodont ancestors by their having at least two pairs of sacral ribs, a sternum in the pectoral girdle (some amniotes have lost it) and an astragalus bone in the ankle.
Definition and classification
Amniota was first formally described by the embryologist Ernst Haeckel in 1866, based on the presence of the amnion, hence the name. A problem with this definition is that the trait (apomorphy) in question does not fossilize, and the status of fossil forms has to be inferred from other traits.
Traditional classification
Older classifications of the amniotes traditionally recognised three classes based on major traits and physiology:
Class Reptilia (reptiles)
Subclass Anapsida ("proto-reptiles", possibly including turtles)
Subclass Diapsida (majority of reptiles, progenitors of birds)
Subclass Euryapsida (plesiosaurs, placodonts, and ichthyosaurs)
Subclass Synapsida (stem or proto-mammals, progenitors of mammals)
Class Aves (birds)
Subclass Archaeornithes (reptile-like birds, progenitors of all other birds)
Subclass Enantiornithes (early birds with an alternative shoulder joint)
Subclass Hesperornithes (toothed aquatic flightless birds)
Subclass Ichthyornithes (toothed, but otherwise modern birds)
Subclass Neornithes (all living birds)
Class Mammalia (mammals)
Subclass Prototheria (Monotremata, egg-laying mammals)
Subclass Theria (metatheria (such as marsupials) and eutheria (such as placental mammals))
This rather orderly scheme is the one most commonly found in popular and basic scientific works. It has come under critique from cladistics, as the class Reptilia is paraphyletic—it has given rise to two other classes not included in Reptilia.
Most species described as microsaurs, formerly grouped in the extinct and prehistoric amphibian group of lepospondyls, have been placed in the newer clade Recumbirostra, and share many anatomical features with amniotes, indicating that they were amniotes themselves.
Classification into monophyletic taxa
A different approach is adopted by writers who reject paraphyletic groupings. One such classification, by Michael Benton, is presented in simplified form below.
Series Amniota
(Class) Clade Synapsida
A series of unassigned families, corresponding to Pelycosauria †
(Order) Clade Therapsida
Class Mammalia – mammals
(Class) Clade Sauropsida
Subclass Parareptilia †
Family Mesosauridae †
Family Millerettidae †
Family Bolosauridae †
Family Procolophonidae †
Order Pareiasauromorpha
Family Nycteroleteridae †
Family Pareiasauridae †
(Subclass) Clade Eureptilia
Family Captorhinidae †
(Infraclass) Clade Diapsida
Family Araeoscelididae †
Family Weigeltisauridae †
Order Younginiformes †
(Infraclass) Clade Neodiapsida
Order Testudinata
Suborder Testudines – turtles
Infraclass Lepidosauromorpha
Unnamed infrasubclass
Infraclass Ichthyosauria †
Order Thalattosauria †
Superorder Lepidosauriformes
Order Sphenodontida – tuatara
Order Squamata – lizards and snakes
Infrasubclass Sauropterygia †
Order Placodontia †
Order Eosauropterygia †
Suborder Pachypleurosauria †
Suborder Nothosauria †
Order Plesiosauria †
(Infraclass) Clade Archosauromorpha
Family Trilophosauridae †
Order Rhynchosauria †
Order Protorosauria †
Division Archosauriformes
Subdivision Archosauria
Infradivision Crurotarsi
Order Phytosauria †
Family Ornithosuchidae †
Family Stagonolepididae †
Family Rauisuchidae †
Superfamily Poposauroidea †
Superorder Crocodylomorpha
Order Crocodylia – crocodilians
Infradivision Avemetatarsalia
Infrasubdivision Ornithodira
Order Pterosauria †
Family Lagerpetidae †
Family Silesauridae †
(Superorder) Clade Dinosauria – dinosaurs
Order Ornithischia †
(Order) Clade Saurischia
(Suborder) Clade Theropoda – theropods
Class Aves – birds
Phylogenetic classification
With the advent of cladistics, other researchers have attempted to establish new classes, based on phylogeny, but disregarding the physiological and anatomical unity of the groups. Unlike Benton, for example, Jacques Gauthier and colleagues forwarded a definition of Amniota in 1988 as "the most recent common ancestor of extant mammals and reptiles, and all its descendants". As Gauthier makes use of a crown group definition, Amniota has a slightly different content than the biological amniotes as defined by an apomorphy. Though traditionally considered reptiliomorphs, some recent research has recovered diadectomorphs as the sister group to Synapsida within Amniota, based on inner ear anatomy.
Cladogram
The cladogram presented here illustrates the phylogeny (family tree) of amniotes, and follows a simplified version of the relationships found by Laurin & Reisz (1995), with the exception of turtles, which more recent morphological and molecular phylogenetic studies placed firmly within diapsids. The cladogram covers the group as defined under Gauthier's definition.
Following studies in 2022 and 2023, Drepanosauromorpha is placed sister to Weigeltisauridae (Coelurosauravus) in Avicephala, based on Senter (2004).
Cartesian closed category
In category theory, a category is Cartesian closed if, roughly speaking, any morphism defined on a product of two objects can be naturally identified with a morphism defined on one of the factors. These categories are particularly important in mathematical logic and the theory of programming, in that their internal language is the simply typed lambda calculus. They are generalized by closed monoidal categories, whose internal language, linear type systems, are suitable for both quantum and classical computation.
Etymology
Named after René Descartes (1596–1650), French philosopher, mathematician, and scientist, whose formulation of analytic geometry gave rise to the concept of Cartesian product, which was later generalized to the notion of categorical product.
Definition
The category C is called Cartesian closed iff it satisfies the following three properties:
It has a terminal object.
Any two objects X and Y of C have a product X × Y in C.
Any two objects Y and Z of C have an exponential Z^Y in C.
The first two conditions can be combined to the single requirement that any finite (possibly empty) family of objects of C admit a product in C, because of the natural associativity of the categorical product and because the empty product in a category is the terminal object of that category.
The third condition is equivalent to the requirement that the functor – × Y (i.e. the functor from C to C that maps objects X to X × Y and morphisms φ to φ × id_Y) has a right adjoint, usually denoted (–)^Y, for all objects Y in C.
For locally small categories, this can be expressed by the existence of a bijection between the hom-sets
Hom(X × Y, Z) ≅ Hom(X, Z^Y)
which is natural in X, Y, and Z.
Take care to note that a Cartesian closed category need not have finite limits; only finite products are guaranteed.
If a category has the property that all its slice categories are Cartesian closed, then it is called locally Cartesian closed. Note that if C is locally Cartesian closed, it need not actually be Cartesian closed; that happens if and only if C has a terminal object.
Basic constructions
Evaluation
For each object Y, the counit of the exponential adjunction is a natural transformation
ev : Z^Y × Y → Z
called the (internal) evaluation map. More generally, we can construct the partial application map as the composite
Z^(X×Y) × X ≅ (Z^Y)^X × X → Z^Y.
In the particular case of the category Set, these reduce to the ordinary operations: ev(f, y) = f(y), and partial application sends (f, x) to the function y ↦ f(x, y).
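In Set these two maps can be written out directly. The sketch below is illustrative only; the names `ev` and `papply` are ours, not a standard API:

```python
# Evaluation and partial application in Set, where Z^Y is the set of
# functions from Y to Z (function names ev/papply are illustrative).

def ev(f, y):
    """Evaluation map ev : Z^Y x Y -> Z."""
    return f(y)

def papply(f, x):
    """Partial application Z^(X x Y) x X -> Z^Y: fix the first argument."""
    return lambda y: f(x, y)

add = lambda x, y: x + y
inc = papply(add, 1)       # a function Y -> Z with the first argument fixed to 1
print(ev(inc, 41))         # evaluates inc at 41, giving 42
```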
Composition
Evaluating the exponential in one argument at a morphism p : X → Y gives morphisms
p^Z : X^Z → Y^Z and Z^p : Z^Y → Z^X
corresponding to the operation of composition with p. Alternate notations for the operation p^Z include p_* and p∘(–). Alternate notations for the operation Z^p include p^* and (–)∘p.
Evaluation maps can be chained as
Z^Y × Y^X × X → Z^Y × Y → Z;
the corresponding arrow under the exponential adjunction
c : Z^Y × Y^X → Z^X
is called the (internal) composition map.
In the particular case of the category Set, this is the ordinary composition operation: c(g, f) = g ∘ f.
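In Set the internal composition map is just ordinary function composition; a minimal sketch (the name `compose` is ours):

```python
def compose(g, f):
    """Internal composition map c : Z^Y x Y^X -> Z^X in Set: c(g, f) = g after f."""
    return lambda x: g(f(x))

double_then_str = compose(str, lambda n: 2 * n)
print(double_then_str(21))  # "42"
```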
Sections
For a morphism p : X → Y, suppose the pullback of p^Y : X^Y → Y^Y along the arrow 1 → Y^Y corresponding to the identity on Y exists; this pullback square defines the subobject of X^Y corresponding to maps whose composite with p is the identity. The resulting object Γ_Y(p) is called the object of sections of p. It is often abbreviated as Γ_Y(X).
If Γ_Y(p) exists for every morphism p with codomain Y, then it can be assembled into a functor Γ_Y : C/Y → C on the slice category, which is right adjoint to a variant of the product functor, namely the functor sending an object X of C to the projection X × Y → Y: Hom_{C/Y}(X × Y, p) ≅ Hom_C(X, Γ_Y(p)).
The exponential by Y can be expressed in terms of sections: Z^Y ≅ Γ_Y(Z × Y), where Z × Y is regarded as an object over Y via the projection.
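For finite sets the object of sections can be computed by brute force: Γ_Y(p) is the set of maps s : Y → X with p ∘ s = id_Y. A hedged sketch (the helper name `sections` is ours):

```python
from itertools import product

def sections(p, X, Y):
    """All s : Y -> X (as dicts) with p(s(y)) == y for every y in Y."""
    found = []
    for choice in product(X, repeat=len(Y)):
        s = dict(zip(Y, choice))
        if all(p(s[y]) == y for y in Y):
            found.append(s)
    return found

# p collapses 'a' and 'b' onto 0 and sends 'c' to 1.
X, Y = ['a', 'b', 'c'], [0, 1]
p = {'a': 0, 'b': 0, 'c': 1}.get
print(len(sections(p, X, Y)))  # 2: choose 'a' or 'b' over 0, 'c' is forced over 1
```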
Examples
Examples of Cartesian closed categories include:
The category Set of all sets, with functions as morphisms, is Cartesian closed. The product X × Y is the Cartesian product of X and Y, and Z^Y is the set of all functions from Y to Z. The adjointness is expressed by the following fact: the function f : X × Y → Z is naturally identified with the curried function g : X → Z^Y defined by g(x)(y) = f(x,y) for all x in X and y in Y.
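In Set this bijection is exactly currying; a small Python sketch (the names `curry` and `uncurry` are ours):

```python
def curry(f):
    """hom(X x Y, Z) -> hom(X, Z^Y): returns g with g(x)(y) = f(x, y)."""
    return lambda x: (lambda y: f(x, y))

def uncurry(g):
    """The inverse direction of the bijection."""
    return lambda x, y: g(x)(y)

f = lambda x, y: 10 * x + y
g = curry(f)
print(g(4)(2), uncurry(g)(4, 2))  # 42 42
```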
The subcategory of finite sets, with functions as morphisms, is also Cartesian closed for the same reason.
If G is a group, then the category of all G-sets is Cartesian closed. If Y and Z are two G-sets, then Z^Y is the set of all functions from Y to Z with G-action defined by (g.F)(y) = g.F(g⁻¹.y) for all g in G, F : Y → Z and y in Y.
The subcategory of finite G-sets is also Cartesian closed.
The category Cat of all small categories (with functors as morphisms) is Cartesian closed; the exponential C^D is given by the functor category consisting of all functors from D to C, with natural transformations as morphisms.
If C is a small category, then the functor category Set^C consisting of all covariant functors from C into the category of sets, with natural transformations as morphisms, is Cartesian closed. If F and G are two functors from C to Set, then the exponential F^G is the functor whose value on the object X of C is given by the set of all natural transformations from Hom(X, –) × G to F.
The earlier example of G-sets can be seen as a special case of functor categories: every group can be considered as a one-object category, and G-sets are nothing but functors from this category to Set
The category of all directed graphs is Cartesian closed; this is a functor category as explained under functor category.
In particular, the category of simplicial sets (which are functors X : Δ^op → Set) is Cartesian closed.
Even more generally, every elementary topos is Cartesian closed.
In algebraic topology, Cartesian closed categories are particularly easy to work with. Neither the category of topological spaces with continuous maps nor the category of smooth manifolds with smooth maps is Cartesian closed. Substitute categories have therefore been considered: the category of compactly generated Hausdorff spaces is Cartesian closed, as is the category of Frölicher spaces.
In order theory, complete partial orders (cpos) have a natural topology, the Scott topology, whose continuous maps do form a Cartesian closed category (that is, the objects are the cpos, and the morphisms are the Scott continuous maps). Both currying and apply are continuous functions in the Scott topology, and currying, together with apply, provide the adjoint.
A Heyting algebra is a Cartesian closed (bounded) lattice. An important example arises from topological spaces. If X is a topological space, then the open sets in X form the objects of a category O(X) for which there is a unique morphism from U to V if U is a subset of V and no morphism otherwise. This poset is a Cartesian closed category: the "product" of U and V is the intersection of U and V, and the exponential U^V is the interior of (X ∖ V) ∪ U.
A category with a zero object is Cartesian closed if and only if it is equivalent to a category with only one object and one identity morphism. Indeed, if 0 is an initial object and 1 is a final object and we have 0 ≅ 1, then Hom(X, Y) ≅ Hom(1 × X, Y) ≅ Hom(0 × X, Y) ≅ Hom(0, Y^X), which has only one element because 0 is initial.
In particular, any non-trivial category with a zero object, such as an abelian category, is not Cartesian closed. So the category of modules over a ring is not Cartesian closed. However, the functor tensor product with a fixed module does have a right adjoint. The tensor product is not a categorical product, so this does not contradict the above. We obtain instead that the category of modules is monoidal closed.
Examples of locally Cartesian closed categories include:
Every elementary topos is locally Cartesian closed. This example includes Set, FinSet, G-sets for a group G, as well as SetC for small categories C.
The category LH whose objects are topological spaces and whose morphisms are local homeomorphisms is locally Cartesian closed, since LH/X is equivalent to the category of sheaves Sh(X). However, LH does not have a terminal object, and thus is not Cartesian closed.
If C has pullbacks and for every arrow p : X → Y, the functor p* : C/Y → C/X given by taking pullbacks has a right adjoint, then C is locally Cartesian closed.
If C is locally Cartesian closed, then all of its slice categories C/X are also locally Cartesian closed.
Non-examples of locally Cartesian closed categories include:
Cat is not locally Cartesian closed.
Applications
In Cartesian closed categories, a "function of two variables" (a morphism f : X×Y → Z) can always be represented as a "function of one variable" (the morphism λf : X → ZY). In computer science applications, this is known as currying; it has led to the realization that simply-typed lambda calculus can be interpreted in any Cartesian closed category.
The Curry–Howard–Lambek correspondence provides a deep isomorphism between intuitionistic logic, simply-typed lambda calculus and Cartesian closed categories.
Certain Cartesian closed categories, the topoi, have been proposed as a general setting for mathematics, instead of traditional set theory.
Computer scientist John Backus has advocated a variable-free notation, or Function-level programming, which in retrospect bears some similarity to the internal language of Cartesian closed categories. CAML is more consciously modelled on Cartesian closed categories.
Dependent sum and product
Let C be a locally Cartesian closed category. Then C has all pullbacks, because the pullback of two arrows with codomain Z is given by the product in C/Z.
For every arrow p : X → Y, let P denote the corresponding object of C/Y. Taking pullbacks along p gives a functor p* : C/Y → C/X which has both a left and a right adjoint.
The left adjoint is called the dependent sum Σ_p and is given by composition with p: Σ_p(q) = p ∘ q.
The right adjoint is called the dependent product Π_p.
The exponential by P in C/Y can be expressed in terms of the dependent product by the formula Q^P ≅ Π_p(p*Q).
The reason for these names is that, when interpreting P as a dependent type y : Y ⊢ P(y), the functors Σ_p and Π_p correspond to the type formations Σ_{y:Y} P(y) and Π_{y:Y} P(y) respectively.
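In Set a map p : X → Y is equivalently the Y-indexed family of its fibers, and the dependent sum and product taken along the unique map Y → 1 are then the disjoint union of the fibers and the set of choice functions (sections), respectively. A hedged finite-set sketch:

```python
from itertools import product

# Fibers of some p : X -> Y, indexed by Y = {0, 1}.
family = {0: ['a', 'b'], 1: ['c']}

# Dependent sum: disjoint union of the fibers (tag each element by its index).
sigma = [(y, x) for y, fiber in family.items() for x in fiber]

# Dependent product: one element chosen from each fiber, i.e. a section.
pi = [dict(zip(family, choice)) for choice in product(*family.values())]

print(len(sigma), len(pi))  # 3 2
```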
Equational theory
In every Cartesian closed category (using exponential notation), (X^Y)^Z and (X^Z)^Y are isomorphic for all objects X, Y and Z. We write this as the "equation"
(x^y)^z = (x^z)^y.
One may ask what other such equations are valid in all Cartesian closed categories. It turns out that all of them follow logically from the following axioms:
x × (y × z) = (x × y) × z
x × y = y × x
x × 1 = x (here 1 denotes the terminal object of C)
1^x = 1
x^1 = x
(x × y)^z = x^z × y^z
(x^y)^z = x^(y × z)
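Restricting to finite sets, taking cardinalities turns these isomorphisms into ordinary arithmetic identities (|x × y| = |x|·|y|, |x^y| = |x|^|y|, |1| = 1), which gives a quick sanity check, though of course not a proof:

```python
# Cardinality sanity check of the axioms over small finite sizes.
for x in range(1, 5):
    for y in range(1, 5):
        for z in range(1, 5):
            assert x * (y * z) == (x * y) * z
            assert x * y == y * x
            assert x * 1 == x and 1 ** x == 1 and x ** 1 == x
            assert (x * y) ** z == x ** z * y ** z
            assert (x ** y) ** z == x ** (y * z)
print("all identities hold")
```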
Bicartesian closed categories
Bicartesian closed categories extend Cartesian closed categories with binary coproducts and an initial object, with products distributing over coproducts. Their equational theory is extended with the following axioms, yielding something similar to Tarski's high school axioms but with a zero:
x + y = y + x
(x + y) + z = x + (y + z)
x × (y + z) = x × y + x × z
x^(y + z) = x^y × x^z
0 + x = x
x × 0 = 0
x^0 = 1
Note however that the above list is not complete; type isomorphism in the free BCCC is not finitely axiomatizable, and its decidability is still an open problem.
Entoprocta
Entoprocta, or Kamptozoa, is a phylum of mostly sessile aquatic animals, ranging from long. Mature individuals are goblet-shaped, on relatively long stalks. They have a "crown" of solid tentacles whose cilia generate water currents that draw food particles towards the mouth, and both the mouth and anus lie inside the "crown". The superficially similar Bryozoa (Ectoprocta) have the anus outside a "crown" of hollow tentacles. Most families of entoprocts are colonial, and all but 2 of the 150 species are marine. A few solitary species can move slowly.
Some species eject unfertilized ova into the water, while others keep their ova in brood chambers until they hatch, and some of these species use placenta-like organs to nourish the developing eggs. After hatching, the larvae swim for a short time and then settle on a surface. There they metamorphose, and the larval gut rotates by up to 180°, so that the mouth and anus face upwards. Both colonial and solitary species also reproduce by cloning — solitary species grow clones in the space between the tentacles and then release them when developed, while colonial ones produce new members from the stalks or from corridor-like stolons.
Fossils of entoprocts are very rare, and the earliest specimens that have been identified with confidence date from the Late Jurassic. Most studies from 1996 onwards have regarded entoprocts as members of the Trochozoa, which also includes molluscs and annelids. However, a study in 2008 concluded that entoprocts are closely related to bryozoans. Other studies place them in a clade Tetraneuralia, together with molluscs.
Names
"Entoprocta", coined in 1870, means "anus inside". The alternative name "Kamptozoa", meaning "bent" or "curved" animals, was assigned in 1929. Some authors use "Entoprocta", while others prefer "Kamptozoa".
Description
Most species are colonial, and their members are known as "zooids", since they are not fully independent animals. Zooids are typically long but range from long.
Distinguishing features
Entoprocts are superficially like bryozoans (ectoprocts), as both groups have a "crown" of tentacles whose cilia generate water currents that draw food particles towards the mouth. However, they have different feeding mechanisms and internal anatomy, and bryozoans undergo a metamorphosis from larva to adult that destroys most of the larval tissues; their colonies also have a founder zooid which is different from its "daughters".
Zooids
The body of a mature entoproct zooid has a goblet-like structure with a calyx mounted on a relatively long stalk that attaches to a surface. The rim of the calyx bears a "crown" of 8 to 30 solid tentacles, which are extensions of the body wall. The base of the "crown" of tentacles is surrounded by a membrane that partially covers the tentacles when they retract. The mouth and anus lie on opposite sides of the atrium (space enclosed by the "crown" of tentacles), and both can be closed by sphincter muscles. The gut is U-shaped, curving down towards the base of the calyx, where it broadens to form the stomach. This is lined with a membrane consisting of a single layer of cells, each of which has multiple cilia.
The stalks of colonial species arise from shared attachment plates or from a network of stolons, tubes that run across a surface. In solitary species, the stalk ends in a muscular sucker, or a flexible foot, or is cemented to a surface. The stalk is muscular and produces a characteristic nodding motion. In some species it is segmented. Some solitary species can move, either by creeping on the muscular foot or by somersaulting.
The body wall consists of the epidermis and an external cuticle, which consists mainly of criss-cross collagen fibers. The epidermis contains only a single layer of cells, each of which bears multiple cilia ("hairs") and microvilli (tiny "pleats") that penetrate through the cuticle. The stolons and stalks of colonial species have thicker cuticles, stiffened with chitin.
There is no coelom (internal fluid-filled cavity lined with peritoneum) and the other internal organs are embedded in connective tissue that lies between the stomach and the base of the "crown" of tentacles. The nervous system runs through the connective tissue and just below the epidermis, and is controlled by a pair of ganglia. Nerves run from these to the calyx, tentacles and stalk, and to sense organs in all these areas.
Vegetative functions
A band of cells, each with multiple cilia, runs along the sides of the tentacles, connecting each tentacle to its neighbors, except that there is a gap in the band nearest the anus. A separate band of cilia grows along a groove that runs close to the inner side of the base of the "crown", with a narrow extension up the inner surface of each tentacle. The cilia on the sides of the tentacles create a current that flows into the "crown" at the bases of the tentacles and exits above the center of the "crown". These cilia pass food particles to the cilia on the inner surface of the tentacles, and the inner cilia produce a downward current that drives particles into and around the groove, and then to the mouth.
Entoprocts generally use one or both of: ciliary sieving, in which one band of cilia creates the feeding current and another traps food particles (the "sieve"); and downstream collecting, in which food particles are trapped as they are about to exit past them. In entoprocts, downstream collecting is carried out by the same bands of cilia that generate the current; trochozoan larvae also use downstream collecting, but use a separate set of cilia to trap food particles.
In addition, glands in the tentacles secrete sticky threads that capture large particles. A non-colonial species reported from around the Antarctic Peninsula in 1993 has cells that superficially resemble the cnidocytes of cnidaria, and fire sticky threads. These unusual cells lie around the mouth, and may provide an additional means of capturing prey.
The stomach and intestine are lined with microvilli, which are thought to absorb nutrients. The anus, which opens inside the "crown", ejects solid wastes into the outgoing current after the tentacles have filtered food out of the water; in some families it is raised on a cone above the level of the groove that conducts food to the mouth. Most species have a pair of protonephridia which extract soluble wastes from the internal fluids and eliminate them through pores near the mouth. However, the freshwater species Urnatella gracilis has multiple nephridia in the calyx and stalk.
The zooids absorb oxygen and emit carbon dioxide by diffusion, which works well for small animals.
Reproduction and life cycle
Most species are simultaneous hermaphrodites, but some switch from male to female as they mature, while individuals of some species remain of the same sex all their lives. Individuals have one or two pairs of gonads, placed between the atrium and stomach, and opening into a single gonopore in the atrium. The eggs are thought to be fertilized in the ovaries. Most species release eggs that hatch into planktonic larvae, but a few brood their eggs in the gonopore. Those that brood small eggs nourish them by a placenta-like organ, while larvae of species with larger eggs live on stored yolk. The development of the fertilized egg into a larva follows a typical spiralian pattern: the cells divide by spiral cleavage, and mesoderm develops from a specific cell labelled "4d" in the early embryo. There is no coelom at any stage.
In some species the larva is a trochophore which is planktonic and feeds on floating food particles by using the two bands of cilia round its "equator" to sweep food into the mouth, which uses more cilia to drive them into the stomach, which uses further cilia to expel undigested remains through the anus. In some species of the genera Loxosomella and Loxosoma, the larva produces one or two buds that separate and form new individuals, while the trochophore disintegrates. However, most produce a larva with sensory tufts at the top and front, a pair of pigment-cup ocelli ("little eyes"), a pair of protonephridia, and a large, cilia-bearing foot at the bottom. After settling, the foot and frontal tuft attach to the surface. Larvae of most species undergo a complex metamorphosis, and the internal organs may rotate by up to 180°, so that the mouth and anus both point upwards.
All species can produce clones by budding. Colonial species produce new zooids from the stolon or from the stalks, and can form large colonies in this way. In solitary species, clones form on the floor of the atrium, and are released when their organs are developed.
Taxonomy
The phylum consists of about 150 recognized species, grouped into 4 families:
Evolutionary history
Fossil record
Since entoprocts are small and soft-bodied, fossils have been extremely rare. In 1977, Simon Conway Morris provided the first description of Dinomischus, a sessile animal with calyx, stalk and holdfast, found in Canada's Burgess Shale, which was formed about . Conway Morris regarded this animal as the earliest known entoproct, since its mouth and anus lay inside a ring of structures above the calyx, but noted that these structures were flat and rather stiff, while the tentacles of modern entoprocts are flexible and have a round cross-section.
In 1992 J.A. Todd and P.D. Taylor concluded that Dinomischus was not an entoproct, because it did not have the typical rounded, flexible tentacles, and the fossils showed no other features that clearly resembled those of entoprocts. In their opinion, the earliest fossil entoprocts were specimens they found from Late Jurassic rocks in England. These resemble the modern colonial genus Barentsia in many ways, including: upright zooids linked by a network of stolons encrusting the surface to which the colony is attached; straight stalks joined to the stolons by bulky sockets with transverse bands of wrinkles; overall size and proportions similar to that of modern species of Barentsia.
Another species, Cotyledion tylodes, first described in 1999, was larger than extant entoprocts, reaching 8–56 mm in height, and unlike modern species, was "armored" with sclerites, scale-like structures. C. tylodes did have a similar sessile lifestyle to modern entoprocts. The identified fossils of C. tylodes were found in 520-million-year-old rocks from southern China. This places early entoprocts in the period of the Cambrian explosion.
Family tree
When entoprocts were discovered in the nineteenth century, they and bryozoans (ectoprocts) were regarded as classes within the phylum Bryozoa, because both groups were sessile animals that filter-fed by means of a "crown" of tentacles that bore cilia. However, from 1869 onwards, increasing awareness of differences, including the position of the entoproct anus inside the feeding structure and the difference in the early pattern of division of cells in their embryos, caused scientists to regard the two groups as separate phyla. "Bryozoa" then became just an alternative name for ectoprocts, in which the anus is outside the feeding organ. However, studies by one team in 2007 and 2008 argue for sinking Entoprocta into Bryozoa as a class, and resurrecting Ectoprocta as a name for the currently identified bryozoans.
The consensus of studies from 1996 onwards has been that entoprocts are part of the Trochozoa, a protostome "superphylum" whose members are united in having as their most basic larval form the trochophore type. The trochozoans also include molluscs, annelids, flatworms, nemertines and others. However, scientists disagree about which phylum is most closely related to entoprocts within the trochozoans. An analysis in 2008 re-introduced the pre-1869 meaning of the term "Bryozoa", for a group in which entoprocts and ectoprocts are each other's closest relatives.
Ecology
Distribution and habitats
All species are sessile. While the great majority are marine, two species live in freshwater: Loxosomatoides sirindhornae, reported in 2004 in central Thailand, and Urnatella gracilis, found in all the continents except Antarctica. Colonial species are found in all the oceans, living on rocks, shells, algae and underwater buildings. The solitary species, which are marine, live on other animals that feed by producing water currents, such as sponges, ectoprocts and sessile annelids. The majority of species live no deeper than 50 meters, but a few species are found in the deep ocean.
Interaction with other organisms
Some species of nudibranchs ("sea slugs"), particularly those of the genus Trapania, as well as turbellarian flatworms, prey on entoprocts.
Small colonies of the freshwater entoproct Urnatella gracilis have been found living on the aquatic larvae of the dobsonfly Corydalus cornutus. The entoprocts gain a means of dispersal, protection from predators and possibly a source of water that is rich in oxygen and nutrients, as colonies often live next to the gills of the larval flies. In the White Sea, the non-colonial entoproct Loxosomella nordgaardi prefers to live attached to bryozoan (ectoproct) colonies, mainly on the edges of colonies or in the "chimneys", gaps by which large bryozoan colonies expel water from which they have sieved food. Observation suggests that both the entoprocts and the bryozoans benefit from the association: each enhances the water flow that the other needs for feeding; and the longer cilia of the entoprocts may help them to capture different food from that caught by the bryozoans, so that the animals do not compete for the same food.
Entoprocts are small and have been little studied by zoologists. Hence it is difficult to determine whether a specimen belongs to a species that already occurs in the same area or is an invader, possibly as a result of human activities.
Forklift
A forklift (also called industrial truck, lift truck, jitney, hi-lo, fork truck, fork hoist, and forklift truck) is a powered industrial truck used to lift and move materials over short distances. The forklift was developed in the early 20th century by various companies, including Clark, which made transmissions, and Yale & Towne Manufacturing, which made hoists.
Since World War II, the development and use of the forklift truck has greatly expanded worldwide. Forklifts have become an indispensable piece of equipment in manufacturing and warehousing. In 2013, the top 20 manufacturers worldwide posted sales of $30.4 billion, with 944,405 machines sold.
History
Developments from the middle of the 19th century to the early 20th century led to today's modern forklifts. The forerunners of the modern forklift were manually powered hoists to lift loads. In 1906, the Pennsylvania Railroad introduced battery-powered platform trucks for moving luggage at their Altoona, Pennsylvania, station.
World War I saw the development of different types of material-handling equipment in the United Kingdom by Ransomes, Sims & Jefferies of Ipswich. This was in part due to the labor shortages caused by the war. In 1917, Clark in the United States began developing and using powered and lift tractors in its factories. In 1919, the Towmotor Company and, in 1920, Yale & Towne Manufacturing, entered the lift truck market in the United States. Continuing development and expanded use of the forklift continued through the 1920s and 1930s. The introduction of hydraulic power and the development of the first electrically-powered forklifts, along with the use of standardized pallets in the late 1930s, helped to increase the popularity of forklift trucks.
The start of World War II, like World War I before it, spurred the use of forklift trucks in the war effort. Following the war, more efficient methods for storing products in warehouses were implemented, and warehouses needed more maneuverable forklift trucks that could reach greater heights. For example, in 1954, a British company named Lansing Bagnall, now part of KION Group, developed what was claimed to be the first narrow-aisle electric-reach truck. That development changed the design of warehouses leading to narrower aisles and higher load-stacking, which increased storage capability.
During the 1950s and 1960s, operator safety became a concern due to increasing lifting heights and capacities. Safety features such as load backrests and operator cages called overhead guards, began to be added to forklifts. In the late 1980s, ergonomic design began to be incorporated in new forklift models to improve operator comfort, reduce injuries, and increase productivity. During the 1990s, undesirable exhaust emissions from forklift operations began to be tackled, which led to emission standards being implemented for forklift manufacturers in various countries. The introduction of AC power forklifts, along with fuel cell technology, were refinements in continuing forklift development.
General operations
Forklifts are rated for loads at a specified maximum weight and a specified forward center of gravity. This information is located on a nameplate provided by the manufacturer, and loads must not exceed these specifications. In many jurisdictions, it is illegal to alter or remove the nameplate without the permission of the forklift manufacturer.
An important characteristic of forklift operation is its rear-wheel steering. While this increases maneuverability in tight cornering situations, it differs from a driver's traditional experience with other wheeled vehicles: because there is no caster action, it is unnecessary to apply steering force to maintain a constant rate of turn.
Another critical characteristic of the forklift is its instability. The forklift and load must be considered a unit with a continually varying center of gravity with every movement of the load. A forklift must never negotiate a turn at speed with a raised load, where centrifugal and gravitational forces may combine to cause a tip-over accident. The forklift is designed with a load limit for the forks which is decreased with fork elevation and undercutting of the load (i.e., when a load does not butt against the fork "L"). A loading plate for loading reference is usually located on the forklift. A forklift should not be used as a personnel lift without the fitting of specific safety equipment, such as a "cherry picker" or "cage".
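The derating of fork capacity with load position can be illustrated with the common load-moment rule of thumb: allowable load falls roughly in proportion as the load center moves beyond the rated distance. The sketch below is a simplified illustration only, not a substitute for the manufacturer's nameplate or load charts:

```python
def approx_capacity(rated_kg, rated_center_mm, actual_center_mm):
    """Rough load-moment estimate of allowable load.

    Assumes capacity scales with rated_center / actual_center once the
    load center is beyond the rated distance; real trucks also derate
    with lift height, tilt, and attachments, so always use the nameplate.
    """
    if actual_center_mm <= rated_center_mm:
        return rated_kg
    return rated_kg * rated_center_mm / actual_center_mm

# 2000 kg truck rated at a 500 mm load center, load actually centered at 800 mm:
print(approx_capacity(2000, 500, 800))  # 1250.0
```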
Forklifts are a critical element of warehouses and distribution centers. It is considered imperative that these structures be designed to accommodate their efficient and safe movement. In the case of Drive-In/Drive-Thru Racking, a forklift needs to travel inside a storage bay that is multiple pallet positions deep to place or retrieve a pallet. Often, forklift drivers are guided into the bay by guide rails on the floor and the pallet is placed on cantilevered arms or rails. These maneuvers require well-trained operators. Since every pallet requires the truck to enter the storage structure, damage is more common than with other types of storage. In designing a drive-in system, dimensions of the fork truck, including overall width and mast width, must be carefully considered.
Forklift control and capabilities
Forklift hydraulics are controlled either with levers directly manipulating the hydraulic valves or by electrically controlled actuators, using smaller "finger" levers for control. The latter allows forklift designers more freedom in ergonomic design.
Forklift trucks are available in many variations and load capacities. In a typical warehouse setting, most forklifts have load capacities between one and five tons. Larger machines, up to 50 tons lift capacity, are used for lifting heavier loads, including loaded shipping containers.
In addition to a control to raise and lower the forks (also known as blades or tines), the operator can tilt the mast to compensate for a load's tendency to angle the blades toward the ground and risk slipping off the forks. Tilt also provides a limited ability to operate on non-level ground. Skilled forklift operators annually compete in obstacle and timed challenges at regional forklift rodeos.
Design types
Low lift truck
Powered pallet truck, usually electrically powered. Low lift trucks may be operated by a person seated on the machine, or by a person walking alongside, depending on the design.
Stacker
Usually electrically powered. A stacker may be operated by a person seated on the machine, or by a person walking alongside, depending on the design.
Reach truck
Variant on a Rider Stacker forklift, designed for narrow aisles. They are usually electrically powered and often have the highest storage-position lifting ability. A reach truck's forks can extend to reach the load, hence the name. There are two types:
Moving carriage. This consists of a fixed, integrated tower mast, with the forks mounted on a deployable carriage or pantograph, typically extended by hydraulic or electro-mechanical actuators in a scissor formation. They are common in North America and parts of Europe.
Moving mast. This consists of a slender tower mast mounted on tracks, allowing the entire assembly to extend or 'reach' using hydraulic or electro-mechanical actuators; this eliminates the need for a separate fork-extender pantograph. They are common in the rest of the world and are generally considered safer.
Counterbalanced forklift
Standard forklifts use a counterweight at the rear of the truck to offset, or counterbalance, the weight of a load carried at the front of the truck. Electric-powered forklifts utilise the weight of the battery as a counterweight and are typically smaller in size as a result.
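The counterbalancing principle can be sketched as a simple lever calculation about the front axle, which acts as the fulcrum: the moment of the truck body and counterweight behind the axle must exceed the moment of the load in front of it. The masses and distances below are hypothetical, chosen only to show the comparison:

```python
def is_stable(load_kg: float, load_dist_mm: float,
              truck_kg: float, truck_cg_dist_mm: float) -> bool:
    """Treat the front axle as the fulcrum of a lever:
    the truck-plus-counterweight moment behind the axle must be
    at least the load moment in front of it. This ignores dynamic
    effects (braking, cornering, raised loads) that reduce the
    real stability margin."""
    return truck_kg * truck_cg_dist_mm >= load_kg * load_dist_mm

# Hypothetical truck: 3,500 kg with its center of gravity 600 mm
# behind the front axle, lifting 2,000 kg centered 900 mm ahead.
print(is_stable(2000, 900, 3500, 600))  # -> True (2.1e6 >= 1.8e6 kg-mm)
```

This is also why an electric truck can be physically smaller for the same capacity: the battery's mass contributes to the rearward moment.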
Sideloader
A sideloader is a piece of materials-handling equipment designed for long loads. The operator's cab is positioned up front on the left-hand side. The area to the right of the cab is called the bed or platform. This contains a central section within it, called the well, where the forks are positioned. The mast and forks reach out to lift the load at its central point and lower it onto the bed. Driving forwards with a load carried lengthways allows long goods, typically timber, steel, concrete or plastics, to be moved through doorways and stored more easily than via conventional forklift trucks.
Order-picking truck
Similar to a reach truck, except the operator either rides in a cage welded to the fork carriage or walks alongside, dependent on design. If the operator is riding in the order picking truck, they wear a specially-designed safety harness to prevent falls. A special toothed grab holds the pallet to the forks. The operator transfers the load onto the pallet one article at a time by hand. This is an efficient way of picking less-than-pallet-load shipments and is popular for use in large distribution centers.
Guided very-narrow-aisle truck
A counterbalance-type sit-down rider electric forklift fitted with a specialized mast assembly. The mast is capable of rotating 90 degrees, and the forks can then advance like on a reach mechanism, to pick up full pallets. Because the forklift does not have to turn, the aisles can be exceptionally narrow, and if wire guidance is fitted in the floor of the building the machine can almost work on its own. Masts on this type of machine tend to be very high. The higher the racking that can be installed, the higher the density the storage can reach. This sort of storage system is popular in cities where land prices are very high, as by building the racking up to three times higher than normal and using these machines, it is possible to stock a much larger amount of material in a building with a relatively small surface area.
Guided very-narrow-aisle order picking truck
Counterbalance-type order-picking truck similar to the guided very-narrow-aisle truck, except that the operator and the controls which operate the machine are in a cage welded to the mast. The operator wears a restraint system to protect them against falls. Otherwise, the description is the same as guided very-narrow-aisle truck.
Truck-mounted forklift
Also referred to as a sod loader. It comes in a sit-down, center-control configuration and usually has an internal combustion engine. Engines are almost always diesel, but some operate on kerosene, sometimes using propane injection as a power boost. Some old units are two-stroke compression ignition; most are four-stroke compression ignition. North American engines come with advanced emission control systems, while forklifts built in countries such as Iran or Russia typically have none.
Specialized trucks
At the other end of the spectrum from the counterbalanced forklift trucks are more 'high-end' specialty trucks.
Articulated counterbalance trucks
Articulating counterbalance trucks are designed to be both able to offload trailers and place the load in narrow aisle racking. The central pivot of the truck allows loads to be stored in racking at a right angle to the truck, reducing space requirements (therefore increasing pallet storage density) and eliminating double handling from yard to warehouse.
Frederick L. Brown is credited with perfecting the principle of an articulated design in about 1982, receiving an award in 2002 from the UK's Fork Lift Truck Association for Services to the Forklift Industry and the Queen's Award for Innovation in 2003. He took inspiration from the hand pallet truck and found that by reversing the triangle of stability and changing the weight distribution he could solve the issues that had long eluded earlier attempts at articulating a forklift truck. Brown's patent application referenced specific drive methods, allowing competitors to enter the market by offering alternative methods while using the same articulating principle.
Guided very narrow aisle trucks
These are rail- or wire-guided and available with lift heights up to 40 feet non-top-tied and 98 feet top-tied. Two forms are available: 'man-down' and 'man-riser', where the operator elevates with the load for increased visibility or for multilevel 'break bulk' order picking. This type of truck, unlike articulated narrow-aisle trucks, requires a high standard of floor flatness.
Marina forklifts
These lifts are found in places like marinas and boat storage facilities. Featuring tall masts, heavy counterweights, and special paint to resist seawater-induced corrosion, they are used to lift boats in and out of storage racks. Once out, the forklift can place the boat into the water, as well as remove it when the boating activity is finished. Marina forklifts are unique among most other forklifts in that they feature a "negative lift" cylinder. This type of cylinder allows the forks to actually descend lower than ground level. Such functionality is necessary, given that the ground upon which the forklift operates is higher than the water level below. Additionally, marina forklifts feature some of the longest forks available, with some up to 24 feet long. The forks are also typically coated in rubber to prevent damage to the hull of the boats that rest on them.
Omnidirectional trucks
Omnidirectional technology (such as Mecanum wheels) can allow a forklift truck to move forward, diagonally and laterally, or in any direction on a surface. An omnidirectional wheel system is able to rotate the truck 360 degrees in its own footprint or strafe sideways without turning the truck cabin.
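As a sketch of how a Mecanum-wheel system resolves a commanded motion into individual wheel speeds, the widely used four-wheel inverse-kinematics equations can be written as follows. Sign and axis conventions vary between platforms, and the geometry values here are hypothetical:

```python
def mecanum_wheel_speeds(vx: float, vy: float, wz: float,
                         r: float = 0.25, lx: float = 0.3,
                         ly: float = 0.25) -> tuple:
    """Inverse kinematics for a four-wheel Mecanum platform
    (one common sign convention; conventions vary by vendor).

    vx: forward speed (m/s), vy: leftward speed (m/s),
    wz: rotation rate (rad/s), r: wheel radius (m),
    lx/ly: half wheelbase / half track width (m).
    Returns wheel angular speeds (rad/s): FL, FR, RL, RR."""
    k = lx + ly
    fl = (vx - vy - k * wz) / r  # front-left
    fr = (vx + vy + k * wz) / r  # front-right
    rl = (vx + vy - k * wz) / r  # rear-left
    rr = (vx - vy + k * wz) / r  # rear-right
    return fl, fr, rl, rr

# Pure sideways strafe: the two wheels on each side spin in
# opposite directions, so the truck translates without turning.
print(mecanum_wheel_speeds(0.0, 0.5, 0.0))  # -> (-2.0, 2.0, 2.0, -2.0)
```

With vx = vy = 0 and wz nonzero, all four wheels drive in a pattern that rotates the truck in place within its own footprint, which is what enables the 360-degree turn described above.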
UL 558 safety-rated trucks
In North America, some internal combustion-powered industrial vehicles carry Underwriters Laboratories ratings that are part of UL 558. Industrial trucks that are considered "safety" carry the designations GS (Gasoline Safety) for gasoline-powered, DS (Diesel Safety) for diesel-powered, LPS (Liquid Propane Safety) for liquified propane or GS/LPS for a dual fuel gasoline/liquified propane-powered truck.
UL 558 is a two-stage safety standard. The basic standards are referred to as G, D, LP, and G/LP, and are considered by Underwriters Laboratories to be the bare minimum required for a lift truck. This is a voluntary standard; in North America, at least, no government agency requires manufacturers to meet it.
The slightly more stringent safety standards GS, DS, LPS, and GS/LPS provide some additional, though still minimal, protection. In the past, Underwriters Laboratories offered specialty EX and DX safety certifications.
UL 583 safety-rated trucks
UL 583 is the electric-truck equivalent of UL 558. As with UL 558, it is a two-stage standard.
Explosion-proof trucks
These are for operation in potentially explosive atmospheres found in the chemical, petrochemical, pharmaceutical, food and drink, logistics, and other fields handling flammable material. In Europe they are commonly referred to as Miretti or sometimes Pyroban trucks; they must meet the requirements of the ATEX 94/9/EC Directive if used in Zone 1, 2, 21 or 22 areas and be maintained accordingly.
Automated forklift trucks
To reduce labor costs, lower operational costs, and improve productivity, automated forklifts have been developed. Automated forklifts, also called forked automated guided vehicles, are already commercially available.
Methods of propulsion
Internal combustion
Engines may be diesel, kerosene, gasoline, natural gas, butane, or propane-fueled, and may be two-stroke spark ignition, four-stroke spark ignition (common), two-stroke compression ignition, or four-stroke compression ignition (common). North American engines come with advanced emission control systems. Forklifts built in countries such as Iran or Russia will typically have no emission control systems.
Liquefied petroleum gas (LPG)
These forklifts use an internal combustion engine modified to run on LPG. The fuel is often stored in a gas cylinder mounted to the rear of the truck. This allows for quick changing of the cylinder once the LPG runs out. LPG trucks are quieter than their diesel counterparts, while offering similar levels of performance.
Battery-electric
Powered by lead-acid batteries or, increasingly, lithium-ion batteries; battery-electric types include: cushion-tire forklifts, scissor lifts, order pickers, stackers, reach trucks and pallet jacks. Electric forklifts are primarily used indoors on flat, even surfaces. Batteries prevent the emission of harmful fumes and are recommended for indoor facilities, such as food-processing and healthcare sectors. Forklifts have also been identified as a promising application for reuse of end-of-life automotive batteries.
Hydrogen fuel cell
Hydrogen fuel cell forklifts are powered by a chemical reaction between hydrogen and oxygen. The reaction is used to generate electricity which can then be stored in a battery and subsequently used to drive electric motors to power the forklift. This method of propulsion produces no local emissions, can be refueled in three minutes, and is often used in refrigerated warehouses as its performance is not degraded by lower temperatures. As of 2024, approximately 50,000 hydrogen forklifts are in operation worldwide (the bulk of which are in the U.S.), as compared with 1.2 million battery electric forklifts that were purchased in 2021.
Counterbalanced forklift components
A typical counterbalanced forklift contains the following components:
Truck frame: the base of the machine to which the mast, axles, wheels, counterweight, overhead guard and power source are attached. The frame may have fuel and hydraulic fluid tanks constructed as part of the frame assembly.
Counterweight: a mass attached to the rear of the forklift truck frame. The purpose of the counterweight is to counterbalance the load being lifted. In an electric forklift, the large battery may serve as part of the counterweight.
Cab: the area that contains a seat for the operator along with the control pedals, steering wheel, levers, switches and a dashboard containing operator readouts. The cab area may be open-air or enclosed, but it is covered by the cage-like overhead guard assembly. When enclosed, the cab may also be equipped with a cab heater for cold climate countries along with a fan or air conditioning for hot weather.
Overhead guard: a metal roof supported by posts at each corner of the cab that helps protect the operator from any falling objects. On some forklifts, the overhead guard is an integrated part of the frame assembly.
Power source: may consist of an internal combustion engine that can be powered by LP gas, CNG, gasoline or diesel fuel. Electric forklifts are powered by either a battery or fuel cell that provides power to the electric motors; some fuel cell forklifts may be powered by multiple fuel cells at once. For warehouses and other indoor applications, electric forklifts have the advantage of not producing carbon monoxide.
Tilt cylinders: hydraulic cylinders that are mounted to the truck frame and the mast. The tilt cylinders pivot the mast backwards or forwards to assist in engaging a load.
Mast: the vertical assembly that does the work of raising and lowering the load. It is made up of interlocking rails that also provide lateral stability. The interlocking rails may either have rollers or bushings as guides. The mast is driven hydraulically, and operated by one or more hydraulic cylinders directly or using chains from the cylinder or cylinders. It may be mounted to the front axle or the frame of the forklift. A 'container mast' variation allows the forks to raise a few meters without increasing the total height of the forklift. This is useful when double-loading pallets into a container or under a mezzanine floor.
Carriage: the component to which the forks or other attachments mount. It is mounted into and moves up and down the mast rails by means of chains or by being directly attached to the hydraulic cylinder. Like the mast, the carriage may have either rollers or bushings to guide it in the interlocking mast rails.
Load backrest: a rack-like extension that is either bolted or welded to the carriage in order to prevent the load from shifting backward when the carriage is lifted to full height.
Attachments: may consist of a mechanism that is attached to the carriage, either permanently or temporarily, to help in the proper engagement of the load. A variety of material-handling attachments are available. Some attachments include sideshifters, slipsheet attachments, carton clamps, multipurpose clamps, rotators, fork positioners, carpet poles, pole handlers, container handlers and roll clamps.
Tires: either solid for indoor use, or pneumatic for outside use.
Attachments
Below is a list of common forklift attachments:
Dimensioning devices: fork truck-mounted dimensioning systems provide dimensions for the cargo to facilitate truck-trailer space utilization and to support warehouse automation systems. The systems normally communicate the dimensions via 802.11 radios. NTEP-certified dimensioning devices are available to support commercial activities that bill based on volume.
Sideshifter: a hydraulic attachment that allows the operator to move the tines (forks) and backrest laterally. This allows easier placement of a load without having to reposition the truck.
Rotator: to aid the handling of skids that may have become excessively tilted, and for other specialty material-handling needs, some forklifts are fitted with an attachment that allows the tines to be rotated. This type of attachment may also be used for dumping containers for quick unloading.
Fork positioner: a hydraulic attachment that moves the tines (forks) together or apart. This removes the need for the operator to manually adjust the tines for different-sized loads.
Roll and barrel clamp attachment: a mechanical or hydraulic attachment used to squeeze the item to be moved. It is used for handling barrels, kegs, or paper rolls. This type of attachment may also have a rotate function. The rotate function would help an operator to insert a vertically-stored paper roll into the horizontal intake of a printing press, for example.
Pole attachments: in some locations, such as carpet warehouses, a long metal pole is used instead of forks to lift carpet rolls. Similar devices, though much larger, are used to pick up metal coils.
Carton and multipurpose clamp attachments: hydraulic attachments that allow the operator to open and close around a load, squeezing it to pick it up. Products like cartons, boxes and bales can be moved with this type of attachment. With these attachments in use, the forklift truck is sometimes referred to as a clamp truck.
Slip sheet attachment (push-pull): a hydraulic attachment that reaches forward, clamps onto a slip sheet and draws the slip sheet onto wide and thin metal forks for transport. The attachment will push the slip sheet and load off the forks for placement.
Drum handler attachment: a mechanical attachment that slides onto the tines (forks). It usually has a spring-loaded jaw that grips the top lip edge of a drum for transport. Another type grabs around the drum in a manner similar to the roll or barrel attachments.
Man basket: a lift platform that slides onto the tines (forks) and is meant for hoisting workers. The man basket has railings to keep the person from falling and brackets for attaching a safety harness. Also, a strap or chain is used to attach the man basket to the carriage of the forklift.
Telescopic forks: hydraulic attachments that allow the forklift to operate in warehouses designed for "double-deep stacking", which means that two pallet shelves are placed behind each other without any aisle between them.
Scales: fork truck-mounted scales enable operators to efficiently weigh the pallets they handle without interrupting their workflow by travelling to a platform scale. Scales are available that provide legal-for-trade weights for operations that involve billing by weight. They are easily retrofitted to the truck by hanging on the carriage in the same manner as forks hang on the truck.
Single-double forks: forks that in the closed position allow movement of a single pallet or platform but when separated, turn into a set of double forks that allow carrying two pallets side by side. The fork control may have to replace the side-shifter on some lift trucks.
Snow plough: a mechanical attachment that allows the forklift operator to easily and quickly move snow. The snow plough can often also be utilised at other times of the year as an attachment to clean up workplaces.
Skips: a mechanical attachment that is fitted to the forklift to allow safe and speedy removal of waste to the appropriate skip or waste compactor. There are two types of skips: the roll-forward type and the bottom-emptying type.
Any attachment on a forklift will reduce its nominal load rating, which is computed with a stock fork carriage and forks. The actual load rating may be significantly lower.
Replacing or adding attachments
It is possible to replace an existing attachment or add one to a lift that does not already have one. Considerations include forklift type, capacity, carriage type, and number of hydraulic functions (that power the attachment features). As mentioned in the preceding section, replacing or adding an attachment may reduce (down-rate) the safe lifting capacity of the forklift truck.
Medical ethics

Medical ethics is an applied branch of ethics which analyzes the practice of clinical medicine and related scientific research. Medical ethics is based on a set of values that professionals can refer to in the case of any confusion or conflict. These values include the respect for autonomy, non-maleficence, beneficence, and justice. Such tenets may allow doctors, care providers, and families to create a treatment plan and work towards the same common goal. These four values are not ranked in order of importance or relevance and they all encompass values pertaining to medical ethics. However, a conflict may arise leading to the need for hierarchy in an ethical system, such that some moral elements overrule others with the purpose of applying the best moral judgement to a difficult medical situation. Medical ethics is particularly relevant in decisions regarding involuntary treatment and involuntary commitment.
There are several codes of conduct. The Hippocratic Oath, which dates back to the fifth century BCE, discusses basic principles for medical professionals. The Declaration of Helsinki (1964) and the Nuremberg Code (1947) are two well-known and well-respected documents contributing to medical ethics. Other important milestones in the history of medical ethics include Roe v. Wade in 1973 and the development of hemodialysis in the 1960s. With hemodialysis available but only a limited number of dialysis machines to treat patients, an ethical question arose as to which patients to treat, which not to treat, and which factors should inform such a decision. More recently, new gene-editing techniques aimed at treating, preventing, and curing disease are raising important moral questions about their applications in medicine and treatments, as well as their societal impacts on future generations.
As this field continues to develop and change throughout history, the focus remains on fair, balanced, and moral thinking across all cultural and religious backgrounds around the world. The field of medical ethics encompasses both practical application in clinical settings and scholarly work in philosophy, history, and sociology.
Medical ethics encompasses beneficence, autonomy, and justice as they relate to conflicts such as euthanasia, patient confidentiality, informed consent, and conflicts of interest in healthcare. In addition, medical ethics and culture are interconnected as different cultures implement ethical values differently, sometimes placing more emphasis on family values and downplaying the importance of autonomy. This leads to an increasing need for culturally sensitive physicians and ethical committees in hospitals and other healthcare settings.
Medical ethics relationships
Medical ethics defines relationships in the following directions:
a medical worker — a patient;
a medical worker — a healthy person (relatives);
a medical worker — a medical worker.
Medical ethics includes provisions on medical confidentiality, medical errors, iatrogenesis, duties of the doctor and the patient.
Medical ethics is closely related to bioethics, but these are not identical concepts. Bioethics arose as an evolutionary continuation of medical ethics and covers a wider range of issues.
Medical ethics is also related to the law. But ethics and law are not identical concepts. More often than not, ethics implies a higher standard of behavior than the law dictates.
History
The term medical ethics first dates back to 1803, when English author and physician Thomas Percival published a document describing the requirements and expectations of medical professionals within medical facilities. The Code of Ethics was then adapted in 1847, relying heavily on Percival's words, and revisions were made to the original document in 1903, 1912, and 1947. The practice of medical ethics is widely accepted and practiced throughout the world.
Historically, Western medical ethics may be traced to guidelines on the duty of physicians in antiquity, such as the Hippocratic Oath, and early Christian teachings. The first code of medical ethics, Formula Comitis Archiatrorum, was published in the 5th century, during the reign of the Ostrogothic Christian king Theodoric the Great. In the medieval and early modern period, the field is indebted to Islamic scholarship such as Ishaq ibn Ali al-Ruhawi (who wrote the Conduct of a Physician, the first book dedicated to medical ethics), Avicenna's Canon of Medicine and Muhammad ibn Zakariya ar-Razi (known as Rhazes in the West), Jewish thinkers such as Maimonides, Roman Catholic scholastic thinkers such as Thomas Aquinas, and the case-oriented analysis (casuistry) of Catholic moral theology. These intellectual traditions continue in Catholic, Islamic and Jewish medical ethics.
By the 18th and 19th centuries, medical ethics emerged as a more self-conscious discourse. In England, Thomas Percival, a physician and author, crafted the first modern code of medical ethics. He drew up a pamphlet with the code in 1794 and wrote an expanded version in 1803, in which he coined the expressions "medical ethics" and "medical jurisprudence". However, some see Percival's guidelines on physician consultations as excessively protective of the home physician's reputation. Jeffrey Berlant is one such critic, regarding Percival's codes of physician consultations as an early example of the anti-competitive, "guild"-like nature of the physician community. In addition, from the mid-19th century into the 20th century, the once more familiar physician-patient relationship became less prominent and less intimate, sometimes leading to malpractice; the result was less public trust and a shift in decision-making power from the paternalistic physician model to today's emphasis on patient autonomy and self-determination.
In 1815, the Apothecaries Act was passed by the Parliament of the United Kingdom. It introduced compulsory apprenticeship and formal qualifications for the apothecaries of the day under the license of the Society of Apothecaries. This was the beginning of regulation of the medical profession in the UK.
In 1847, the American Medical Association adopted its first code of ethics, with this being based in large part upon Percival's work. While the secularized field borrowed largely from Catholic medical ethics, in the 20th century a distinctively liberal Protestant approach was articulated by thinkers such as Joseph Fletcher. In the 1960s and 1970s, building upon liberal theory and procedural justice, much of the discourse of medical ethics went through a dramatic shift and largely reconfigured itself into bioethics.
Well-known medical ethics cases include:
Albert Kligman's dermatology experiments
Deep sleep therapy
Doctors' Trial
Greenberg v. Miami Children's Hospital Research Institute
Henrietta Lacks
Chester M. Southam's Cancer Injection Study
Human radiation experiments
Jesse Gelsinger
Moore v. Regents of the University of California
Surgical removal of body parts to try to improve mental health
Medical Experimentation on Black Americans
Milgram experiment
Radioactive iodine experiments
The Monster Study
Plutonium injections
The David Reimer case
The Stanford Prison Experiment
Tuskegee syphilis experiment
Willowbrook State School
Yanomami blood sample collection
Darkness in El Dorado
Since the 1970s, the growing influence of ethics in contemporary medicine can be seen in the increasing use of institutional review boards to evaluate experiments on human subjects, the establishment of hospital ethics committees, the expansion of the role of clinician ethicists, and the integration of ethics into many medical school curricula.
COVID-19
In December 2019, the virus COVID-19 emerged as a threat to worldwide public health and, over the following years, ignited novel inquiry into modern-age medical ethics. For example, since the first discovery of COVID-19 in Wuhan, China, and subsequent global spread by mid-2020, calls for the adoption of open science principles dominated research communities. Some academics believed that open science principles — like constant communication between research groups, rapid translation of study results into public policy, and transparency of scientific processes to the public — represented the only solutions to halt the impact of the virus. Others, however, cautioned that these interventions may lead to side-stepping safety in favor of speed, wasteful use of research capital, and creation of public confusion. Drawbacks of these practices include resource-wasting and public confusion surrounding the use of hydroxychloroquine and azithromycin as treatment for COVID-19 — a combination which was later shown to have no impact on COVID-19 survivorship and carried notable cardiotoxic side-effects — as well as a type of vaccine hesitancy specifically due to the speed at which COVID-19 vaccines were created and made publicly available. However, open science also allowed for the rapid implementation of life-saving public interventions like wearing masks and social distancing, the rapid development of multiple vaccines and monoclonal antibodies that have significantly lowered transmission and death rates, and increased public awareness about the severity of the pandemic as well as explanation of daily protective actions against COVID-19 infection, like hand washing.
Other notable areas of medicine impacted by COVID-19 ethics include:
Resource rationing, especially in intensive care units that did not have enough ventilators or beds to serve the influx of severely ill patients.
Lack of PPE for providers, putting them at increased risk of infection during patient care.
Heavy burden on healthcare providers and essential workers during entirety of pandemic
Closure of schools and increase in virtual schooling, which presented issues for families with limited internet access.
Magnification of disparities in health, causing the pandemic to impact BIPOC and disabled communities more so than other demographics worldwide.
Increase in hate crimes towards Asian-Americans, specifically Chinese-Americans related to COVID-19 related xenophobia.
Closure of businesses, offices, and restaurants resulted in increased unemployment and economic recession.
Vaccine hesitancy.
Refusal to mask or social distance, increasing transmission rates.
Cessation of non-essential medical procedures, delay of routine care, and conversion to telehealth as clinics and hospitals remained overwhelmed with COVID-19 patients.
The ethics of COVID-19 spans many more areas of medicine and society than represented in this paragraph — some of these principles will likely not be discovered until the end of the pandemic which, as of September 12, 2022, is still ongoing.
Values
A common framework used when analysing medical ethics is the "four principles" approach postulated by Tom Beauchamp and James Childress in their textbook Principles of Biomedical Ethics. It recognizes four basic moral principles, which are to be judged and weighed against each other, with attention given to the scope of their application. The four principles are:
Respect for autonomy – the patient has the right to refuse or choose their treatment.
Beneficence – a practitioner should act in the best interest of the patient.
Non-maleficence – to not be the cause of harm. Also, "Utility" – to promote more good than harm.
Justice – concerns the distribution of scarce health resources, and the decision of who gets what treatment.
Autonomy
The principle of autonomy, broken down into "autos" (self) and "nomos" (rule), views the rights of an individual to self-determination. This is rooted in society's respect for individuals' ability to make informed decisions about personal matters with freedom. Autonomy has become more important as social values have shifted to define medical quality in terms of outcomes that are important to the patient and their family rather than medical professionals. The increasing importance of autonomy can be seen as a social reaction against the "paternalistic" tradition within healthcare. Some have questioned whether the backlash against historically excessive paternalism in favor of patient autonomy has inhibited the proper use of soft paternalism to the detriment of outcomes for some patients.
The definition of autonomy is the ability of an individual to make a rational, uninfluenced decision. Therefore, it can be said that autonomy is a general indicator of a healthy mind and body. The progression of many terminal diseases is characterized by loss of autonomy, in various manners and to various extents. For example, dementia, a chronic and progressive disease that attacks the brain and can induce memory loss and a decline in rational thinking, almost always results in the loss of autonomy.
Psychiatrists and clinical psychologists are often asked to evaluate a patient's capacity for making life-and-death decisions at the end of life. Persons with a psychiatric condition such as delirium or clinical depression may lack capacity to make end-of-life decisions. For these persons, a request to refuse treatment may be taken in the context of their condition. Unless there is a clear advance directive to the contrary, persons lacking mental capacity are treated according to their best interests. This will involve an assessment, made with input from people who know the person best, of what decisions the person would have made had they not lost capacity. Persons with the mental capacity to make end-of-life decisions may refuse treatment with the understanding that it may shorten their life. Psychiatrists and psychologists may be involved to support decision making.
Beneficence
The term beneficence refers to actions that promote the well-being of others. In the medical context, this means taking actions that serve the best interests of patients and their families. However, uncertainty surrounds the precise definition of which practices do in fact help patients.
James Childress and Tom Beauchamp in Principles of Biomedical Ethics (1978) identify beneficence as one of the core values of healthcare ethics. Some scholars, such as Edmund Pellegrino, argue that beneficence is the only fundamental principle of medical ethics. They argue that healing should be the sole purpose of medicine, and that endeavors like cosmetic surgery and euthanasia are severely unethical and against the Hippocratic Oath.
Non-maleficence
The concept of non-maleficence is embodied by the phrase "first, do no harm," or the Latin primum non nocere. Many consider that this should be the main or primary consideration (hence primum): that it is more important not to harm your patient than to do them good, which is part of the Hippocratic oath that doctors take. This is partly because enthusiastic practitioners are prone to using treatments that they believe will do good, without first having evaluated them adequately to ensure they do no harm to the patient. Much harm has been done to patients as a result, as in the saying, "The treatment was a success, but the patient died." It is not only more important to do no harm than to do good; it is also important to know how likely it is that your treatment will harm a patient. So a physician should go further than not prescribing medications they know to be harmful: he or she should not prescribe medications (or otherwise treat the patient) unless s/he knows that the treatment is unlikely to be harmful; or at the very least, that the patient understands the risks and benefits, and that the likely benefits outweigh the likely risks.
In practice, however, many treatments carry some risk of harm. In some circumstances, e.g. in desperate situations where the outcome without treatment will be grave, risky treatments that stand a high chance of harming the patient will be justified, as the risk of not treating is also very likely to do harm. So the principle of non-maleficence is not absolute, and balances against the principle of beneficence (doing good), as the effects of the two principles together often give rise to a double effect (further described in next section). Even basic actions like taking a blood sample or an injection of a drug cause harm to the patient's body. Euthanasia also goes against the principle of beneficence because the patient dies as a result of the medical treatment by the doctor.
Double effect
Double effect refers to two types of consequences that may be produced by a single action, and in medical ethics it is usually regarded as the combined effect of beneficence and non-maleficence.
A commonly cited example of this phenomenon is the use of morphine or other analgesics in the dying patient. Such use of morphine can have the beneficial effect of easing the pain and suffering of the patient while simultaneously having the maleficent effect of shortening the life of the patient through depression of the respiratory system.
Respect for human rights
The human rights era started with the formation of the United Nations in 1945, which was charged with the promotion of human rights. The Universal Declaration of Human Rights (1948) was the first major document to define human rights. Medical doctors have an ethical duty to protect the human rights and human dignity of the patient so the advent of a document that defines human rights has had its effect on medical ethics. Most codes of medical ethics now require respect for the human rights of the patient.
The Council of Europe promotes the rule of law and observance of human rights in Europe. The Council of Europe adopted the European Convention on Human Rights and Biomedicine (1997) to create a uniform code of medical ethics for its 47 member-states. The Convention applies international human rights law to medical ethics. It provides special protection of physical integrity for those who are unable to consent, which includes children.
As of December 2013, the convention had been ratified or acceded to by twenty-nine member-states of the Council of Europe.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) also promotes the protection of human rights and human dignity. According to UNESCO, "Declarations are another means of defining norms, which are not subject to ratification. Like recommendations, they set forth universal principles to which the community of States wished to attribute the greatest possible authority and to afford the broadest possible support." UNESCO adopted the Universal Declaration on Bioethics and Human Rights (2005) to advance the application of international human rights law in medical ethics. The Declaration provides special protection of human rights for incompetent persons.
Solidarity
Individualistic standards of autonomy and personal human rights, as they relate to social justice in the Anglo-Saxon community, clash with, and can also supplement, the concept of solidarity, which stands closer to a European healthcare perspective focused on community, universal welfare, and the unselfish wish to provide healthcare equally for all. In the United States, individualistic and self-interested healthcare norms are upheld, whereas in other countries, including European countries, a sense of respect for the community and personal support is more greatly upheld in relation to free healthcare.
Acceptance of ambiguity in medicine
The concept of normality, that there is a human physiological standard contrasting with conditions of illness, abnormality and pain, leads to assumptions and biases that negatively affect health care practice. It is important to realize that normality is ambiguous, and that accepting ambiguity in healthcare is necessary in order to practice humbler medicine and understand complex, sometimes unusual medical cases. Thus, society's views on central concepts in philosophy and clinical beneficence must be questioned and revisited, adopting ambiguity as a central player in medical practice.
Conflicts
Between beneficence and non-maleficence
Beneficence can come into conflict with non-maleficence when healthcare professionals are deciding between a “first, do no harm” approach vs. a “first, do good” approach, such as when deciding whether or not to operate when the balance between the risk and benefit of the operation is not known and must be estimated. Healthcare professionals who place beneficence below other principles like non-maleficence may decide not to help a patient more than a limited amount if they feel they have met the standard of care and are not morally obligated to provide additional services. Young and Wagner argued that, in general, beneficence takes priority over non-maleficence (“first, do good,” not “first, do no harm”), both historically and philosophically.
Between autonomy and beneficence/non-maleficence
Autonomy can come into conflict with beneficence when patients disagree with recommendations that healthcare professionals believe are in the patient's best interest. When the patient's interests conflict with the patient's welfare, different societies settle the conflict in a wide range of manners. In general, Western medicine defers to the wishes of a mentally competent patient to make their own decisions, even in cases where the medical team believes that they are not acting in their own best interests. However, many other societies prioritize beneficence over autonomy. People deemed to not be mentally competent or having a mental disorder may be treated involuntarily.
Examples include when a patient does not want treatment because of, for example, religious or cultural views. In the case of euthanasia, the patient, or relatives of a patient, may want to end the life of the patient. Also, the patient may want an unnecessary treatment, as can be the case in hypochondria or with cosmetic surgery; here, the practitioner may be required to balance the patient's desire for a medically unnecessary treatment and its potential risks against the patient's informed autonomy in the issue. A doctor may prefer autonomy because refusal to respect the patient's self-determination would harm the doctor-patient relationship.
Organ donations can sometimes pose interesting scenarios, in which a patient is classified as a non-heart-beating donor (NHBD), where life support fails to restore the heartbeat and is now considered futile, but brain death has not occurred. Classifying a patient as an NHBD can qualify someone to be subject to non-therapeutic intensive care, in which treatment is only given to preserve the organs that will be donated and not to preserve the life of the donor. This can bring up ethical issues, as some may see respect for the donor's wishes to donate their healthy organs as respect for autonomy, while others may view the sustaining of futile treatment during a vegetative state as maleficence toward the patient and the patient's family. Some are worried that making this process a worldwide customary measure may dehumanize and take away from the natural process of dying and what it brings along with it.
Individuals' capacity for informed decision-making may come into question during resolution of conflicts between autonomy and beneficence. The role of surrogate medical decision-makers is an extension of the principle of autonomy.
On the other hand, autonomy and beneficence/non-maleficence may also overlap. For example, a breach of patients' autonomy may cause decreased confidence for medical services in the population and subsequently less willingness to seek help, which in turn may cause inability to perform beneficence.
The principles of autonomy and beneficence/non-maleficence may also be expanded to include effects on the relatives of patients or even the medical practitioners, the overall population and economic issues when making medical decisions.
Euthanasia
There is disagreement among American physicians as to whether the non-maleficence principle excludes the practice of euthanasia. Physician-assisted death is currently legal in Washington, D.C., and in the states of California, Colorado, Oregon, Vermont, and Washington. Around the world, there are different organizations that campaign to change legislation about the issue of physician-assisted death, or PAD. Examples of such organizations are the Hemlock Society of the United States and the Dignity in Dying campaign in the United Kingdom. These groups believe that doctors should be given the right to end a patient's life only if the patient is conscious enough to decide for themselves, is knowledgeable about the possibility of alternative care, and has willingly asked to end their life or requested access to the means to do so.
This argument is disputed in other parts of the world. For example, in the state of Louisiana, giving advice or supplying the means to end a person's life is considered a criminal act and can be charged as a felony. In state courts, this crime is comparable to manslaughter. The same laws apply in the states of Mississippi and Nebraska.
Informed consent
Informed consent refers to a patient's right to receive information relevant to a recommended treatment, in order to be able to make a well-considered, voluntary decision about their care. To give informed consent, a patient must be competent to make a decision regarding their treatment and be presented with relevant information regarding a treatment recommendation, including its nature and purpose, and the burdens, risks and potential benefits of all options and alternatives. After receiving and understanding this information, the patient can then make a fully informed decision to either consent or refuse treatment. In certain circumstances, there can be an exception to the need for informed consent, including, but not limited to, in cases of a medical emergency or patient incompetency. The ethical concept of informed consent also applies in a clinical research setting; all human participants in research must voluntarily decide to participate in the study after being fully informed of all relevant aspects of the research trial necessary to decide whether to participate or not. Informed consent is both an ethical and legal duty; if proper consent is not received prior to a procedure, treatment, or participation in research, providers can be held liable for battery and/or other torts. In the United States, informed consent is governed by both federal and state law, and the specific requirements for obtaining informed consent vary state to state.
Confidentiality
Confidentiality is commonly applied to conversations between doctors and patients. This concept is commonly known as patient-physician privilege. Legal protections prevent physicians from revealing their discussions with patients, even under oath in court.
Confidentiality is mandated in the United States by the Health Insurance Portability and Accountability Act of 1996 known as HIPAA, specifically the Privacy Rule, and various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules have been carved out over the years. For example, many states require physicians to report gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles. Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion. Those working in mental health have a duty to warn those who they deem to be at risk from their patients in some countries.
Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable tenet of medical practice. More recently, critics like Jacob Appel have argued for a more nuanced approach to the duty that acknowledges the need for flexibility in many cases.
Confidentiality is an important issue in primary care ethics, where physicians care for many patients from the same family and community, and where third parties often request information from the considerable medical database typically gathered in primary health care.
Privacy and the Internet
With increasing frequency, medical researchers are researching activities in online environments such as discussion boards and bulletin boards, and there is concern that the requirements of informed consent and privacy are not applied, although some guidelines do exist.
One issue that has arisen, however, is the disclosure of information. While researchers wish to quote from the original source in order to argue a point, this can have repercussions when the identity of the patient is not kept confidential. The quotations and other information about the site can be used to identify the patient, and researchers have reported cases where members of the site, bloggers and others have used this information as 'clues' in a game in an attempt to identify the site. Some researchers have employed various methods of "heavy disguise," including discussing a different condition from that under study.
Healthcare institutions' websites have a responsibility to ensure that the private medical records of their online visitors are secure from being marketed and monetized into the hands of drug companies, employers, and insurance companies. The delivery of diagnoses online leads patients to believe that doctors in some parts of the country are at the direct service of drug companies, with diagnoses offered as conveniently as whichever drug still holds patent rights. Physicians and drug companies have been found to compete for top-ten search engine ranks to lower the costs of selling these drugs, with little to no patient involvement.
With the expansion of internet healthcare platforms, online practitioner legitimacy and privacy accountability face unique challenges such as e-paparazzi, online information brokers, industrial spies, and unlicensed information providers that work outside of traditional medical codes for profit. The American Medical Association (AMA) states that medical websites have the responsibility to ensure the health care privacy of online visitors and protect patient records from being marketed and monetized into the hands of insurance companies, employers, and marketers. With the rapid unification of healthcare, business practices, computer science and e-commerce to create these online diagnostic websites, efforts to maintain the health care system's ethical confidentiality standard need to keep up as well. Over the next few years, the Department of Health and Human Services has stated that it will be working towards lawfully protecting the online privacy and digital transfers of patient Electronic Medical Records (EMR) under the Health Insurance Portability and Accountability Act (HIPAA). Looking forward, strong governance and accountability mechanisms will need to be considered with respect to digital health ecosystems, including potential metaverse healthcare platforms, to ensure the highest ethical standards are upheld relating to medical confidentiality and patient data.
Control, resolution and enforcement
In the UK, medical ethics forms part of the training of physicians and surgeons, and disregard for ethical principles can result in doctors being barred from medical practice after a decision by the Medical Practitioners Tribunal Service.
To ensure that appropriate ethical values are being applied within hospitals, effective hospital accreditation requires that ethical considerations are taken into account, for example with respect to physician integrity, conflict of interest, research ethics and organ transplantation ethics.
Guidelines
There is much documentation of the history and necessity of the Declaration of Helsinki. The first code of conduct for research including medical ethics was the Nuremberg Code. This document had large ties to Nazi war crimes, as it was introduced in 1947, but it did not make much of a difference in terms of regulating practice, and this shortcoming called for the creation of the Declaration. There are some stark differences between the Nuremberg Code and the Declaration of Helsinki, including the way they are written. The Nuremberg Code was written in a very concise manner, with simple explanations. The Declaration of Helsinki is written with thorough explanation in mind, including many specific commentaries.
In the United Kingdom, General Medical Council provides clear overall modern guidance in the form of its 'Good Medical Practice' statement. Other organizations, such as the Medical Protection Society and a number of university departments, are often consulted by British doctors regarding issues relating to ethics.
Ethics committees
Often, simple communication is not enough to resolve a conflict, and a hospital ethics committee must convene to decide a complex matter.
These bodies are composed primarily of healthcare professionals, but may also include philosophers, lay people, and clergy – indeed, in many parts of the world their presence is considered mandatory in order to provide balance.
With respect to the expected composition of such bodies in the US, Europe and Australia, the following applies.
The assignment of philosophers or religious clerics will reflect the importance attached by the society to the basic values involved. An example from Sweden, where the philosopher Torbjörn Tännsjö has served on a couple of such committees, indicates secular trends gaining influence.
Cultural concerns
Cultural differences can create difficult medical ethics problems. Some cultures have spiritual or magical theories about the origins and cause of disease, for example, and reconciling these beliefs with the tenets of Western medicine can be very difficult. As different cultures continue to intermingle and more cultures live alongside each other, the healthcare system, which tends to deal with important life events such as birth, death and suffering, increasingly experiences difficult dilemmas that can sometimes lead to cultural clashes and conflict. Efforts to respond in a culturally sensitive manner go hand in hand with a need to distinguish limits to cultural tolerance.
Culture and language
As more people from different cultural and religious backgrounds move to other countries, among these the United States, it is becoming increasingly important to be culturally sensitive to all communities in order to provide the best health care for all people. Lack of cultural knowledge can lead to misunderstandings and even inadequate care, which can lead to ethical problems. A common complaint patients have is feeling that they are not being heard, or perhaps, understood. Escalating conflict can be prevented by seeking interpreters, attending to the body language and tone of both yourself and the patient, and attempting to understand the patient's perspective in order to reach an acceptable option.
Some believe most medical practitioners in the future will have to be or greatly benefit from being bilingual. In addition to knowing the language, truly understanding culture is best for optimal care. Recently, a practice called 'narrative medicine' has gained some interest as it has a potential for improving patient-physician communication and understanding of patient's perspective. Interpreting a patient's stories or day-to-day activities as opposed to standardizing and collecting patient data may help in acquiring a better sense of what each patient needs, individually, with respect to their illness. Without this background information, many physicians are unable to properly understand the cultural differences that may set two different patients apart, and thus, may diagnose or recommend treatments that are culturally insensitive or inappropriate. In short, patient narrative has the potential for uncovering patient information and preferences that may otherwise be overlooked.
Medical humanitarianism
In order to address the underserved, uneducated communities suffering the nutrition, housing, and healthcare disparities seen in much of the world today, some argue that we must fall back on ethical values in order to create a foundation for a reasonable understanding that encourages commitment and motivation to improve the factors causing premature death, as a goal of the global community. Such factors, such as poverty, environment and education, are said to be out of national or individual control, and so this commitment is by default a social and communal responsibility placed on global communities that are able to aid others in need. This is based on the framework of 'provincial globalism,' which seeks a world in which all people have the capability to be healthy.
One concern regarding the intersection of medical ethics and humanitarian medical aid is how medical assistance can be as harmful as it is helpful to the community being served. One such example is how political forces may control how foreign humanitarian aid can be utilized in the region it is meant to be provided in. This can occur in situations where political strife could lead to such aid being used in favor of one group over another. Another example of how foreign humanitarian aid can be misused in its intended community is the possibility of dissonance forming between a foreign humanitarian aid group and the community being served. Examples of this could include the relationships perceived between aid workers, their style of dress, or a lack of education regarding local culture and customs.
Humanitarian practices in areas lacking optimum care can also pose other interesting and difficult ethical dilemmas in terms of beneficence and non-maleficence. Humanitarian practices are based upon providing better medical equipment and care for communities whose country does not provide adequate healthcare. The issues with providing healthcare to communities in need may sometimes be religious or cultural backgrounds keeping people from performing certain procedures or taking certain drugs. On the other hand, wanting certain procedures done in a specific manner due to religious or cultural belief systems may also occur. The ethical dilemma stems from differences in culture between communities helping those with medical disparities and the societies receiving aid. Women's rights, informed consent and education about health become controversial, as some treatments needed are against societal law, while some cultural traditions involve procedures against humanitarian efforts. Examples of this are female genital mutilation (FGM), aiding in reinfibulation, providing sterile equipment in order to perform procedures such as FGM, as well as informing patients of their HIV-positive testing. The latter is controversial because certain communities have in the past ostracized or killed HIV-positive individuals.
Healthcare reform and lifestyle
Leading causes of death in the United States and around the world are more closely related to behavior than to genetic or environmental factors. This leads some to believe true healthcare reform begins with cultural reform, habit and overall lifestyle. Lifestyle, then, becomes the cause of many illnesses, and the illnesses themselves are the result or side-effect of a larger problem. Some people believe this to be true and think that cultural change is needed in order for developing societies to cope with and avoid the negative effects of the drugs, food and conventional modes of transportation available to them. In 1990, tobacco use, diet, and exercise alone accounted for close to 80 percent of all premature deaths, and they have continued to lead in this way through the 21st century. Heart disease, stroke, dementia, and diabetes are some of the diseases that may be affected by habit-forming patterns throughout our lives. Some believe that medical lifestyle counseling and building healthy habits around our daily lives is one way to tackle health care reform.
Other cultures and healthcare
Buddhist medicine
Buddhist ethics and medicine are based on religious teachings of compassion and understanding of suffering and cause and effect, and the idea that there is no beginning or end to life, but that instead there are only rebirths in an endless cycle. In this way, death is merely a phase in an indefinitely lengthy process of life, not an end. However, Buddhist teachings support living one's life to the fullest so that, through all the suffering which encompasses a large part of what is life, there are no regrets. Buddhism accepts suffering as an inescapable experience, but values happiness and thus values life. Because of this, suicide and euthanasia are prohibited. However, attempts to rid oneself of any physical or mental pain and suffering are seen as good acts. On the other hand, sedatives and drugs are thought to impair consciousness and awareness in the dying process, which is believed to be of great importance, as it is thought that one's dying consciousness remains and affects new life. Because of this, analgesics must not be part of the dying process, in order for the dying person to be present entirely and pass on their consciousness wholesomely. This can pose significant conflicts during end-of-life care in Western medical practice.
Chinese medicine
In traditional Chinese philosophy, human life is believed to be connected to nature, which is thought of as the foundation and encompassing force sustaining all of life's phases. The passing and coming of the seasons, life, birth and death are perceived as cyclic and perpetual occurrences that are believed to be regulated by the principles of yin and yang. When one dies, the life-giving material force referred to as ch'i, encompassing both body and spirit, rejoins the material force of the universe and cycles on with respect to the rhythms set forth by yin and yang.
Because many Chinese people believe that circulation of both physical and 'psychic energy' is important to stay healthy, procedures which require surgery, as well as donations and transplantations of organs, are seen as a loss of ch'i, resulting in the loss of someone's vital energy supporting their consciousness and purpose in their lives. Furthermore, a person is never seen as a single unit but rather as a source of relationship, interconnected in a social web. Thus, it is believed that what makes a human one of us is relatedness and communication and family is seen as the basic unit of a community. This can greatly affect the way medical decisions are made among family members, as diagnoses are not always expected to be announced to the dying or sick, the elderly are expected to be cared for and represented by their children and physicians are expected to act in a paternalistic way. In short, informed consent as well as patient privacy can be difficult to enforce when dealing with Confucian families.
Furthermore, some Chinese people may be inclined to continue futile treatment in order to extend life and allow for fulfillment of the practice of benevolence and humanity. In contrast, patients with strong Daoist beliefs may see death as an obstacle and dying as a reunion with nature that should be accepted, and are therefore less likely to ask for treatment of an irreversible condition.
Islamic culture and medicine
Some believe Islamic medical ethics and framework remain poorly understood by many working in healthcare. It is important to recognize that for people of Islamic faith, Islam envelops and affects all aspects of life, not just medicine. Because many believe it is faith and a supreme deity that hold the cure to illness, it is common that the physician is viewed merely as help or intermediary player during the process of healing or medical care.
Like Chinese culture, with its emphasis on family as the basic unit of a community intertwined and forming a greater social construct, Islamic traditional medicine also places importance on the values of family and the well-being of a community. Many Islamic communities uphold paternalism as an acceptable part of medical care. However, autonomy and self-rule are also valued and protected and, in Islamic medicine, they are particularly upheld in terms of providing and expecting privacy in the healthcare setting. An example of this is requesting same-gender providers in order to retain modesty. Overall, Beauchamp's principles of beneficence, non-maleficence and justice are promoted and upheld in the medical sphere with as much importance as in Western culture. In contrast, autonomy is important but more nuanced. Furthermore, Islam also brings forth the principles of jurisprudence, Islamic law and legal maxims, which also allow for Islam to adapt to an ever-changing medical ethics framework.
Conflicts of interest
Physicians should not allow a conflict of interest to influence medical judgment. In some cases, conflicts are hard to avoid, and doctors have a responsibility to avoid entering such situations. Research has shown that conflicts of interests are very common among both academic physicians and physicians in practice.
Referral
Doctors who receive income from referring patients for medical tests have been shown to refer more patients for medical tests. This practice is proscribed by the American College of Physicians Ethics Manual. Fee splitting and the payments of commissions to attract referrals of patients is considered unethical and unacceptable in most parts of the world.
Vendor relationships
Studies show that doctors can be influenced by drug company inducements, including gifts and food. Industry-sponsored Continuing Medical Education (CME) programs influence prescribing patterns. Many patients surveyed in one study agreed that physician gifts from drug companies influence prescribing practices. A growing movement among physicians is attempting to diminish the influence of pharmaceutical industry marketing upon medical practice, as evidenced by Stanford University's ban on drug company-sponsored lunches and gifts. Other academic institutions that have banned pharmaceutical industry-sponsored gifts and food include the Johns Hopkins Medical Institutions, University of Michigan, University of Pennsylvania, and Yale University.
Treatment of family members
The American Medical Association (AMA) states that "Physicians generally should not treat themselves or members of their immediate family". This code seeks to protect patients and physicians, because professional objectivity can be compromised when the physician is treating a loved one. Studies from multiple health organizations have shown that physician-family member relationships may cause an increase in diagnostic testing and costs. Many doctors still treat their family members, and those who do must be vigilant not to create conflicts of interest or treat inappropriately. Physicians who treat family members need to be conscious of conflicting expectations and dilemmas, as established medical ethical principles may not be morally imperative when family members are confronted with serious illness.
Sexual relationships
Sexual relationships between doctors and patients can create ethical conflicts, since sexual consent may conflict with the fiduciary responsibility of the physician. Studies across the many disciplines of modern medicine have sought to ascertain the occurrence of doctor-patient sexual misconduct, and their results indicate that some disciplines are more likely to have offenders than others: psychiatry and obstetrics-gynecology, for example, are two disciplines noted for higher rates of sexual misconduct. The violation of ethical conduct between doctors and patients is also associated with the age and sex of doctor and patient. Male physicians aged 40–59 years have been found to be more likely to be reported for sexual misconduct, while women aged 20–39 make up a significant portion of reported victims. Doctors who enter into sexual relationships with patients face the threats of losing their medical license and prosecution. In the early 1990s, it was estimated that 2–9% of doctors had violated this rule. Sexual relationships between physicians and patients' relatives may also be prohibited in some jurisdictions, although this prohibition is highly controversial.
Futility
In some hospitals, medical futility is defined as treatment that is unable to benefit the patient. An important part of practicing good medical ethics is attempting to avoid futility by practicing non-maleficence. What should be done if there is no chance that a patient will survive or benefit from a potential treatment, but the family members insist on advanced care? Previously, some articles defined futility as the patient having less than a one percent chance of surviving. Some of these cases are examined in court.
Advance directives include living wills and durable powers of attorney for health care.
Cambrian explosion
The Cambrian explosion (also known as the Cambrian radiation or Cambrian diversification) is an interval of time beginning approximately 538.8 million years ago, in the Cambrian period of the early Paleozoic, when a sudden radiation of complex life occurred and practically all major animal phyla started appearing in the fossil record. It lasted for about 13 to 25 million years and resulted in the divergence of most modern metazoan phyla. The event was accompanied by major diversification in other groups of organisms as well.
Before early Cambrian diversification, most organisms were relatively simple, composed of individual cells or small multicellular organisms, occasionally organized into colonies. As the rate of diversification subsequently accelerated, the variety of life became much more complex and began to resemble that of today. Almost all present-day animal phyla appeared during this period, including the earliest chordates.
History and significance
The seemingly rapid appearance of fossils in the "Primordial Strata" was noted by William Buckland in the 1840s, and in his 1859 book On the Origin of Species, Charles Darwin discussed the then-inexplicable lack of earlier fossils as one of the main difficulties for his theory of descent with slow modification through natural selection. The long-running puzzlement about the seemingly-sudden appearance of the Cambrian fauna without evident precursor(s) centers on three key points: whether there really was a mass diversification of complex organisms over a relatively short period during the early Cambrian, what might have caused such rapid change, and what it would imply about the origin of animal life. Interpretation is difficult, owing to a limited supply of evidence based mainly on an incomplete fossil record and chemical signatures remaining in Cambrian rocks.
The first discovered Cambrian fossils were trilobites, described by Edward Lhuyd, the curator of Oxford Museum, in 1698. Although their evolutionary importance was not known, on the basis of their old age, William Buckland (1784–1856) realized that a dramatic step-change in the fossil record had occurred around the base of what we now call the Cambrian. Nineteenth-century geologists such as Adam Sedgwick and Roderick Murchison used the fossils for dating rock strata, specifically for establishing the Cambrian and Silurian periods. By 1859, leading geologists including Roderick Murchison were convinced that what was then called the lowest Silurian stratum showed the origin of life on Earth, though others, including Charles Lyell, differed. In On the Origin of Species, Darwin considered this sudden appearance of a solitary group of trilobites, with no apparent antecedents, and absent other fossils, to be "undoubtedly of the gravest nature" among the difficulties in his theory of natural selection. He reasoned that earlier seas had swarmed with living creatures, but that their fossils had not been found because of the imperfections of the fossil record. In the sixth edition of his book, he stressed his problem further as:
American paleontologist Charles Walcott, who studied the Burgess Shale fauna, proposed that an interval of time, the "Lipalian", was not represented in the fossil record or did not preserve fossils, and that the ancestors of the Cambrian animals evolved during this time.
Earlier fossil evidence has since been found. The earliest claim is that the history of life on Earth goes back : Rocks of that age at Warrawoona, Australia, were claimed to contain fossil stromatolites, stubby pillars formed by colonies of microorganisms. Fossils (Grypania) of more complex eukaryotic cells, from which all animals, plants and fungi are built, have been found in rocks from , in China and Montana. Rocks dating from contain fossils of the Ediacara biota, organisms so large that they are likely multicelled, but very unlike any modern organism. In 1948, Preston Cloud argued that a period of "eruptive" evolution occurred in the Early Cambrian, but as recently as the 1970s, no sign was seen of how the 'relatively' modern-looking organisms of the Middle and Late Cambrian arose.
The intense modern interest in this "Cambrian explosion" was sparked by the work of Harry B. Whittington and colleagues, who, in the 1970s, reanalysed many fossils from the Burgess Shale and concluded that several were as complex as, but different from, any living animals. The most common organism, Marrella, was clearly an arthropod, but not a member of any known arthropod class. Organisms such as the five-eyed Opabinia and spiny slug-like Wiwaxia were so different from anything else known that Whittington's team assumed they must represent different phyla, seemingly unrelated to anything known today. Stephen Jay Gould's popular 1989 account of this work, Wonderful Life, brought the matter into the public eye and raised questions about what the explosion represented. While differing significantly in details, both Whittington and Gould proposed that all modern animal phyla had appeared almost simultaneously within a rather short span of geological time. This view led to the modernization of Darwin's tree of life and the theory of punctuated equilibrium, which Eldredge and Gould developed in the early 1970s and which views evolution as long intervals of near-stasis "punctuated" by short periods of rapid change.
Other analyses, some more recent and some dating back to the 1970s, argue that complex animals similar to modern types evolved well before the start of the Cambrian.
Dating the Cambrian
Radiometric dates for much of the Cambrian, obtained by analysis of radioactive elements contained within rocks, have only recently become available, and for only a few regions.
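The arithmetic behind such radiometric dates can be sketched as follows. This is a minimal illustration using the well-known half-life of the ²³⁸U→²⁰⁶Pb decay system; the measured daughter/parent ratio below is a hypothetical value chosen for illustration, not data from any particular study.

```python
import math

def radiometric_age(daughter_parent_ratio: float, half_life_yr: float) -> float:
    """Age from the standard decay relation t = (1/lambda) * ln(1 + D/P),
    where lambda = ln(2) / half-life, D = daughter atoms, P = parent atoms."""
    decay_const = math.log(2) / half_life_yr
    return math.log(1 + daughter_parent_ratio) / decay_const

# Hypothetical measurement: a 206Pb/238U ratio of 0.0872, using the
# 238U half-life of ~4.468 billion years. This works out to roughly
# 539 Ma, close to the currently accepted base of the Cambrian (538.8 Ma).
age = radiometric_age(0.0872, 4.468e9)
print(round(age / 1e6, 1))  # age in millions of years
```

In practice, of course, uranium-lead dating uses paired decay systems and corrections for initial lead, but the logarithmic relation above is the core of the method.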
Relative dating (A was before B) is often assumed sufficient for studying processes of evolution, but this, too, has been difficult, because of the problems involved in matching up rocks of the same age across different continents.
Therefore, dates or descriptions of sequences of events should be regarded with some caution until better data become available. In 2004, the start of the Cambrian was dated to 542 Ma. In 2012, it was revised to 541 Ma, and in 2022 it was changed again to 538.8 Ma.
Some theories suggest that the Cambrian explosion occurred during the last stages of the assembly of Gondwana, which formed following the splitting of Rodinia and overlapped with the opening of the Iapetus Ocean between Laurentia and western Gondwana. The largest Cambrian faunal province was located around Gondwana, which extended from low northern latitudes to high southern latitudes, just short of the South Pole. By the middle and later parts of the Cambrian, continued rifting had sent the paleocontinents of Laurentia, Baltica and Siberia on their separate ways.
Body fossils
Fossils of organisms' bodies are usually the most informative type of evidence. Fossilization is a rare event, and most fossils are destroyed by erosion or metamorphism before they can be observed. Hence, the fossil record is very incomplete, increasingly so as earlier times are considered. Despite this, they are often adequate to illustrate the broader patterns of life's history. Also, biases exist in the fossil record: different environments are more favourable to the preservation of different types of organism or parts of organisms. Further, only the parts of organisms that were already mineralised are usually preserved, such as the shells of molluscs. Since most animal species are soft-bodied, they decay before they can become fossilised. As a result, although 30-plus phyla of living animals are known, two-thirds have never been found as fossils.
The Cambrian fossil record includes an unusually high number of lagerstätten, which preserve soft tissues. These allow paleontologists to examine the internal anatomy of animals, which in other sediments are only represented by shells, spines, claws, etc.—if they are preserved at all. The most significant Cambrian lagerstätten are the early Cambrian Maotianshan shale beds of Chengjiang (Yunnan, China) and Sirius Passet (Greenland), the middle Cambrian Burgess Shale (British Columbia, Canada) and the late Cambrian Orsten (Sweden) fossil beds.
While lagerstätten preserve far more than the conventional fossil record, they are far from complete. Because lagerstätten are restricted to a narrow range of environments (where soft-bodied organisms can be preserved very quickly, e.g. by mudslides), most animals are probably not represented; further, the exceptional conditions that create lagerstätten probably do not represent normal living conditions. In addition, the known Cambrian lagerstätten are rare and difficult to date, while Precambrian lagerstätten have yet to be studied in detail.
The sparseness of the fossil record means that organisms usually exist long before they are found in the fossil record—this is known as the Signor–Lipps effect.
In 2019, a "stunning" find of lagerstätten, known as the Qingjiang biota, was reported from the Danshui river in Hubei province, China. More than 20,000 fossil specimens were collected, including many soft bodied animals such as jellyfish, sea anemones and worms, as well as sponges, arthropods and algae. In some specimens the internal body structures were sufficiently preserved that soft tissues, including muscles, gills, mouths, guts and eyes, can be seen. The remains were dated to around 518 Mya and around half of the species identified at the time of reporting were previously unknown.
Trace fossils
Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilized hard parts, and reflects organisms' behaviour. Also, many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. While exact assignment of trace fossils to their makers is generally impossible, traces may, for example, provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms).
Geochemical observations
Several chemical markers indicate a drastic change in the environment around the start of the Cambrian. The markers are consistent with a mass extinction, or with a massive warming resulting from the release of methane ice.
Such changes may reflect a cause of the Cambrian explosion, although they may also have resulted from an increased level of biological activity—a possible result of the explosion. Despite these uncertainties, the geochemical evidence helps by making scientists focus on theories that are consistent with at least one of the likely environmental changes.
Phylogenetic techniques
Cladistics is a technique for working out the "family tree" of a set of organisms. It works by the logic that, if groups B and C have more similarities to each other than either has to group A, then B and C are more closely related to each other than either is to A. Characteristics that are compared may be anatomical, such as the presence of a notochord, or molecular, by comparing sequences of DNA or protein. The result of a successful analysis is a hierarchy of clades—groups whose members are believed to share a common ancestor. The cladistic technique is sometimes problematic, as some features, such as wings or camera eyes, evolved more than once, convergently—this must be taken into account in analyses.
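The grouping logic described above can be shown with a toy example: count shared derived character states between taxa and group the pair that shares the most. The taxa "A", "B", "C" and their character states here are invented for illustration and do not correspond to any real dataset.

```python
# Toy cladistic grouping: 1 = character state present, 0 = absent.
# Each tuple lists four hypothetical characters for a hypothetical taxon.
characters = {
    "A": (1, 0, 0, 0),
    "B": (1, 1, 1, 0),
    "C": (1, 1, 0, 1),
}

def shared(x: str, y: str) -> int:
    """Number of characters present in both taxa."""
    return sum(a == b == 1 for a, b in zip(characters[x], characters[y]))

pairs = [("A", "B"), ("A", "C"), ("B", "C")]
closest = max(pairs, key=lambda p: shared(*p))
print(closest)  # B and C share more characters with each other than with A
```

Real cladistic analyses use parsimony or model-based tree searches over many characters, and must discount convergently evolved features; this sketch only captures the underlying "more shared similarities implies closer relationship" reasoning.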
From the relationships, it may be possible to constrain the date that lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated "family tree" says A was an ancestor of B and C, then A must have evolved more than X million years ago.
It is also possible to estimate how long ago two living clades diverged—i.e. about how long ago their last common ancestor must have lived—by assuming that DNA mutations accumulate at a constant rate. These "molecular clocks", however, are fallible, and provide only a very approximate timing: they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques vary by a factor of two. However, the clocks can give an indication of branching rate, and when combined with the constraints of the fossil record, recent clocks suggest a sustained period of diversification through the Ediacaran and Cambrian.
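A minimal sketch of the molecular-clock arithmetic, assuming a strict (constant-rate) clock: because both lineages accumulate mutations after they split, genetic distance d grows at twice the per-lineage rate r, so divergence time is t = d / (2r). The distances and the fossil calibration date below are hypothetical.

```python
def calibrated_rate(distance: float, known_divergence_yr: float) -> float:
    """Calibrate the substitution rate from a pair of lineages whose
    divergence date is known independently (e.g. from the fossil record)."""
    return distance / (2 * known_divergence_yr)

def divergence_time(distance: float, rate_per_site_per_year: float) -> float:
    """Estimated time since the last common ancestor, in years."""
    return distance / (2 * rate_per_site_per_year)

# Hypothetical calibration: two lineages differing by 0.10 substitutions
# per site, whose split is dated by fossils to 500 million years ago.
r = calibrated_rate(0.10, 500e6)
# Apply that rate to another pair differing by 0.12 substitutions per site:
t = divergence_time(0.12, r)
print(t / 1e6)  # estimated divergence in millions of years
```

The large uncertainties mentioned above arise because real rates vary between lineages and over time, so modern "relaxed clock" methods treat r as a distribution rather than a constant.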
Explanation of key scientific terms
Phylum
A phylum is the highest level in the Linnaean system for classifying organisms. Phyla can be thought of as groupings of animals based on general body plan. Despite the seemingly different external appearances of organisms, they are classified into phyla based on their internal and developmental organizations. For example, despite their obvious differences, spiders and barnacles both belong to the phylum Arthropoda, but earthworms and tapeworms, although similar in shape, belong to different phyla. As chemical and genetic testing becomes more accurate, previously hypothesised phyla are often entirely reworked.
A phylum is not a fundamental division of nature, such as the difference between electrons and protons. It is simply a very high-level grouping in a classification system created to describe all currently living organisms. This system is imperfect, even for modern animals: different books quote different numbers of phyla, mainly because they disagree about the classification of a huge number of worm-like species. As it is based on living organisms, it accommodates extinct organisms poorly, if at all.
Stem group
The concept of stem groups was introduced to cover evolutionary "aunts" and "cousins" of living groups. A crown group is a group of closely related living animals plus their last common ancestor plus all its descendants. A stem group is a set of offshoots from the lineage at a point earlier than the last common ancestor of the crown group; it is a relative concept: for example, tardigrades are living animals that form a crown group in their own right, but Budd (1996) regarded them as also being a stem group relative to the arthropods.
Triploblastic
The term Triploblastic means consisting of three layers, which are formed in the embryo, quite early in the animal's development from a single-celled egg to a larva or juvenile form. The innermost layer forms the digestive tract (gut); the outermost forms skin; and the middle one forms muscles and all the internal organs except the digestive system. Most types of living animal are triploblastic—the best-known exceptions are Porifera (sponges) and Cnidaria (jellyfish, sea anemones, etc.).
Bilaterian
The bilaterians are animals that have right and left sides at some point in their life histories. This implies that they have top and bottom surfaces and, importantly, distinct front and back ends. All known bilaterian animals are triploblastic, and all known triploblastic animals are bilaterian. Living echinoderms (sea stars, sea urchins, sea cucumbers, etc.) 'look' radially symmetrical (like wheels) rather than bilaterian, but their larvae exhibit bilateral symmetry and some of the earliest echinoderms may have been bilaterally symmetrical. Porifera and Cnidaria are radially symmetrical, not bilaterian, and not triploblastic (but the common Bilateria-Cnidaria ancestor's planula larva is suspected to be bilaterally symmetric).
Coelomate
The term Coelomate means having a body cavity (coelom) containing the internal organs. Most of the phyla featured in the debate about the Cambrian explosion are coelomates: arthropods, annelid worms, molluscs, echinoderms and chordates—the noncoelomate priapulids are an important exception. All known coelomate animals are triploblastic bilaterians, but some triploblastic bilaterian animals do not have a coelom—for example flatworms, whose organs are surrounded by unspecialized tissues.
Precambrian life
Evidence of animals around 1 billion years ago
Changes in the abundance and diversity of some types of fossil have been interpreted as evidence for "attacks" by animals or other organisms. Stromatolites, stubby pillars built by colonies of microorganisms, are a major constituent of the fossil record from about , but their abundance and diversity declined steeply after about . This decline has been attributed to disruption by grazing and burrowing animals.
Precambrian marine diversity was dominated by small fossils known as acritarchs. This term describes almost any small organic-walled fossil—from the egg cases of small metazoans to resting cysts of many different kinds of green algae. After appearing around , acritarchs underwent a boom around , increasing in abundance, diversity, size, complexity of shape, and especially size and number of spines. Their increasingly spiny forms in the last 1 billion years may indicate an increased need for defence against predation. Other groups of small organisms from the Neoproterozoic era also show signs of antipredator defenses. A consideration of taxon longevity appears to support an increase in predation pressure around this time.
In general, the fossil record shows a very slow appearance of these lifeforms in the Precambrian, with many cyanobacterial species making up much of the underlying sediment.
Ediacaran organisms
At the start of the Ediacaran period, much of the acritarch fauna, which had remained relatively unchanged for hundreds of millions of years, became extinct, to be replaced with a range of new, larger species, which would prove far more ephemeral. This radiation, the first in the fossil record, is followed soon after by an array of unfamiliar, large fossils dubbed the Ediacara biota, which flourished for 40 million years until the start of the Cambrian. Most of this "Ediacara biota" were at least a few centimeters long, significantly larger than any earlier fossils. The organisms form three distinct assemblages, increasing in size and complexity as time progressed.
Many of these organisms were quite unlike anything that appeared before or since, resembling discs, mud-filled bags, or quilted mattresses—one paleontologist proposed that the strangest organisms should be classified as a separate kingdom, Vendozoa.
At least some may have been early forms of the phyla at the heart of the "Cambrian explosion" debate, having been interpreted as early molluscs (Kimberella), echinoderms (Arkarua) and arthropods (Spriggina, Parvancorina, Yilingia). Still, debate exists about the classification of these specimens, mainly because the diagnostic features that allow taxonomists to classify more recent organisms, such as similarities to living organisms, are generally absent in the ediacarans. However, there seems little doubt that Kimberella was at least a triploblastic bilaterian animal. These organisms are central to the debate about how abrupt the Cambrian explosion was. If some were early members of the animal phyla seen today, the "explosion" looks a lot less sudden than if all these organisms represent an unrelated "experiment", and were replaced by the animal kingdom fairly soon thereafter (40 million years is "soon" by evolutionary and geological standards).
The traces of organisms moving on and directly underneath the microbial mats that covered the Ediacaran sea floor are preserved from the Ediacaran period, about . They were probably made by organisms resembling earthworms in shape, size and how they moved. The burrow-makers have never been found preserved, but, because they would need a head and a tail, the burrowers probably had bilateral symmetry—which would in all probability make them bilaterian animals. They fed above the sediment surface, but were forced to burrow to avoid predators.
Cambrian life
Trace fossils
Trace fossils (burrows, etc.) are a reliable indicator of what life was around, and indicate a diversification of life around the start of the Cambrian, with the freshwater realm colonized by animals almost as quickly as the oceans.
Small shelly fauna
Fossils known as "small shelly fauna" have been found in many parts of the world, and date from just before the Cambrian to about 10 million years after the start of the Cambrian (the Nemakit-Daldynian and Tommotian ages; see timeline). These are a very mixed collection of fossils: spines, sclerites (armor plates), tubes, archeocyathids (sponge-like animals) and small shells very like those of brachiopods and snail-like molluscs—but all tiny, mostly 1 to 2 mm long.
While small, these fossils are far more common than complete fossils of the organisms that produced them; crucially, they cover the window from the start of the Cambrian to the first lagerstätten: a period of time otherwise lacking in fossils. Hence, they supplement the conventional fossil record and allow the fossil ranges of many groups to be extended.
Cnidarians
The first cnidarian larvae, represented by the genus Eolarva, appeared in the Cambrian, although the identity of Eolarva as such is controversial. If it does represent a cnidarian larva, Eolarva would represent the first fossil evidence of indirect development in metazoans in the earliest Cambrian.
Medusozoans developed complex life cycles with a medusa stage during the Cambrian explosion, as evidenced by the discovery of Burgessomedusa phasmiformis.
Trilobites
The earliest trilobite fossils are about 530 million years old, but the class was already quite diverse and cosmopolitan, suggesting they had been around for quite some time.
The fossil record of trilobites began with the appearance of trilobites with mineral exoskeletons—not from the time of their origin.
Crustaceans
Crustaceans, one of the four great modern groups of arthropods, are very rare throughout the Cambrian. Convincing crustaceans were once thought to be common in Burgess Shale-type biotas, but none of these individuals can be shown to fall into the crown group of "true crustaceans". The Cambrian record of crown-group crustaceans comes from microfossils. The Swedish Orsten horizons contain later Cambrian crustaceans, but only organisms smaller than 2 mm are preserved. This restricts the data set to juveniles and miniaturised adults.
A more informative data source is the organic microfossils of the Mount Cap formation, Mackenzie Mountains, Canada. This late Early Cambrian assemblage consists of microscopic fragments of arthropods' cuticle, which are left behind when the rock is dissolved with hydrofluoric acid. The diversity of this assemblage is similar to that of modern crustacean faunas. Analysis of fragments of feeding machinery found in the formation shows that it was adapted to feed in a very precise and refined fashion. This contrasts with most other early Cambrian arthropods, which fed messily by shovelling anything they could get their feeding appendages on into their mouths. This sophisticated and specialised feeding machinery belonged to a large (about 30 cm) organism, and would have provided great potential for diversification: specialised feeding apparatus allows a number of different approaches to feeding and development, and creates a number of different ways of avoiding being eaten.
Echinoderms
The earliest generally accepted echinoderm fossils appeared in the Late Atdabanian; unlike modern echinoderms, these early Cambrian echinoderms were not all radially symmetrical. These provide firm data points for the "end" of the explosion, or at least indications that the crown groups of modern phyla were represented.
Burrowing
Around the start of the Cambrian (about ), many new types of traces first appear, including well-known vertical burrows such as Diplocraterion and Skolithos, and traces normally attributed to arthropods, such as Cruziana and Rusophycus. The vertical burrows indicate that worm-like animals acquired new behaviours, and possibly new physical capabilities. Some Cambrian trace fossils indicate that their makers possessed hard exoskeletons, although they were not necessarily mineralised. Meiofaunal as well as macrofaunal bilaterians participated in this invasion of infaunal niches.
Burrows provide firm evidence of complex organisms; they are also much more readily preserved than body fossils, to the extent that the absence of trace fossils has been used to imply the genuine absence of large, motile, bottom-dwelling organisms. They provide a further line of evidence to show that the Cambrian explosion represents a real diversification, and is not a preservational artefact.
Skeletonisation
The first Ediacaran and lowest Cambrian (Nemakit-Daldynian) skeletal fossils represent tubes and problematic sponge spicules. The oldest sponge spicules are monaxon siliceous, known from the Doushantuo Formation in China and from deposits of the same age in Mongolia, although the interpretation of these fossils as spicules has been challenged. In the late Ediacaran and lowest Cambrian, numerous tube dwellings of enigmatic organisms appeared: organic-walled tubes (e.g. Saarina) and chitinous tubes of the sabelliditids (e.g. Sokoloviina, Sabellidites, Paleolina) prospered up to the beginning of the Tommotian. Mineralized tubes of Cloudina, Namacalathus, Sinotubulites and a dozen other organisms appeared in carbonate rocks formed near the end of the Ediacaran period, as did the triradially symmetrical mineralized tubes of anabaritids (e.g. Anabarites, Cambrotubulus) from the uppermost Ediacaran and lower Cambrian. Ediacaran mineralized tubes are often found in carbonates of stromatolite reefs and thrombolites, i.e. they could live in an environment adverse to the majority of animals.
Although they are as hard to classify as most other Ediacaran organisms, they are important in two other ways. First, they are the earliest known calcifying organisms (organisms that built shells from calcium carbonate). Secondly, these tubes are a device to rise over a substrate and competitors for effective feeding and, to a lesser degree, they serve as armor for protection against predators and adverse conditions of environment. Some Cloudina fossils show small holes in shells. The holes possibly are evidence of boring by predators sufficiently advanced to penetrate shells. A possible "evolutionary arms race" between predators and prey is one of the hypotheses that attempt to explain the Cambrian explosion.
In the lowest Cambrian, the stromatolites were decimated. This allowed animals to begin colonizing warm-water pools with carbonate sedimentation. The first of these fossils were anabaritids and Protohertzina (the fossilized grasping spines of chaetognaths). Mineral skeletons such as shells, sclerites, thorns and plates appeared in the uppermost Nemakit-Daldynian; they belonged to the earliest species of halkieriids, gastropods, hyoliths and other rare organisms. The beginning of the Tommotian has historically been understood to mark an explosive increase in the number and variety of fossils of molluscs, hyoliths and sponges, along with a rich complex of skeletal elements of unknown animals, the first archaeocyathids, brachiopods, tommotiids and others. Soft-bodied extant phyla such as comb jellies, scalidophorans, entoproctans, horseshoe worms and lobopodians also had armored forms. This sudden increase is partially an artefact of missing strata at the Tommotian-type section, and most of this fauna in fact began to diversify in a series of pulses through the Nemakit-Daldynian and into the Tommotian.
Some animals may already have had sclerites, thorns, and plates in the Ediacaran (e.g. Kimberella had hard sclerites, probably of carbonate), but thin carbonate skeletons cannot be fossilized in siliciclastic deposits. Older (~750 Ma) fossils indicate that mineralization long preceded the Cambrian, probably defending small photosynthetic algae from single-celled eukaryotic predators.
Burgess Shale type faunas
The Burgess Shale and similar lagerstätten preserve the soft parts of organisms, providing a wealth of data that aids the classification of enigmatic fossils. These deposits often preserve complete specimens of organisms otherwise known only from dispersed parts, such as loose scales or isolated mouthparts. Further, the majority of organisms and taxa in these horizons are entirely soft-bodied, hence absent from the rest of the fossil record. Since a large part of the ecosystem is preserved, the ecology of the community can also be tentatively reconstructed.
However, the assemblages may represent a "museum": a deep-water ecosystem that is evolutionarily "behind" the rapidly diversifying fauna of shallower waters.
Because the lagerstätten provide a mode and quality of preservation that is virtually absent outside of the Cambrian, many organisms appear completely different from anything known from the conventional fossil record. This led early workers in the field to attempt to shoehorn the organisms into extant phyla; the shortcomings of this approach led later workers to erect a multitude of new phyla to accommodate all the oddballs. It has since been realised that most oddballs diverged from lineages before they established the phyla known today—slightly different designs, which were fated to perish rather than flourish into phyla, as their cousin lineages did.
The preservational mode is rare in the preceding Ediacaran period, but those assemblages known show no trace of animal life—perhaps implying a genuine absence of macroscopic metazoans.
Stages
The early Cambrian interval of diversification lasted for about the next 20–25 million years, and its elevated rates of evolution had ended by the base of Cambrian Series 2, , coincident with the first trilobites in the fossil record.
Different authors define intervals of diversification during the early Cambrian in different ways:
Ed Landing recognizes three stages: Stage 1, spanning the Ediacaran-Cambrian boundary, corresponds to a diversification of biomineralizing animals and of deep and complex burrows; Stage 2 corresponds to the radiation of molluscs and stem-group brachiopods (hyoliths and tommotiids), which apparently arose in intertidal waters; and Stage 3 sees the Atdabanian diversification of trilobites in deeper waters, but little change in the intertidal realm.
Graham Budd synthesises various schemes to produce a compatible view of the SSF record of the Cambrian explosion, divided slightly differently into four intervals: a "Tube world", lasting from , spanning the Ediacaran-Cambrian boundary and dominated by Cloudina, Namacalathus and pseudoconodont-type elements; a "Sclerite world", seeing the rise of halkieriids, tommotiids and hyoliths and lasting to the end of the Fortunian (c. 525 Ma); a "Brachiopod world", perhaps corresponding to the as yet unratified Cambrian Stage 2; and a "Trilobite world", kicking off in Stage 3.
Complementary to the shelly fossil record, trace fossils can be divided into five subdivisions: "Flat world" (late Ediacaran), with traces restricted to the sediment surface; Protreozoic III (after Jensen), with increasing complexity; Pedum world, initiated at the base of the Cambrian with the base of the T. pedum zone (see Cambrian#Dating the Cambrian); Rusophycus world, spanning and thus corresponding exactly to the periods of Sclerite World and Brachiopod World under the SSF paradigm; and Cruziana world, with an obvious correspondence to Trilobite World.
Validity
There is strong evidence for species of Cnidaria and Porifera existing in the Ediacaran and possible members of Porifera even before that, during the Cryogenian. Bryozoans, once thought not to appear in the fossil record until after the Cambrian, are now known from strata of Cambrian Age 3 from Australia and South China.
The fossil record as Darwin knew it seemed to suggest that the major metazoan groups appeared in a few million years of the early to mid-Cambrian, and even in the 1980s, this still appeared to be the case.
However, evidence of Precambrian Metazoa is gradually accumulating. If the Ediacaran Kimberella was a mollusc-like protostome (one of the two main groups of coelomates), the protostome and deuterostome lineages must have split significantly before (deuterostomes are the other main group of coelomates). Even if it is not a protostome, it is widely accepted as a bilaterian. Since fossils of rather modern-looking cnidarians (jellyfish-like organisms) have been found in the Doushantuo lagerstätte, the cnidarian and bilaterian lineages must have diverged well over .
Trace fossils and predatory borings in Cloudina shells provide further evidence of Ediacaran animals. Some fossils from the Doushantuo formation have been interpreted as embryos and one (Vernanimalcula) as a bilaterian coelomate, although these interpretations are not universally accepted. Earlier still, predatory pressure has acted on stromatolites and acritarchs since around .
Some say that the evolutionary change was accelerated by an order of magnitude, but the presence of Precambrian animals somewhat dampens the "bang" of the explosion; not only was the appearance of animals gradual, but their evolutionary radiation ("diversification") may also not have been as rapid as once thought. Indeed, statistical analysis shows that the Cambrian explosion was no faster than any of the other radiations in animals' history. However, it does seem that some innovations linked to the explosion—such as resistant armour—only evolved once in the animal lineage; this makes a lengthy Precambrian animal lineage harder to defend. Further, the conventional view that all the phyla arose in the Cambrian is flawed; while the phyla may have diversified in this time period, representatives of the crown groups of many phyla do not appear until much later in the Phanerozoic. Further, the mineralised phyla that form the basis of the fossil record may not be representative of other phyla, since most mineralised phyla originated in a benthic setting. The fossil record is consistent with a Cambrian explosion that was limited to the benthos, with pelagic phyla evolving much later.
Ecological complexity among marine animals increased in the Cambrian, as well as later in the Ordovician. However, recent research has overthrown the once-popular idea that disparity was exceptionally high throughout the Cambrian before subsequently decreasing. In fact, disparity remained relatively low throughout the Cambrian, with modern levels of disparity only attained after the early Ordovician radiation.
The diversity of many Cambrian assemblages is similar to today's, and at a high (class/phylum) level, diversity is thought by some to have risen relatively smoothly through the Cambrian, stabilizing somewhat in the Ordovician. This interpretation, however, glosses over the astonishing and fundamental pattern of basal polytomy and phylogenetic telescoping at or near the Cambrian boundary, as seen in most major animal lineages. Thus Harry Blackmore Whittington's questions regarding the abrupt nature of the Cambrian explosion remain, and have yet to be satisfactorily answered.
The Cambrian explosion as survivorship bias
Budd and Mann suggested that the Cambrian explosion was the result of a type of survivorship bias called the "Push of the past". As groups at their origin tend to go extinct, it follows that any long-lived group would have experienced an unusually rapid rate of diversification early on, creating the illusion of a general speed-up in diversification rates. However, rates of diversification could remain at background levels and still generate this sort of effect in the surviving lineages.
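The "push of the past" can be illustrated with a minimal birth–death simulation (the speciation and extinction probabilities below are arbitrary, chosen only to make the bias visible, and are not estimates from the fossil record): among clades that happen to survive, early diversification looks faster than the background rate even though every lineage follows the same rules.

```python
import random

def simulate_clade(birth=0.3, death=0.25, steps=60):
    """Discrete-time birth-death process: each lineage speciates,
    goes extinct, or persists at each step. Returns lineage counts."""
    counts = [1]
    n = 1
    for _ in range(steps):
        new_n = 0
        for _ in range(n):
            r = random.random()
            if r < birth:
                new_n += 2      # speciation: one lineage becomes two
            elif r >= birth + death:
                new_n += 1      # lineage persists (otherwise: extinct)
        n = new_n
        counts.append(n)
        if n == 0:
            break
    return counts

random.seed(42)
trials = [simulate_clade() for _ in range(3000)]
survivors = [t for t in trials if t[-1] > 0]

# Mean diversity after 10 steps, conditioned on surviving to step 60,
# versus the unconditional expectation (1 + birth - death)^10:
early_surv = sum(t[10] for t in survivors) / len(survivors)
expected = (1 + 0.3 - 0.25) ** 10
print(early_surv > expected)  # survivors look faster early on
```

With these rates most simulated clades die out; the survivors' early trajectories are biased upward, mimicking an apparent early "explosion" with no change in the underlying rates.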
Possible causes
Despite the evidence that moderately complex animals (triploblastic bilaterians) existed before and possibly long before the start of the Cambrian, it seems that the pace of evolution was exceptionally fast in the early Cambrian. Possible explanations for this fall into three broad categories: environmental, developmental and ecological changes. Any explanation must explain both the timing and magnitude of the explosion.
Changes in the environment
Increase in oxygen levels
Earth's earliest atmosphere contained no free oxygen (O2); the oxygen that animals breathe today, both in the air and dissolved in water, is the product of billions of years of photosynthesis. Cyanobacteria were the first organisms to evolve the ability to photosynthesize, introducing a steady supply of oxygen into the environment. Initially, oxygen levels did not increase substantially in the atmosphere. The oxygen quickly reacted with iron and other minerals in the surrounding rock and ocean water. Once a saturation point was reached for the reactions in rock and water, oxygen was able to exist as a gas in its diatomic form. Oxygen levels in the atmosphere increased substantially afterward. As a general trend, the concentration of oxygen in the atmosphere has risen gradually over about the last 2.5 billion years.
Oxygen levels seem to have a positive correlation with diversity in eukaryotes well before the Cambrian period. The last common ancestor of all extant eukaryotes is thought to have lived around 1.8 billion years ago. Around 800 million years ago, there was a notable increase in the complexity and number of eukaryote species in the fossil record. Before the spike in diversity, eukaryotes are thought to have lived in highly sulfidic environments. Sulfide interferes with mitochondrial function in aerobic organisms, limiting the amount of oxygen that could be used to drive metabolism. Oceanic sulfide levels decreased around 800 million years ago, which supports the importance of oxygen in eukaryotic diversity. The increased ventilation of the oceans by sponges, which had already evolved and diversified during the late Neoproterozoic, has been proposed to have increased the availability of oxygen and powered the Cambrian's rapid diversification of multicellular life. Molybdenum isotopes show that increases in biodiversity were strongly correlated with expansion of oxygenated bottom waters in the Early Cambrian, lending support for oxygen as a driver of the Cambrian evolutionary radiation.
The shortage of oxygen might well have prevented the rise of large, complex animals. The amount of oxygen an animal can absorb is largely determined by the area of its oxygen-absorbing surfaces (lungs and gills in the most complex animals; the skin in less complex ones), while the amount needed is determined by its volume, which grows faster than the oxygen-absorbing area if an animal's size increases equally in all directions. An increase in the concentration of oxygen in air or water would increase the size to which an organism could grow without its tissues becoming starved of oxygen. However, members of the Ediacara biota reached metres in length tens of millions of years before the Cambrian explosion. Other metabolic functions may have been inhibited by lack of oxygen, for example the construction of tissue such as collagen, which is required for the construction of complex structures, or the biosynthesis of molecules for the construction of a hard exoskeleton. However, animals were not affected when similar oceanographic conditions occurred in the Phanerozoic; therefore, some see no forcing role of the oxygen level on evolution.
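The surface-to-volume argument can be made concrete with a toy calculation (the rate constants are arbitrary placeholders, not physiological values): for a sphere, oxygen supply scales with the square of the radius and demand with its cube, so the largest sustainable size grows linearly with ambient oxygen.

```python
def max_radius(o2, k_supply=1.0, k_demand=1.0):
    """Largest sphere radius at which O2 supply (proportional to
    surface area times ambient O2) still meets metabolic demand
    (proportional to volume): k_supply*o2*r^2 >= k_demand*r^3."""
    return k_supply * o2 / k_demand

# Doubling ambient O2 doubles the largest body size this budget allows:
print(max_radius(2.0) == 2 * max_radius(1.0))  # True

# Below max_radius supply covers demand; above it, tissues starve:
for r in (0.5, 1.5):
    supply, demand = 1.0 * r**2, r**3
    print(r, supply >= demand)  # True at r=0.5, False at r=1.5
```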
Ozone formation
The amount of ozone (O3) required to shield Earth from biologically lethal UV radiation, wavelengths from 200 to 300 nanometers (nm), is believed to have been in existence around the Cambrian explosion. The presence of the ozone layer may have enabled the development of complex life and life on land, as opposed to life being restricted to the water.
Snowball Earth
In the late Neoproterozoic (extending into the early Ediacaran period), the Earth suffered massive glaciations in which most of its surface was covered by ice. This may have caused a mass extinction, creating a genetic bottleneck; the resulting diversification may have given rise to the Ediacara biota, which appears soon after the last "Snowball Earth" episode.
However, the snowball episodes occurred long before the start of the Cambrian, and it is difficult to see how so much diversity could have been caused by even a series of bottlenecks; the cold periods may even have delayed the evolution of large organisms. Massive rock erosion caused by glaciers during the "Snowball Earth" may nevertheless have deposited nutrient-rich sediments into the oceans, setting the stage for the Cambrian explosion.
Increase in the calcium concentration of the Cambrian seawater
Newer research suggests that volcanically active mid-ocean ridges caused a massive and sudden surge of the calcium concentration in the oceans, making it possible for marine organisms to build skeletons and hard body parts.
Alternatively a high influx of ions could have been provided by the widespread erosion that produced Powell's Great Unconformity.
An increase of calcium may also have been caused by erosion of the Transgondwanan Supermountain that existed at the time of the explosion. The roots of the mountain are preserved in present-day East Africa as an orogen.
Developmental explanations
A range of theories are based on the concept that minor modifications to animals' development as they grow from embryo to adult may have been able to cause very large changes in the final adult form. The Hox genes, for example, control which organs individual regions of an embryo will develop into. For instance, if a certain Hox gene is expressed, a region will develop into a limb; if a different Hox gene is expressed in that region (a minor change), it could develop into an eye instead (a phenotypically major change).
Such a system allows a large range of disparity to appear from a limited set of genes, but such theories linking this with the explosion struggle to explain why the origin of such a development system should by itself lead to increased diversity or disparity. Evidence of Precambrian metazoans combines with molecular data to show that much of the genetic architecture that could feasibly have played a role in the explosion was already well established by the Cambrian.
This apparent paradox is addressed in a theory that focuses on the physics of development. It is proposed that the emergence of simple multicellular forms provided a changed context and spatial scale in which novel physical processes and effects were mobilized by the products of genes that had previously evolved to serve unicellular functions. Morphological complexity (layers, segments, lumens, appendages) arose, in this view, by self-organization.
Horizontal gene transfer has also been identified as a possible factor in the rapid acquisition of the biochemical capability of biomineralization among organisms during this period, based on evidence that the gene for a critical protein in the process was originally transferred from a bacterium into sponges.
Ecological explanations
These focus on the interactions between different types of organism. Some of these hypotheses deal with changes in the food chain; some suggest arms races between predators and prey, and others focus on the more general mechanisms of coevolution. Such theories are well suited to explaining why there was a rapid increase in both disparity and diversity, but they do not explain why the "explosion" happened when it did.
End-Ediacaran mass extinction
Evidence for such an extinction includes the disappearance from the fossil record of the Ediacara biota and shelly fossils such as Cloudina, and the accompanying perturbation in the record. It is suspected that several global anoxic events were responsible for the extinction.
Mass extinctions are often followed by adaptive radiations as existing clades expand to occupy the ecospace emptied by the extinction. However, once the dust had settled, overall disparity and diversity returned to the pre-extinction level in each of the Phanerozoic extinctions.
Anoxia
The late Ediacaran oceans appear to have suffered from anoxia that covered much of the seafloor, which would have given mobile animals able to seek out more oxygen-rich environments an advantage over sessile forms of life.
Increase in sensory and cognitive abilities
Andrew Parker has proposed that predator-prey relationships changed dramatically after eyesight evolved. Prior to that time, hunting and evading were both close-range affairs—smell (chemoreception), vibration and touch were the only senses used. When predators could see their prey from a distance, new defensive strategies were needed. Armor, spines and similar defenses may also have evolved in response to vision. He further observed that, where animals lose vision in unlighted environments such as caves, diversity of animal forms tends to decrease. Nevertheless, many scientists doubt that vision could have caused the explosion. Eyes may well have evolved long before the start of the Cambrian. It is also difficult to understand why the evolution of eyesight would have caused an explosion, since other senses, such as smell and pressure detection, can detect things at a greater distance in the sea than sight can, but the appearance of these other senses apparently did not cause an evolutionary explosion.
One hypothesis posits that the development of increased cognitive abilities during the Cambrian drove diversity increase. This is evidenced by the fact that the novel ecological lifestyles created during the Cambrian required rapid, regular movement, a feature associated with brain-bearing organisms. The increasing complexity of brains, positively correlated with a greater range of motion and sensory abilities, enabled a wider range of novel ecological modes of life to come into being.
Arms races between predators and prey
The ability to avoid or recover from predation often makes the difference between life and death, and is therefore one of the strongest components of natural selection. The pressure to adapt is stronger on the prey than on the predator: if the predator fails to win a contest, it loses a meal; if the prey is the loser, it loses its life.
But there is evidence that predation was rife long before the start of the Cambrian, for example in the increasingly spiny forms of acritarchs, the holes drilled in Cloudina shells, and traces of burrowing to avoid predators. Hence, it is unlikely that the appearance of predation was the trigger for the Cambrian "explosion", although it may well have exerted a strong influence on the body forms that the "explosion" produced. However, the intensity of predation does appear to have increased dramatically during the Cambrian as new predatory "tactics" (such as shell-crushing) emerged. This rise of predation during the Cambrian is confirmed by the temporal pattern of the median predator ratio at genus level in fossil communities covering the Cambrian and Ordovician periods, but this pattern is not correlated with diversification rate. The lack of correlation between predator ratio and diversification over the Cambrian and Ordovician suggests that predators did not trigger the large evolutionary radiation of animals during this interval; the role of predators as triggers of diversification may thus have been limited to the very beginning of the "Cambrian explosion".
Increase in size and diversity of planktonic animals
Geochemical evidence strongly indicates that the total mass of plankton has been similar to modern levels since early in the Proterozoic. Before the start of the Cambrian, their corpses and droppings were too small to fall quickly towards the seabed, since their drag was about the same as their weight. This meant they were destroyed by scavengers or by chemical processes before they reached the sea floor.
Mesozooplankton are plankton of a larger size. Early Cambrian specimens filtered microscopic plankton from the seawater. These larger organisms would have produced droppings and ultimately corpses large enough to fall fairly quickly. This provided a new supply of energy and nutrients to the mid-levels and bottoms of the seas, which opened up a new range of possible ways of life. If any of these remains sank uneaten to the sea floor they could be buried; this would have taken some carbon out of circulation, resulting in an increase in the concentration of breathable oxygen in the seas (carbon readily combines with oxygen).
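The size–sinking argument follows from Stokes' law for small particles. A rough sketch (the particle sizes and excess density below are illustrative values, not measurements) shows why larger faecal pellets and corpses reach the seabed before being recycled:

```python
def stokes_velocity(radius_m, rho_p=1100.0, rho_f=1025.0, mu=1.4e-3, g=9.81):
    """Terminal settling velocity of a small sphere in seawater
    (Stokes' law): v = (2/9) * (rho_p - rho_f) * g * r^2 / mu."""
    return 2.0 / 9.0 * (rho_p - rho_f) * g * radius_m**2 / mu

tiny = stokes_velocity(5e-6)      # ~5 micron cell
pellet = stokes_velocity(250e-6)  # ~250 micron faecal pellet

# Velocity scales with r^2: a 50x larger particle sinks 2500x faster,
# so the pellet falls hundreds of metres per day while the cell barely moves.
print(tiny * 86400, pellet * 86400)  # metres per day
```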
The initial herbivorous mesozooplankton were probably larvae of benthic (seafloor) animals. A larval stage was probably an evolutionary innovation driven by the increasing level of predation at the seafloor during the Ediacaran period.
Metazoans have an amazing ability to increase diversity through coevolution. This means that an organism's traits can lead to traits evolving in other organisms; a number of responses are possible, and a different species can potentially emerge from each one. As a simple example, the evolution of predation may have caused one organism to develop a defence, while another developed motion to flee. This would cause the predator lineage to diverge into two species: one that was good at chasing prey, and another that was good at breaking through defences. Actual coevolution is somewhat more subtle, but, in this fashion, great diversity can arise: three quarters of living species are animals, and most of the rest have formed by coevolution with animals.
Ecosystem engineering
Evolving organisms inevitably change the environment they evolve in. The Devonian colonization of land had planet-wide consequences for sediment cycling and ocean nutrients, and was likely linked to the Devonian mass extinction. A similar process may have occurred on smaller scales in the oceans, with, for example, the sponges filtering particles from the water and depositing them in the mud in a more digestible form; or burrowing organisms making previously unavailable resources available for other organisms.
Burrowing
Increases in burrowing changed the seafloor's geochemistry and led to decreased oxygen in the ocean and increased CO2 levels in the seas and the atmosphere, resulting in global warming for tens of millions of years, and may have been responsible for mass extinctions. But as burrowing became established, it allowed an explosion of its own, for as burrowers disturbed the sea floor, they aerated it, mixing oxygen into the toxic muds. This made the bottom sediments more hospitable and allowed a wider range of organisms to inhabit them—creating new niches and the scope for higher diversity.
Complexity threshold
The explosion may not have been a significant evolutionary event. It may represent a threshold being crossed: for example a threshold in genetic complexity that allowed a vast range of morphological forms to be employed. This genetic threshold may have a correlation to the amount of oxygen available to organisms. Using oxygen for metabolism produces much more energy than anaerobic processes. Organisms that use more oxygen have the opportunity to produce more complex proteins, providing a template for further evolution. These proteins translate into larger, more complex structures that allow organisms better to adapt to their environments. With the help of oxygen, genes that code for these proteins could contribute to the expression of complex traits more efficiently. Access to a wider range of structures and functions would allow organisms to evolve in different directions, increasing the number of niches that could be inhabited. Furthermore, organisms had the opportunity to become more specialized in their own niches.
Relationship with the Great Ordovician Biodiversification Event
After an extinction at the Cambrian–Ordovician boundary, another radiation occurred, which established the taxa that would dominate the Palaeozoic. This event, known as the Great Ordovician Biodiversification Event (GOBE), has been considered a "follow-up" to the Cambrian explosion. Recent studies have suggested that the Cambrian explosion and the GOBE were not two discrete events but one long evolutionary radiation. Analytical study of the Geobiodiversity Database (GBDB) and Paleobiology Database (PBDB) failed to find a statistical basis for separating the two radiations.
Some researchers have proposed the existence of a biodiversity gap during the Furongian, known as the Furongian Gap, separating the Cambrian explosion and the GOBE. Studies of the Guole Konservat-Lagerstätte and similar fossil sites in South China have instead found the Furongian to be a time of rapid biological turnover, making the existence of the Furongian Gap highly controversial.
Uniqueness of the early Cambrian biodiversification
The "Cambrian explosion" can be viewed as two waves of metazoan expansion into empty niches: first, a coevolutionary rise in diversity as animals explored niches on the Ediacaran sea floor, followed by a second expansion in the early Cambrian as they became established in the water column. The rate of diversification seen in the Cambrian phase of the explosion is unparalleled among marine animals: it affected all metazoan clades of which Cambrian fossils have been found. Later radiations, such as those of fish in the Silurian and Devonian periods, involved fewer taxa, mainly with very similar body plans. Although the recovery from the Permian-Triassic extinction started with about as few animal species as the Cambrian explosion, the recovery produced far fewer significantly new types of animals.
Whatever triggered the early Cambrian diversification opened up an exceptionally wide range of previously unavailable ecological niches. Once these were all occupied, limited space existed for such wide-ranging diversifications to occur again, because strong competition existed in all niches and incumbents usually had the advantage. If a wide range of empty niches had remained, clades would have been able to continue diversifying and become disparate enough for us to recognise them as different phyla; when niches are filled, lineages continue to resemble one another long after they diverge, as limited opportunity exists for them to change their life-styles and forms.
There were two similar explosions in the evolution of land plants: after a cryptic history beginning about , land plants underwent a uniquely rapid adaptive radiation during the Devonian period, about . Furthermore, angiosperms (flowering plants) originated and rapidly diversified during the Cretaceous period.
Ocean dynamics

Ocean dynamics define and describe the flow of water within the oceans. Ocean temperature and motion fields can be separated into three distinct layers: mixed (surface) layer, upper ocean (above the thermocline), and deep ocean.
Ocean dynamics has traditionally been investigated by sampling from instruments in situ.
The mixed layer is nearest to the surface and can vary in thickness from 10 to 500 meters. This layer has properties such as temperature, salinity and dissolved oxygen that are uniform with depth, reflecting a history of active turbulence (the atmosphere has an analogous planetary boundary layer). Turbulence is high within the mixed layer but falls to zero at its base, then increases again below the base due to shear instabilities. At extratropical latitudes this layer is deepest in late winter, as a result of surface cooling and winter storms, and quite shallow in summer. Its dynamics are governed by turbulent mixing as well as Ekman transport, exchanges with the overlying atmosphere, and horizontal advection.
The upper ocean, characterized by warm temperatures and active motion, varies in depth from 100 m or less in the tropics and eastern oceans to in excess of 800 meters in the western subtropical oceans. This layer exchanges properties such as heat and freshwater with the atmosphere on timescales of a few years. Below the mixed layer the upper ocean is generally governed by the hydrostatic and geostrophic relationships. Exceptions include the deep tropics and coastal regions.
The deep ocean is both cold and dark with generally weak velocities (although limited areas of the deep ocean are known to have significant recirculations). The deep ocean is supplied with water from the upper ocean in only a few limited geographical regions: the subpolar North Atlantic and several sinking regions around the Antarctic. Because of the weak supply of water to the deep ocean the average residence time of water in the deep ocean is measured in hundreds of years. In this layer as well the hydrostatic and geostrophic relationships are generally valid and mixing is generally quite weak.
Primitive equations
Ocean dynamics are governed by Newton's equations of motion expressed as the Navier-Stokes equations for a fluid element located at (x,y,z) on the surface of our rotating planet and moving at velocity (u,v,w) relative to that surface:
the zonal momentum equation:

$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z} - fv = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \frac{1}{\rho}\frac{\partial \tau^{x}}{\partial z}$$

the meridional momentum equation:

$$\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + w\frac{\partial v}{\partial z} + fu = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \frac{1}{\rho}\frac{\partial \tau^{y}}{\partial z}$$

the vertical momentum equation (assumes the ocean is in hydrostatic balance):

$$\frac{\partial p}{\partial z} = -\rho g$$

the continuity equation (assumes the ocean is incompressible):

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0$$

the temperature equation:

$$\frac{\partial T}{\partial t} + u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} + w\frac{\partial T}{\partial z} = Q$$

the salinity equation:

$$\frac{\partial S}{\partial t} + u\frac{\partial S}{\partial x} + v\frac{\partial S}{\partial y} + w\frac{\partial S}{\partial z} = P-E$$
Here "u" is zonal velocity, "v" is meridional velocity, "w" is vertical velocity, "p" is pressure, "ρ" is density, "T" is temperature, "S" is salinity, "g" is acceleration due to gravity, "τ" is wind stress, and "f" is the Coriolis parameter. "Q" is the heat input to the ocean, while "P-E" is the freshwater input to the ocean.
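As a minimal numerical sketch of how these equations are used, dropping the time-dependent, advective and friction terms leaves the geostrophic balance, which relates horizontal velocity to the horizontal pressure gradient (the pressure gradient and latitude below are illustrative values):

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

def geostrophic_velocity(dpdx, dpdy, lat_deg, rho=1025.0):
    """Geostrophic balance: f*v = (1/rho)*dp/dx, f*u = -(1/rho)*dp/dy."""
    f = coriolis(lat_deg)
    return -dpdy / (rho * f), dpdx / (rho * f)

# A 1 Pa per km zonal pressure gradient at 45 deg N drives a
# northward flow of roughly a centimetre per second:
u, v = geostrophic_velocity(1e-3, 0.0, 45.0)
print(u, v)
```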
Mixed layer dynamics
Mixed layer dynamics are quite complicated; however, in some regions some simplifications are possible. The wind-driven horizontal transport in the mixed layer is approximately described by Ekman Layer dynamics in which vertical diffusion of momentum balances the Coriolis effect and wind stress. This Ekman transport is superimposed on geostrophic flow associated with horizontal gradients of density.
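A sketch of the depth-integrated Ekman transport implied by this balance (the wind stress and Coriolis parameter below are illustrative mid-latitude values):

```python
def ekman_transport(tau_x, tau_y, f, rho=1025.0):
    """Depth-integrated Ekman transport (m^2/s):
    M_x = tau_y/(rho*f), M_y = -tau_x/(rho*f) -- directed 90 degrees
    to the right of the wind stress in the Northern Hemisphere (f > 0)."""
    return tau_y / (rho * f), -tau_x / (rho * f)

# An eastward wind stress of 0.1 N/m^2 with f ~ 1e-4 s^-1 drives
# transport to the south, i.e. to the right of the wind:
mx, my = ekman_transport(0.1, 0.0, 1e-4)
print(mx, my)
```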
Upper ocean dynamics
Horizontal convergences and divergences within the mixed layer, due for example to Ekman transport convergence, impose a requirement that the ocean below the mixed layer move fluid particles vertically. But one of the implications of the geostrophic relationship is that the magnitude of horizontal motion must greatly exceed the magnitude of vertical motion. Thus the weak vertical velocities associated with Ekman transport convergence (measured in meters per day) cause horizontal motion with speeds of 10 centimeters per second or more. The mathematical relationship between vertical and horizontal velocities can be derived by expressing the idea of conservation of angular momentum for a fluid on a rotating sphere. This relationship (with a couple of additional approximations) is known to oceanographers as the Sverdrup relation. Among its implications is the result that the horizontal convergence of Ekman transport observed to occur in the subtropical North Atlantic and Pacific forces southward flow throughout the interior of these two oceans. Western boundary currents (the Gulf Stream and Kuroshio) exist in order to return water to higher latitudes.
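The Sverdrup relation itself reduces to a one-line calculation: depth-integrated meridional transport equals the wind-stress curl divided by ρβ. A sketch with an illustrative subtropical wind-stress curl (the curl value is a placeholder of roughly the right order of magnitude, not an observation):

```python
import math

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
R_EARTH = 6.371e6   # Earth's radius, m

def beta(lat_deg):
    """Meridional gradient of the Coriolis parameter:
    beta = 2 * Omega * cos(latitude) / R."""
    return 2 * OMEGA * math.cos(math.radians(lat_deg)) / R_EARTH

def sverdrup_transport(curl_tau, lat_deg, rho=1025.0):
    """Sverdrup relation: depth-integrated meridional transport
    V = curl_z(tau) / (rho * beta), in m^2/s."""
    return curl_tau / (rho * beta(lat_deg))

# Negative wind-stress curl over the subtropical gyre at 30 deg N
# forces southward (negative) interior flow:
V = sverdrup_transport(-1e-7, 30.0)
print(V)
```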
Minneapolis Skyway System

The Minneapolis Skyway System is an interlinked collection of enclosed pedestrian footbridges that connect various buildings in 80 full city blocks over of Downtown Minneapolis, enabling people to walk in climate-controlled comfort year-round. The skyways are owned by individual buildings in Minneapolis, and as such they do not have uniform opening and closing times. The 9.5 miles of skyway are comparable to the Houston tunnel system and the systems in Canadian cities such as Toronto's PATH, Montreal's Underground City, Calgary's 11-mile Plus 15 system and the 8-mile Edmonton Pedway system.
The Minneapolis skyways connect the second or third floors of various office towers, hotels, banks, corporate and government offices, restaurants, and retail stores to the Nicollet Mall shopping district, the Mayo Clinic Square, and the sports facilities at Target Center, Target Field and U.S. Bank Stadium. Several condominium and apartment complexes are skyway-connected as well, allowing residents to live, work, and shop downtown without having to leave the skyway system.
History and development
The city's first skyways were planned by real estate developer Leslie Park and his architect Edward Baker (Baker Associates) in the early 1960s and built by Crown Iron Works Company of Minneapolis. Feeling pressure from indoor shopping malls such as Southdale Center, Park wanted to create a similar environment in Downtown Minneapolis that would offer a climate-controlled space and a way for pedestrians to move from building to building. He built two skyways connecting the newly constructed Northstar Center building to the Northwestern Bank Building and the Roanoke Building. The skyway to the Northwestern Bank Building was built in 1962, and the skyway to the Roanoke Building followed the next year. The second of these skyways remains in use today and is the system's oldest surviving segment.
The system grew to seven total segments by 1972, though many of the skyways remained disconnected from one another. The construction of the IDS Center in 1972 helped to unify the system. The building featured skyways in all four directions as well as a spacious atrium area called the Crystal Court, allowing it to act as a central hub for the entire system. In 1976, the Downtown Council produced the first formal maps and signage for the system.
The 1987 album Pleased to Meet Me by The Replacements contained a song entitled "Skyway". Inspired by Minneapolis, the song used the skyway as a metaphor for unrequited love.
In 2016, the U.S. Bank Stadium became connected to the Minneapolis skyway via a mixed-use development of office buildings and apartment complexes in Downtown East, Minneapolis.
Notable buildings connected
Fifth Street Towers
Butler Square
IDS Center
Northstar Center
Foshay Tower
U.S. Bank Stadium
Target Center
Target Field
Hawthorne Transportation Center
Mayo Clinic Square
Minneapolis Central Library
Minneapolis Convention Center
University of St. Thomas
Capella Tower
Wells Fargo Center
33 South Sixth/Minneapolis City Center
Two22 (formerly Campbell Mithun Tower)
Ameriprise Financial Center
Hennepin County Government Center
US Bank Plaza
RBC Plaza
US Bancorp Center
AT&T Tower
100 Washington Square
510 Marquette Building
Rand Tower
Guides
Various guides to navigating the system exist, including paper and online maps as well as a mobile app.
| Technology | Bridges | null |
734845 | https://en.wikipedia.org/wiki/Shanghai%20Metro | Shanghai Metro | The Shanghai Metro (Shanghainese: Zaon6he5 Di6thiq7) is a rapid transit system in Shanghai, operating urban and suburban transit services to 14 of its 16 municipal districts and to the neighboring township of Huaqiao, in Kunshan, Jiangsu Province.
Forming the vast majority of the broader, multi-operator Shanghai rail transit network, the Shanghai Metro system is the world's longest metro system by route length and the second largest by number of stations, with 508 stations across 20 lines. It also ranks first in the world by annual ridership, with 3.88 billion rides delivered in 2019. The last daily ridership record was set on 9 March 2024, at 13.39 million rides. Since the pandemic, ridership still routinely stands at over 10 million on an average workday, accounting for 73% of trips on public transport in the city.
History
Opening to the public in 1993, with full-scale construction extending back to 1986, the Shanghai Metro is the third-oldest rapid transit system in mainland China, after the Beijing Subway and the Tianjin Metro. Though actual construction and inauguration of the Shanghai Metro came after its counterparts in Beijing and Tianjin, their initial planning dates back to the same period, the late 1950s and early 1960s, before the impact of the Cultural Revolution.
The system saw its most rapid expansion during the years leading up to the 2010 World Expo, namely, between 2003 and 2010. Between 2007 and 2010, it was customary for new lines and extensions to open on an annual basis. The system is still expanding, with the most recent expansions opening in early 2024, and several new lines and extensions under construction.
1950-1965: Proposals and early groundbreaking
The first proposal of a subway system for Shanghai dates back to the year 1950. Against the backdrop of the air raids of Shanghai by the retreating Nationalist forces in that year, a team of Soviet technical specialists visiting the city made a proposal to the Municipal Committee on Urban Planning and Design for a dual-purpose underground railway system, to be used for mass transit during peace times, and as shelter facility in times of war. It was later, in 1953, during confidential consultations held with Soviet urban planning specialists by Li Gancheng, the then-Deputy Chief and Party Secretary of the Municipal Construction Committee in Shanghai, that the initial concepts of a north-south line and an east-west line were pencilled on a map of the city, which would later become Line 1 and Line 2. Further consultations and public surveys on transit needs were held in 1959 by a Municipal Planning Committee for Underground Railway, in conjunction with the Municipal Public Utilities Management Bureau, and identified multiple alternative plans for a subway system.
In 1960, with a newly formed Bureau of Tunnel Engineering, the city undertook an experimental shield tunneling project in Tangqiao, Pudong, excavating a tunnel with a 4.2 m-diameter shield for over 100 meters. Dubbed Project 60, this project was carried out in strict confidentiality. In August 1964, the Tunnel Engineering section of the Municipal Urban Construction Bureau completed the route selection phase for the north-south line (later Line 1), which was eventually to connect key locations in the downtown core, including the Shanghai Cultural Square, People's Square and the then Shanghai North railway station, with the rapidly industrializing and urbanizing northern districts of Zhabei and Baoshan, including the industrial zone in Pengpu, the workers' residential area in Zhangmiao, and the town of Wusong. It was also in this period that, in 1965, another experimental project on underground tunnel and station construction was underway in a segment between Hengshan Park and Xiangyang Park, both in Xuhui. However, construction halted during the immediately subsequent Cultural Revolution period, and no systematic plan to build an underground railway system materialized.
The 1980s: Renewed plans and initial construction of Line 1
The economic reforms of the 1980s and the rapidly increasing demand for efficient urban public transit saw a swift resurrection of plans for a rapid rail transit system in Shanghai. In 1983, a "Proposal on the Construction of a North-South Rapid Rail Transit Line," jointly published by the Municipal Planning Committee and the Municipal Construction Committee in collaboration with the Municipal Bureaus of Urban Planning, Public Infrastructure, Railways and Public Works, called for a rail transit line connecting the city center with Minhang and Jinshan in the south-southwest, and with Wusong and Baoshan in the north-northeast, clearly echoing the initial north-south line concept of the 1950s-60s, though now couched in terms of the City's new master plan to "develop both the north and the south wings." Subsequently, in August 1985, a Project Planning Report submitted to the Municipal Planning Committee and the Municipal Committee on Urban and Rural Construction and Management by the Preparation Working Group on the North–South Rapid Rail Transit Line prioritized the Xinlonghua-to-New railway station segment, and made a conclusive case for placing the previously indeterminate middle segment of the line under Huaihai Road. Thus, the first stage of the first underground railway line, later Line 1, was determined.
Formal central government-level approval of both the construction of Line 1 and a long-term system-wide plan for the Shanghai Metro came in 1986. In that year, the State Council approved the Master City Plan of Shanghai (1983–2000), the first-ever such approval by the State Council in the history of Shanghai. Part of that Master Plan included a 40-year phased program that would eventually see the construction of 11 metro lines covering over 325 km by 2025. On August 14, 1986, the State Council approved the "Proposal Concerning Construction of Shanghai City Subway Line from Xinlonghua Station to Shanghai Railway Station," clearing the pathway for the beginning of construction of Line 1.
1993–2002: Inauguration and initial expansion
The southern section of line 1 (four stations) opened on May 28, 1993. The full line (including the middle and northern sections) eventually opened on April 10, 1995, and in its first year it handled an average of 600,000 passengers daily. The first phase of line 2 was inaugurated in June 2000; by 2010, the line linked Hongqiao International Airport (SHA) and Pudong International Airport (PVG). The 25 km Pearl line (line 3) opened for revenue service in 2001. Line 5 opened in 2003. Line 4 joined the network in January 2006 and became a circular line in 2007.
The Master Plan of Shanghai Metro-Region 1999–2020 was approved by the State Council of China on May 11, 2001. The plan had 17 lines in total, containing four intra-city-region express rail lines, eight urban metro lines, and five urban light-rail lines with a total length of about 780 kilometers. The total length of the planned MRT network in the central city will add up to 488 kilometers. In addition, Shanghai will strengthen the development of the suburban rail transport network so that it can link to and coordinate with state rail lines, metro lines, and light railways. One or two rail transport lines are planned between every new city and the central city.
2003–2010: Rapid expansion for the Expo 2010
In 2003, when the network comprised only 3 lines totaling 65 kilometers (with a further 5 lines already under construction), Shanghai was named host city for the World Expo 2010, and plans were made to extend the Metro to 400 kilometers by the time the Expo opened in 2010. The system thereby completed the initial 40-year plan 15 years ahead of schedule.
During Expo 2010 the metro system consisted of 11 lines, 407 km, and 277 stations.
2011–2021: Completion of a master plan
In 2009, Shanghai announced it would have 21 lines operating by 2020, with lines extending further into the suburban areas. By the end of 2021, most of the lines in the plan had opened (with the exception of line 20, the Jiamin line, and the Chongming line), bringing the network to 19 lines (lines 1-18 and the Pujiang line), 802 km, and 516 stations.
On October 16, 2013, with the extension of line 11 into Kunshan in Jiangsu province (about 6.5 km), Shanghai Metro became the first rapid transit system in China to provide cross-provincial service and the second intercity metro after the Guangfo Metro.
2021 onwards: Phase III construction
The National Development and Reform Commission approved the 2018-2023 construction plan for the city's Metro network. The construction of five new metro lines (and two commuter rail lines) and two extensions to existing lines is expected to take five to six years, with construction planned to start before 2023. After completion, there will be 27 metro and commuter rail lines covering 1,154 kilometers.
With the Shanghai Master Plan, 2017-2035, more emphasis was put on other rail transit modes. The plan calls for a comprehensive transportation system built around multimodal rail transit: intercity lines (intercity railway, municipality railway, and express railway), urban lines (subway and light rail), and local lines (modern tramcar, rubber-tired transit systems), each with a length of more than 1,000 km.
By 2035, public transportation will account for over 50% of all trips, and 60% of rail transit stations in the inner areas of the main city will cover the land within 600 m around them. According to the NDRC, the Shanghai Metro network (including commuter rail) will cover 1,642 kilometers in total by 2030 and more than 2,000 kilometers by 2035.
Ridership
Since 1993, the ridership of the entire network has grown as new lines or sections have come into operation. In 1995, the first year of operation, line 1 carried 62 million passengers (an average daily passenger volume of 223,000). Ridership increased by 10% per annum between 2011 and 2016, and by 5% per annum between 2017 and 2019. The reduction in ridership in 2020 was due to COVID-19. Ridership recovered to close to pre-COVID levels in 2021, with ridership of 13.014 million on December 31.
Lines
In service
There are currently 19 lines in operation, with lines and services denoted numerically as well as by characteristic colors, which are used as a visual aid for better distinction on station signage and on the exterior of trains, in the form of a colored block or belt.
Most tracks in the Shanghai Metro system are served by a single service; thus "Line X" usually refers to both the physical line and its service. The only exception is the segment shared by lines 3 and 4, between Hongqiao Road station and Baoshan Road station, where both services use the same tracks and platforms.
Future expansion
The Shanghai Metro system is one of the fastest-growing metro systems in the world. Ambitious expansion plans call for 25 lines by 2025, by which time every location in the central area of Shanghai is planned to be within close reach of a subway station. The Shanghai Metro is connected with the Suzhou Rail Transit system: Suzhou Rail Transit line 11 connects Shanghai Metro line 11 with Suzhou Rail Transit line 3.
Infrastructure
Rolling stock
There are currently over 7,000 railcars in the Shanghai metro system. The train fleet reached 1,000 cars in 2007, 2,000 in 2012, and 3,000 in 2016; the 4,000th car was delivered on December 17, 2016, the 5,000th on July 20, 2018, and the 7,000th on December 25, 2020.
Most lines currently use semi-automatic train operation (STO/GoA2): starting and stopping are automated, but a driver operates the doors, drives the train if needed and handles emergencies. The exceptions are:
Lines 2, 5 and 17: driverless train operation (DTO/GoA3), in which a train attendant operates the doors and drives the train in case of emergency.
Lines 10, 14, 15, 18 and the Pujiang line: unattended train operation (UTO/GoA4), in which starting, stopping and door operation are fully automated with no on-train staff. With a total length of 169 km, this is the world's second-largest fully automated metro network, after the Singapore MRT.
Most lines currently use 6-car sets, with the exceptions being:
The Minhang Development Zone branch of line 5, line 6 and the Pujiang line, which use 4-car sets.
Line 8, most of whose trains use 7-car sets.
Lines 1, 2 and 14, which use 8-car sets.
On most lines the maximum operating speed is , with the exceptions being:
Lines 11 and 17 the maximum operating speed is .
Line 16 the maximum operating speed is .
Pujiang line is the only line using cars with rubber tires running on concrete tracks.
All subway cars have air-conditioning. During the summer of 2021, the first and last carriages on lines 3-5, 10-13, and 15-18 were kept 2 degrees Celsius warmer than the other carriages; the air-conditioning is adjustable per carriage on these lines. The measure aims to address the needs of passengers who find the trains "too cold," especially the elderly and children.
Platform screen doors
Almost all stations have full-height platform screen doors with sliding acrylic glass at the platform edge. Half-height doors, called automatic platform gates, are installed instead at most elevated sections and on the section of line 2 from Songhong Road to Longyang Road. The train stops with its doors lined up with the sliding doors at the platform edge; the platform doors open when the train doors open and remain closed at other times.
During construction of the early lines, provisions were reserved for the installation of platform screen doors, but they were not installed, due to cost considerations and the absence of domestic manufacturers at the time. In the early 2000s, before the screen doors were installed, the annual suicide rate on the Shanghai subway system averaged about eight. In 2003, Shanghai Metro Operation Technology Development Co., Ltd. developed domestic platform screen doors costing only 40% as much as imported ones (which cost over RMB 6 million each to install). , opened December 28, 2004, was the first station to have platform screen doors installed. To help cope with passenger handling, platform safety doors were built into line 4 onwards, and a program for retrofitting older lines was put in place. The retrofitting of existing lines started in November 2005 with line 1 (first station was ), whose core stations had doors by the end of 2006. Originally, platform screen doors were adopted to prevent cool or hot air from leaving the station, reducing electricity usage.
Renewable energy
Shanghai metro started building solar plants in 2013, and the process has accelerated since 2019, with plans to build rooftop solar plants with a total electricity generation capacity of 30 to 50 megawatts between 2021 and 2025. In 2021, it owned, through its subsidiary Shanghai Metro New Energy Co., Ltd., ten rooftop solar plants on depots and parking lots (Chuanyanghe, Zhibei, Jinqiao, Longyang Road, Sanlin, Fujin Road, Zhongchun Road, Beizhai Road, Chentai Road and Pujiang Town), with an average annual power generation of about 23 million kWh. Annual electricity consumption of the Shanghai Metro exceeds 2.5 billion kWh.
Stations
There is cellular phone network coverage across the network. In 2020, all stations gained 5G network coverage. Free WiFi is also provided. There are toilets for passengers in more than 90% of metro stations in Shanghai. The system is 100% wheelchair accessible, with elevators at all stations.
Safety
Riders are subject to searches of their persons and belongings at all stations by security inspectors using metal detectors and X-ray machines. Items banned from public transportation, such as "guns, ammunition, knives, explosives, flammable and radioactive materials, and toxic chemicals," are subject to confiscation.
Stations are equipped with closed-circuit television. Police use it to arrest pickpockets caught on CCTV, for example.
Smoking is strictly prohibited in the metro premises. Bicycles (including folding bikes) and pets (including cats, dogs etc.) are not allowed in stations. The use of skateboards, roller skates and other equipment is not allowed in stations and carriages.
Since April 1, 2020, there is a national ban on "Uncivilized Behavior" on China's Subways, which also includes conduct rules cracking down on bad subway etiquette, such as stepping on seats, lying down on a bench or floor and playing music or videos out loud. It also bans eating and drinking on subway cars nationwide, with exceptions for infants and people with certain medical conditions.
The first AEDs (automated external defibrillators) were installed at Metro stations in 2015, and all metro stations had AEDs by the end of 2021.
Passenger information systems
Plasma screens on the platforms show passengers when the next two trains are coming, along with advertisements and public service announcements. The subway cars contain LCD screens showing advertisements and on some lines, the next stop, while above-ground trains have LED screens showing the next stop. The LED screens are being phased in on line 1 and are also included in lines 7 and 9, two underground lines.
Station signs are in Simplified Chinese and English. There are recorded messages stating the next stop in Mandarin, English, and (on lines 16 and 17 only) Shanghainese, but the messages stating nearby attractions or shops for a given station (a form of paid advertising) are in Mandarin only. The metro operating company is resistant to expanding use of Shanghainese for announcing stops, on the basis that, on most lines, the majority of passengers can understand either Mandarin or English.
The Metro authority has tested a new systematic numbering system for stations on line 10, but did not extend it to other lines.
On December 31, 2009, Shanghai launched a website displaying real-time comprehensive passenger flow information; each station and line is displayed as green (normal operation), yellow (crowded), or red (suspended/not in operation).
Operations
Short turn service patterns
Short turn service patterns exist on all lines except line 16. Partial services serve only a (usually busier) sub-segment of the entire physical line.
Line 11, one of the three lines in the system with a branch, operates a different short-turn service pattern. Trains traveling to and from the branch terminate at Huaqiao Station and Sanlin respectively. Hence, a passenger who wants to travel from the terminus of the branch to Disney Resort, the eastern terminus of the line, must change trains.
Express services
Line 16, unlike the rest of the system, is built with passing loops and operates express and rapid services. The express service was suspended on January 30, 2014, due to a lack of available trains, but resumed on March 21, 2016.
Operating hours and train intervals
Operating hours at most Shanghai metro stations start between 5:00 and 6:00 in the morning and end between 22:30 and 23:00 CST. The current timetable is available on the Shanghai metro website.
Peak-hour train intervals range from 1 minute and 50 seconds on line 9 to 6 minutes on line 18. Lines in the inner sections have train intervals under three minutes during the morning peak and under 3 minutes and 45 seconds during the evening peak. On the more suburban outer sections, and outside peak hours, train intervals are longer.
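As a quick illustration of the headway figures above, throughput in trains per hour is simply 3600 divided by the headway in seconds (a hypothetical helper for illustration, not an operator tool):

```python
def trains_per_hour(headway_seconds: float) -> float:
    """Convert a headway (seconds between trains) into trains per hour."""
    return 3600 / headway_seconds

# Line 9 peak headway of 1 min 50 s (110 s) vs. line 18's 6 min (360 s):
print(round(trains_per_hour(110), 1))  # → 32.7
print(trains_per_hour(360))            # → 10.0
```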
Extended hours on Friday and Saturday
On lines in the city center on Fridays and Saturdays, operating hours are extended by an additional hour.
From April 1, 2017, the operating hours of lines 1, 2, and 7-10 were extended by an hour after the regular last train on Fridays, Saturdays and the last working day before Chinese public holidays. From July 1, 2017, this was extended to lines 1-4 and 5-13. By the end of 2018, all stations in the city center had extended their operating hours past midnight on Fridays and Saturdays.
Since September 30, 2020, extended operation has resumed on lines 1, 2, 9 and 10; since April 30, 2021, extended weekend operation has also resumed on lines 7 and 13.
Extra trains from Hongqiao railway station
From Sunday to Thursday, two trains on each of lines 2 and 10 take passengers from Hongqiao railway station and airport after normal operating hours, stopping only at selected stations.
Owners and operators
Fares and ticket system
Like many other metro systems in the world, Shanghai Metro uses a distance-based fare system. The system operates as a "one-ticket network": passengers can interchange between lines at all interchange stations without purchasing another ticket, provided the transfer stays within the Shanghai Metro system. The exception is a small number of stations where transferring to another line requires leaving the fare zone of one line and entering that of another; since a single-journey ticket is collected on exit, a new single-journey ticket must then be purchased (Shanghai Public Transport Cards are exempt, as they are not collected on exit). The Shanghai Public Transport Card, which allows access to most public transport in Shanghai under one card, is another form of payment.
All stations are equipped with Shanghai public transport card self-service recharge machines, some of which support card sales and card refund operations. Passengers can also choose to purchase public transport cards to travel.
Since 2005, automatic ticket vending machines accepting banknotes have appeared in Shanghai Metro stations. The machines come in "coins only" and "coins and banknotes" varieties: the coin-only machines accept 1-yuan and 0.5-yuan coins, while the others also accept 5-, 10-, 20-, and 50-yuan banknotes alongside 1-yuan and 0.5-yuan coins. Vending machines provide change.
Children under 1.3 meters
One or two children not taller than 1.3 meters (inclusive) ride free when accompanied by a fare-paying passenger; if there are more than two, tickets must be bought for the additional children. A preschool child unattended by an adult is not allowed to take the train alone.
Periodic pass
A pass for unlimited travel within the metro system for either 24 or 72 hours is offered. This pass is not available through vending machines, but has to be purchased at Service Centers at metro stations.
A one-day pass is priced at 18 yuan. It was introduced on April 24, 2010, for the Expo 2010 held in Shanghai.
A three-day pass is priced at 45 yuan. It has been available since March 8, 2012.
A combined Maglev single-trip and metro ticket is priced at 55 yuan; it allows one ride on the Shanghai Maglev Train and unlimited travel within the metro system for 24 hours. A Maglev round-trip and metro ticket is priced at 85 yuan.
Distance-based fare
The base fare is 3 yuan (RMB) for journeys under 6 km, then 1 yuan for each additional 10 km. As of December 2017, the highest fare is 15 yuan (travel between Oriental Land to Dishui Lake, the farthest distance at present). This fare scheme has not changed since September 15, 2005.
When multiple routes are available between an entry and an exit station, the fare is calculated over the shortest route.
The travel time limit is 4 hours; an additional minimum single-journey fare (3 yuan) is charged if the limit is exceeded.
For journeys exclusively from Xinzhuang Station to People's Square Station, the fare is 4 yuan, though the distance between People's Square Station and Xinzhuang Station is about .
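The banded fare scheme above can be sketched as follows (a hypothetical helper; the function name and the "or part thereof" rounding are assumptions, and the Xinzhuang-People's Square special case is omitted):

```python
import math

def single_journey_fare(distance_km: float) -> int:
    """Estimate a Shanghai Metro single-journey fare in yuan.

    Assumed rule: 3 yuan for the first 6 km, then 1 yuan for each
    additional 10 km (or part thereof).
    """
    if distance_km <= 6:
        return 3
    extra_bands = math.ceil((distance_km - 6) / 10)
    return 3 + extra_bands

print(single_journey_fare(5))    # → 3
print(single_journey_fare(16))   # → 4
# A roughly 120 km journey reproduces the quoted 15-yuan maximum:
print(single_journey_fare(120))  # → 15
```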
Refunds for unused tickets are available only at the station's service center on the day of purchase. Refunds can also be processed in the event of a train failure of more than 15 minutes, and the corresponding apology letter can be downloaded from the official website, the WeChat public account and the Metropolis app.
Single-Journey Ticket
Single-Journey tickets can be purchased from ticket vending machines and, at some stations, at a ticket window. Single-ride tickets are embedded with RFID contactless chips. When entering the system, riders tap the ticket against a scanner above the turnstile; when they exit, they insert the ticket into a slot, where it is collected and recycled. This ticket does not cover transfers at a virtual interchange station; passengers must purchase a new ticket when reentering the fare gates.
Public transportation card
In addition to a single-ride ticket, the fare can be paid using a Shanghai public transport card. Transportation cards of other cities that utilize China T-Union can also be used on the Shanghai Metro. This RFID-embedded card can be purchased at selected banks, convenience stores and metro stations with a 20-yuan deposit. The card can be loaded at ticket booths and Service Centers at metro stations, as well as at many small convenience stores and banks throughout the city. The Shanghai Public Transportation Card can also be used to pay for other forms of transportation, such as taxis or buses. Refunds can be obtained at selected stations.
Discounts for SPTC holders:
Cumulative discount: users of the Shanghai public transport card or QR codes get a 10% discount for the rest of the calendar month after spending 70 yuan on metro fares. The discount applies only to journeys made after the threshold is reached; it is not retroactively applied to previous journeys.
Virtual-transfer discount: for transfers at virtual interchange stations, the fare is calculated as one continuous journey. This discount is also applicable to T-Union transportation cards of other cities and the Shanghai Public Transportation QR code.
Combined ride discount: users of the Shanghai public transport card and QR codes get a 1 yuan discount when transferring to the metro within 120 minutes (the 10% monthly discount may be applied after the transfer discount). This discount also applies to bus-to-metro and bus-to-bus transfers and can accumulate over multiple transfers. Depending on the time spent at the destination, the discount may be applied at the start of the return trip as well, making the cost of a round trip 11 yuan instead of the 16 yuan that would normally be charged without the card.
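The stacking of the two discounts above can be sketched like this (a hypothetical helper; the 1-yuan transfer rebate, the 70-yuan threshold and the ordering "transfer discount first, then the 10% monthly discount" follow the stated rules, while the function name and everything else are assumptions):

```python
def discounted_fare(base_fare: float,
                    month_spend_so_far: float,
                    transfer_within_120_min: bool) -> float:
    """Apply SPTC/QR-code discounts to a single metro fare (in yuan)."""
    fare = base_fare
    if transfer_within_120_min:
        fare = max(fare - 1, 0)   # combined ride (transfer) discount
    if month_spend_so_far >= 70:
        fare *= 0.9               # cumulative monthly 10% discount
    return round(fare, 2)

print(discounted_fare(4, 0, False))   # → 4
print(discounted_fare(4, 0, True))    # → 3
print(discounted_fare(4, 80, True))   # → 2.7
```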
Mobile payment methods
Since January 2018, passengers can also pay their Shanghai Metro fares using a mobile phone app, Daduhui (Metro Metropolis). The app requires one to scan a QR code when entering the fare gate at the origin station and again when exiting at the destination station; the fare is then deducted. The system supports Alipay, WeChat Pay and UnionPay, three of the most commonly used mobile payment methods in China.
The Shanghai Public Transportation QR Code, which is accessible through Alipay, WeChat and other applications, has been accepted on the Shanghai Metro since 2022. Use of the QR Code does not require Bluetooth. Furthermore, users of the QR Code are entitled to the same fare discounts as users of a Shanghai Public Transportation Card.
Fare evasion
The officially reported daily fare evasion rate is about 0.16% of total passenger flow. On the Shanghai Metro, fare evasion results in a fine of 6 times the fare.
Shanghai Metro has been cooperating with police to crack down on subway fare evasion. In 2012, the Shanghai Metro reported 202,457 counts of fare evasion, and an additional 472,898 yuan of adjusted fares was collected. On June 3, 2013, the subway operator announced that all evaders would be recorded in the personal credit information system, which may lead to obstacles in loan applications and job hunting. In actual implementation, however, subway law enforcement officers only took this measure against those who refused to make up the fare; and in some stations where fare evasion often occurred, the ticket gates were changed from the original three-bar turnstiles to full gate-style barriers.
Controversies and incidents
Class C cars
In 1999, Shanghai Electric and Alstom signed an agreement to invest 28 million US dollars to establish Shanghai Alstom Transportation Equipment Co., Ltd. and introduce a rail transit train production line in Minhang capable of assembling 300 cars annually. From its establishment, Shanghai Alstom held only the national license to produce C-class cars, with no license to produce A-class cars. At that time, the municipal government stipulated that Shanghai would purchase 300 C-class cars produced by the new company for lines 5, 6, and 8 of the future rail transit network, and the two parties reached an agreement on the purchase of 300 cars. For this reason, the Transportation Research Institute had to "reduce" the predicted passenger flow to accommodate the C-class railcars, allowing a reduction of the stations' civil construction scope for the smaller trains. In the construction of lines 5, 6, and 8, the railcars were not procured through a completed tender, but through an agreement signed after "internal consultation and coordination" between Shentong Group and Shanghai Alstom, a violation of Articles 3 and 4 of the Law of the People's Republic of China on Tendering and Bidding.
The person in charge of the passenger flow forecasting project for line 8 confirmed that the line's passenger flow forecast report was not completed until 2005, after continuous revision. However, in 2003 an agreement was signed for line 8 to be supplied with 168 C-class vehicles; i.e., Shentong Group signed the agreement with Shanghai Alstom two years before the release of the forecast report, and decided then to use the C-class car. The report predicted that passenger flow on line 8 would be about 500,000 passengers per day during the three years from 2007 to 2010, yet the operator used an initial forecast of only 200,000 passengers per day. Line 8 was extremely congested upon opening, even leading to physical conflicts between passengers. In 2010, to deal with the overcrowding, Shanghai Metro hired passenger pushers to assist commuters boarding line 8 trains. Today, line 8 carries over 1 million passengers a day.
The estimated passenger flow of line 6 was more than 105,000 passengers per day, but the highest passenger flow in the first few days of opening reached 150,000. With a headway of 13.5 minutes at opening and only four carriages per train, people had to wait 45 minutes for a ride during peak hours. The relevant departments did not conduct a comprehensive survey of the residents around the proposed line to estimate passenger flow; instead, household registration data, which excludes migrant populations, was used.
Other controversies
In June 2012, Shanghai Metro published a post on Weibo asking women to wear more clothing in public. The post argued that it was not surprising for women to be harassed in the subway if they are wearing revealing clothing and called on women to cherish themselves. This post attracted backlash from women's rights advocates and feminists who called the post misogynistic.
Train collisions and incidents
March 24, 2004: An evening crash occurred at the turnaround line north of Tonghe Xincun station during testing of the northern extension of line 1 (which opened on December 28, 2004). Set 122 sideswiped set 102, leaving car 92113 damaged beyond repair; in 2007 the damaged car was replaced with a new one. Details of the crash have never been released.
December 22, 2009, at 5:58 a.m.: An electrical fault in the tunnel between South Shaanxi Road station and People's Square station caused several trains to stall. The tripping of the power supply catenary was caused by part of the tunnel roof falling and causing a short circuit, and the affected segment was suspended. At 6:54 a.m., while the track was under repair, a low-speed collision occurred between two trains on line 1 near Shanghai Railway Station. As train set 0150 headed for the station's turnaround, the signalling system sent a speed code of 65 km/h instead of 20 km/h, leaving insufficient braking distance: the distance between set 0150 and set 117 was only 118 meters when the signal system finally sent a 0 km/h code. As set 0150 accelerated from 60.5 km/h to 62 km/h, the train driver commenced emergency braking, preventing a more serious collision. The train struck the side of the rear of set 117, which was entering the turnaround track from the down platform in the reverse direction, at 17 km/h. Nobody was injured, but cars 013151, 98033 and 98042 were badly damaged. There were no passengers aboard set 117. Service resumed at 11:48 a.m., though some passengers on set 0150 remained on the train until 11 a.m. The disruption affected millions of morning commuters and occurred during the Dongzhi Festival, when people visit cemeteries to pay tribute to their departed ancestors. At 8:40 p.m. the section was suspended again after a fire broke out at a substation, due to a transformer failure caused by fluctuations in the power supply of the external network.
July 27, 2011, in the evening: After departing its origin station, a line 10 train that was supposed to run to the branch-line terminus was routed to the wrong destination by mistake, and then stopped at a station so that passengers bound for the branch could disembark. The Shanghai Metro claimed that the incident was caused by a "signal debugging failure".
September 27, 2011, at 2:51 p.m., two trains on line 10 collided between Yuyuan Garden station and Laoximen station, injuring 284–300 people. Initial investigations found that train operators violated regulations while operating the trains manually after a loss of power on the line caused its signal system to fail. No deaths were reported.
March 12, 2013, at 16:12: The second car of a line 5 train derailed near a station. There were no casualties, but line 5 ran 48 minutes behind schedule. During the delay, the line was cut back and the remaining section was served by the Jiangchuan Route 1 shuttle bus. The metro operator said the accident was caused by a "signal equipment failure".
December 22, 2024, at 8:00: A line 11 train (set 1173) traveling at 100 km/h between Malu and Wuwei Road collided with a construction crane working on a road connecting the Shanghai–Jiading Expressway and the Jiamin Expressway. Most of the glass in the front carriage shattered and the driver's cab partially collapsed. No one was injured, but the track and train were damaged. The track between Malu and Wuwei Road was closed until 21:30, when all repairs were complete; a replacement bus was offered to commuters until then.
Platform screen door incidents
At the end of July 2006, at about 7:50 p.m. at Shanghai Stadium station on line 4, a middle-aged woman accidentally caught her foot between the train and the platform as a group of passengers swarmed aboard. She pulled her foot out, but her shoe fell under the platform and she suffered skin trauma.
July 15, 2007, at 3:34 p.m.: 47-year-old Sun Mou boarded a southbound line 1 train at Shanghai Indoor Stadium station and got caught between the screen door and the moving train. The train buzzer and screen door lights had been activated while he tried to board, but he was unable to squeeze into the compartment due to crowding. Sun's mother sued Shanghai Metro for 1.18 million yuan. The man was found to be carrying 0.29 grams of heroin, and the drug was detected in his blood. As the platform screen doors had repeatedly caught people, officials said safety switches would be installed on the inside of the screen doors; if someone touches a safety switch, the train is suspended. A laser detection device was also under development to reduce the chance of similar incidents.
July 24, 2007, at 2:41 p.m. A passenger entered a northbound line 1 train at Xujiahui station when the buzzer sounded. His laptop bag was caught between the train door and the screen door and hit the tunnel wall, falling to the roadbed as the train began to move. The laptop was valued at 11,350 yuan and Shanghai Metro was only willing to compensate 2,500 yuan. The passenger took the matter to court, and both parties were ultimately found at fault.
On July 5, 2010, at the Zhongshan Park station a woman died after she was dragged between the train and the platform screen doors when the train started moving, causing her body to collide with a safety barrier.
On June 6, 2018, at People's Square station on line 8 at about 4 p.m. a woman suffered a head injury after she became stuck between the platform screen doors. She later unsuccessfully sued the metro operator for hospital expenses, and claimed she did not hear the door closing chime.
On January 22, 2022, at 16:30, at Qi'an Road station on line 15, an elderly woman was injured, and later died, after becoming trapped between the platform screen door and a train.
Subway culture
Logo
The Shanghai Metro logo is a circular pattern composed of the letters "S" and "M", the initials of "Shanghai Metro", signifying that the subway encircles the city and extends in all directions. The design reflects the speed and convenience of subway transportation and the pace of the subway's development. The logo is red, the font is black, and the background is white:
Red symbolizes the young, vigorous and prosperous Shanghai subway business;
Black symbolizes the firm belief and pursuit of the subway enterprise to shoulder historical responsibilities and perseverance;
White symbolizes the brilliant vision of the subway employees' wisdom, talent and fighting spirit.
Mascot
On February 4, 2010, in the run-up to the 2010 World Expo in Shanghai, the subway mascot named Changchang () was unveiled. The mascot is a boy with red, white and blue as the main colors. Changchang means "happiness, smoothness, and imagination", which not only reflects the happiness that Shanghai subway brings to the city and life, but also reflects the dense network and unimpeded development of the subway throughout the city. It symbolizes its infinite possibilities to meet the diversified future.
Its helmet symbolizes technology and speed, and the subway logo on the helmet reflects the identity of the subway mascot;
The mask is based on the subway cab as the prototype, which represents the concept of operation, and also has the meaning of "leading";
The smiling eyes reflect the kindness and enthusiasm of Shanghai Metro, and it implies smiling service and warm transportation;
The "smooth" raised arms and the outstretched hands symbolize that the subway, as an important means of transportation in Shanghai, welcomes passengers at home and abroad with cordial service;
The feet represent the safety and comfort of the Shanghai subway;
The wheels on the feet symbolize technology and speed.
Other
Unveiled on August 13, 2021, Xujiahui Station has a statue of Liu Jianhang, chief engineer of Shanghai Metro when Line 1 was constructed in 1990.
Museum
Shanghai Metro Museum
Shanghai Maglev Museum
Network map and statistics
The following data displays the system length of Shanghai Metro and the number of stations.
K–Ar dating
Potassium–argon dating, abbreviated K–Ar dating, is a radiometric dating method used in geochronology and archaeology. It is based on measurement of the product of the radioactive decay of an isotope of potassium (K) into argon (Ar). Potassium is a common element found in many materials, such as feldspars, micas, clay minerals, tephra, and evaporites. In these materials, the decay product 40Ar is able to escape the liquid (molten) rock but starts to accumulate when the rock solidifies (recrystallizes). The amount of argon sublimation that occurs is a function of the purity of the sample, the composition of the mother material, and a number of other factors. These factors introduce error limits on the upper and lower bounds of dating, so that the final determination of age is reliant on the environmental factors during formation, melting, and exposure to decreased pressure or open air. Time since recrystallization is calculated by measuring the ratio of the amount of 40Ar accumulated to the amount of 40K remaining. The long half-life of 40K allows the method to be used to calculate the absolute age of samples older than a few thousand years.
The quickly cooled lavas that make nearly ideal samples for K–Ar dating also preserve a record of the direction and intensity of the local magnetic field as the sample cooled past the Curie temperature of iron. The geomagnetic polarity time scale was calibrated largely using K–Ar dating.
Decay series
Potassium naturally occurs in three isotopes: 39K (93.2581%), 40K (0.0117%) and 41K (6.7302%). 39K and 41K are stable. The isotope 40K is radioactive; it decays with a half-life of about 1.248 billion years to 40Ca and 40Ar. Conversion to stable 40Ca occurs via electron emission (beta decay) in 89.3% of decay events. Conversion to stable 40Ar occurs via electron capture in the remaining 10.7% of decay events.
Argon, being a noble gas, is a minor component of most rock samples of geochronological interest: it does not bind with other atoms in a crystal lattice. When 40K decays to 40Ar, the argon atom typically remains trapped within the lattice because it is larger than the spaces between the other atoms in a mineral crystal. But it can escape into the surrounding region when the right conditions are met, such as changes in pressure or temperature. 40Ar atoms can diffuse through and escape from molten magma because most crystals have melted and the atoms are no longer trapped. Entrained argon – diffused argon that fails to escape from the magma – may again become trapped in crystals when the magma cools to become solid rock again. After the recrystallization of magma, more 40K will decay and 40Ar will again accumulate, along with the entrained argon atoms, trapped in the mineral crystals. Measurement of the quantity of 40Ar atoms is used to compute the amount of time that has passed since a rock sample has solidified.
Although 40Ca is the favored daughter nuclide, it is rarely useful in dating because calcium is so common in the crust, with 40Ca being the most abundant isotope. Thus, the amount of calcium originally present is not known and can vary enough to confound measurements of the small increases produced by radioactive decay.
Formula
The ratio of the amount of 40Ar to that of 40K is directly related to the time elapsed since the rock was cool enough to trap the Ar by the equation:
t = \frac{t_{1/2}}{\ln 2} \cdot \ln\left(\frac{K_f + \frac{\mathrm{Ar}_f}{0.109}}{K_f}\right),
where:
t is time elapsed
t1/2 is the half-life of 40K
Kf is the amount of 40K remaining in the sample
Arf is the amount of radiogenic 40Ar found in the sample.
The scale factor 0.109 corrects for the unmeasured fraction of 40K which decayed into 40Ca; the sum of the measured 40K and the scaled amount of 40Ar gives the amount of 40K which was present at the beginning of the elapsed time period. In practice, each of these values may be expressed as a proportion of the total potassium present, as only relative, not absolute, quantities are required.
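Evaluating this equation numerically is straightforward. The sketch below is an illustrative Python helper (the function name is invented for this example); it assumes Kf and Arf are expressed in the same units, since only their ratio matters:

```python
import math

def k_ar_age(k_f, ar_f, half_life=1.248e9):
    """Age in years from remaining 40K (k_f) and radiogenic 40Ar (ar_f).

    k_f and ar_f may be in any common unit (e.g. mol/g), since only
    their ratio matters. The 0.109 factor scales the measured 40Ar up
    to the total amount of 40K that has decayed, because only ~11% of
    decays yield 40Ar (the rest yield 40Ca).
    """
    return half_life / math.log(2) * math.log((k_f + ar_f / 0.109) / k_f)

# Sanity check: after exactly one half-life, the 40K that has decayed
# equals the 40K remaining, so ar_f = 0.109 * k_f and the computed age
# equals the half-life.
age = k_ar_age(k_f=1.0, ar_f=0.109)
```

A sample with no radiogenic argon (`ar_f = 0`) correctly yields an age of zero.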
Obtaining the data
To obtain the content ratio of isotopes 40Ar to 40K in a rock or mineral, the amount of Ar is measured by mass spectrometry of the gases released when a rock sample is volatilized in vacuum. The potassium is quantified by flame photometry or atomic absorption spectroscopy.
The amount of 40K is rarely measured directly. Rather, the more common 39K is measured and that quantity is then multiplied by the accepted ratio of 40K/39K (i.e., 0.0117%/93.2581%, see above).
The amount of 36Ar is also measured to assess how much of the total argon is atmospheric in origin.
Assumptions
The following assumptions must be true for computed dates to be accepted as representing the true age of the rock:
The parent nuclide, 40K, decays at a rate independent of its physical state and is not affected by differences in pressure or temperature. This is a well-founded major assumption, common to all dating methods based on radioactive decay. Although changes in the electron capture partial decay constant for 40K possibly may occur at high pressures, theoretical calculations indicate that for pressures experienced within a body the size of the Earth the effects are negligibly small.
The 40K/K ratio in nature is constant, so 40K is rarely measured directly but is assumed to be 0.0117% of the total potassium. Unless some other process is active at the time of cooling, this is a very good assumption for terrestrial samples.
The radiogenic argon measured in a sample was produced by in situ decay of 40K in the interval since the rock crystallized or was recrystallized. Violations of this assumption are not uncommon. Well-known examples of incorporation of extraneous 40Ar include chilled glassy deep-sea basalts that have not completely outgassed preexisting 40Ar*, and the physical contamination of a magma by inclusion of older xenolitic material. The 40Ar–39Ar dating method was developed to measure the presence of extraneous argon.
Great care is needed to avoid contamination of samples by absorption of nonradiogenic 40Ar from the atmosphere. The equation may be corrected by subtracting from the measured 40Ar value the amount present in the air, where 40Ar is 295.5 times more plentiful than 36Ar: 40Ar_decayed = 40Ar_measured − 295.5 × 36Ar_measured.
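This atmospheric correction is simple arithmetic. A minimal sketch (with an invented helper name), assuming all measured 36Ar is atmospheric:

```python
def radiogenic_ar40(ar40_measured, ar36_measured, atm_ratio=295.5):
    """Radiogenic 40Ar after removing the atmospheric component.

    Assumes all measured 36Ar is atmospheric, and that air carries
    295.5 atoms of 40Ar for every atom of 36Ar.
    """
    return ar40_measured - atm_ratio * ar36_measured

# A sample with 1000 units of 40Ar and 1 unit of 36Ar: 295.5 units of
# the 40Ar are attributed to air contamination, leaving 704.5 units.
corrected = radiogenic_ar40(1000.0, 1.0)
```

A sample whose 40Ar/36Ar ratio equals the atmospheric 295.5 yields zero radiogenic argon, i.e. no measurable age signal.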
The sample must have remained a closed system since the event being dated. Thus, there should have been no loss or gain of 40K or 40Ar*, other than by radioactive decay of 40K. Departures from this assumption are quite common, particularly in areas of complex geological history, but such departures can provide useful information that is of value in elucidating thermal histories. A deficiency of 40Ar in a sample of a known age can indicate a full or partial melt in the thermal history of the area. Reliability in the dating of a geological feature is increased by sampling disparate areas which have been subjected to slightly different thermal histories.
Both flame photometry and mass spectrometry are destructive tests, so particular care is needed to ensure that the aliquots used are truly representative of the sample. 40Ar–39Ar dating is a similar technique that compares isotopic ratios from the same portion of the sample to avoid this problem.
Applications
Due to the long half-life of 40K, the technique is most applicable for dating minerals and rocks more than 100,000 years old. For shorter timescales, it is unlikely that enough 40Ar will have had time to accumulate to be accurately measurable. K–Ar dating was instrumental in the development of the geomagnetic polarity time scale. Although it finds the most utility in geological applications, it plays an important role in archaeology. One archeological application has been bracketing the age of archeological deposits at Olduvai Gorge by dating lava flows above and below the deposits. It has also been indispensable in other early east African sites with a history of volcanic activity, such as Hadar, Ethiopia. The K–Ar method continues to have utility in dating clay mineral diagenesis. In 2017, the successful dating of illite formed by weathering was reported. This finding indirectly led to the dating of the strandflat of Western Norway, from where the illite was sampled. Clay minerals are less than 2 μm thick and cannot easily be irradiated for 40Ar–39Ar analysis because 39Ar recoils from the crystal lattice.
In 2013, the K–Ar method was used by the Mars Curiosity rover to date a rock on the Martian surface, the first time a rock has been dated from its mineral ingredients while situated on another planet.
Cellular network
A cellular network or mobile network is a telecommunications network where the link to and from end nodes is wireless and the network is distributed over land areas called cells, each served by at least one fixed-location transceiver (such as a base station). These base stations provide the cell with the network coverage which can be used for transmission of voice, data, and other types of content via radio waves. Each cell's coverage area is determined by factors such as the power of the transceiver, the terrain, and the frequency band being used. A cell typically uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed service quality within each cell.
When joined together, these cells provide radio coverage over a wide geographic area. This enables numerous devices, including mobile phones, tablets, laptops equipped with mobile broadband modems, and wearable devices such as smartwatches, to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the devices are moving through more than one cell during transmission. The design of cellular networks allows for seamless handover, enabling uninterrupted communication when a device moves from one cell to another.
Modern cellular networks utilize advanced technologies such as Multiple Input Multiple Output (MIMO), beamforming, and small cells to enhance network capacity and efficiency.
Cellular networks offer a number of desirable features:
More capacity than a single large transmitter, since the same frequency can be used for multiple links as long as they are in different cells
Mobile devices use less power than a single transmitter or satellite since the cell towers are closer
Larger coverage area than a single terrestrial transmitter, since additional cell towers can be added indefinitely and are not limited by the horizon
Capability of utilizing higher frequency signals (and thus more available bandwidth / faster data rates) that are not able to propagate at long distances
With data compression and multiplexing, several video (including digital video) and audio channels may travel through a higher frequency signal on a single wideband carrier
Major telecommunications providers have deployed voice and data cellular networks over most of the inhabited land area of Earth. This allows mobile phones and other devices to be connected to the public switched telephone network and public Internet access. In addition to traditional voice and data services, cellular networks now support Internet of Things (IoT) applications, connecting devices such as smart meters, vehicles, and industrial sensors.
The evolution of cellular networks from 1G to 5G has progressively introduced faster speeds, lower latency, and support for a larger number of devices, enabling advanced applications in fields such as healthcare, transportation, and smart cities.
Private cellular networks can be used for research or for large organizations and fleets, such as dispatch for local public safety agencies or a taxicab company, as well as for local wireless communications in enterprise and industrial settings such as factories, warehouses, mines, power plants, substations, oil and gas facilities and ports.
Concept
In a cellular radio system, a land area to be supplied with radio service is divided into cells in a pattern dependent on terrain and reception characteristics. These cell patterns roughly take the form of regular shapes, such as hexagons, squares, or circles although hexagonal cells are conventional. Each of these cells is assigned with multiple frequencies (f1 – f6) which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent cells, which would cause co-channel interference.
The increased capacity in a cellular network, compared with a network with a single transmitter, comes from the mobile communication switching system developed by Amos Joel of Bell Labs that permitted multiple callers in a given area to use the same frequency by switching calls to the nearest available cellular tower having that frequency available. This strategy is viable because a given radio frequency can be reused in a different area for an unrelated transmission. In contrast, a single transmitter can only handle one transmission for a given frequency. Inevitably, there is some level of interference from the signal from the other cells which use the same frequency. Consequently, there must be at least one cell gap between cells which reuse the same frequency in a standard frequency-division multiple access (FDMA) system.
Consider the case of a taxi company, where each radio has a manually operated channel selector knob to tune to different frequencies. As drivers move around, they change from channel to channel. The drivers are aware of which frequency approximately covers some area. When they do not receive a signal from the transmitter, they try other channels until finding one that works. The taxi drivers only speak one at a time when invited by the base station operator. This is a form of time-division multiple access (TDMA).
History
The development of radio transmissions by inventor and engineer Guglielmo Marconi in 1895 set the stage for the future of broadcast communication, which would later form the basis of the cellular network.
While Marconi's work was instrumental in the future of the cellular network, the idea to establish a standard cellular phone network was first proposed on December 11, 1947. This proposal was put forward by Douglas H. Ring, a Bell Labs engineer, in an internal memo suggesting the development of a cellular telephone system by AT&T.
The first commercial cellular network, the 1G generation, was launched in Japan by Nippon Telegraph and Telephone (NTT) in 1979, initially in the metropolitan area of Tokyo. However, NTT did not initially commercialize the system; the early launch was motivated by an effort to understand a practical cellular system rather than by an interest to profit from it. In 1981, the Nordic Mobile Telephone system was created as the first network to cover an entire country. The network was released in 1981 in Sweden and Norway, then in early 1982 in Finland and Denmark. Televerket, a state-owned corporation responsible for telecommunications in Sweden, launched the system.
In September 1981, Jan Stenbeck, a financier and businessman, launched Comvik, a new Swedish telecommunications company. Comvik was the first European telecommunications firm to challenge the state's telephone monopoly on the industry. According to some sources, Comvik was the first to launch a commercial automatic cellular system before Televerket launched its own in October 1981. However, at the time of the new network’s release, the Swedish Post and Telecom Authority threatened to shut down the system after claiming that the company had used an unlicensed automatic gear that could interfere with its own networks. In December 1981, Sweden awarded Comvik with a license to operate its own automatic cellular network in the spirit of market competition.
The Bell System had developed cellular technology since 1947 and had cellular networks in operation in Chicago, Illinois, and Dallas, Texas, prior to 1979, but commercial service was delayed by the breakup of the Bell System, with cellular assets transferred to the Regional Bell Operating Companies. It was not until 1983 that commercial cellular service launched in Chicago, under the Bell subsidiary Ameritech, now part of AT&T.
First-generation cellular network technology continued to expand its reach to the rest of the world. In 1990, Millicom Inc., a telecommunications service provider, strategically partnered with Comvik’s international cellular operations to become Millicom International Cellular SA. The company went on to establish a 1G systems foothold in Ghana, Africa under the brand name Mobitel. In 2006, the company’s Ghana operations were renamed to Tigo.
The wireless revolution began in the early 1990s, leading to the transition from analog to digital networks. The MOSFET invented at Bell Labs between 1955 and 1960, was adapted for cellular networks by the early 1990s, with the wide adoption of power MOSFET, LDMOS (RF amplifier), and RF CMOS (RF circuit) devices leading to the development and proliferation of digital wireless mobile networks.
The first commercial digital cellular network, the 2G generation, was launched in 1991. This sparked competition in the sector as the new operators challenged the incumbent 1G analog network operators.
Cell signal encoding
To distinguish signals from several different transmitters, a number of channel access methods have been developed, including frequency-division multiple access (FDMA, used by analog and D-AMPS systems), time-division multiple access (TDMA, used by GSM) and code-division multiple access (CDMA, first used for PCS, and the basis of 3G).
With FDMA, the transmitting and receiving frequencies used by different users in each cell are different from each other. Each cellular call was assigned a pair of frequencies (one for base to mobile, the other for mobile to base) to provide full-duplex operation. The original AMPS systems had 666 channel pairs, 333 each for the CLEC "A" system and ILEC "B" system. The number of channels was expanded to 416 pairs per carrier, but ultimately the number of RF channels limits the number of calls that a cell site could handle. FDMA is a familiar technology to telephone companies, which used frequency-division multiplexing to add channels to their point-to-point wireline plants before time-division multiplexing rendered FDM obsolete.
With TDMA, the transmitting and receiving time slots used by different users in each cell are different from each other. TDMA typically uses digital signaling to store and forward bursts of voice data that are fit into time slices for transmission, and expanded at the receiving end to produce a somewhat normal-sounding voice at the receiver. TDMA must introduce latency (time delay) into the audio signal. As long as the latency time is short enough that the delayed audio is not heard as an echo, it is not problematic. TDMA is a familiar technology for telephone companies, which used time-division multiplexing to add channels to their point-to-point wireline plants before packet switching rendered TDM obsolete.
The principle of CDMA is based on spread spectrum technology developed for military use during World War II and improved during the Cold War into direct-sequence spread spectrum that was used for early CDMA cellular systems and Wi-Fi. DSSS allows multiple simultaneous phone conversations to take place on a single wideband RF channel, without needing to channelize them in time or frequency. Although more sophisticated than older multiple access schemes (and unfamiliar to legacy telephone companies because it was not developed by Bell Labs), CDMA has scaled well to become the basis for 3G cellular radio systems.
Other available methods of multiplexing such as MIMO, a more sophisticated version of antenna diversity, combined with active beamforming provides much greater spatial multiplexing ability compared to original AMPS cells, that typically only addressed one to three unique spaces. Massive MIMO deployment allows much greater channel reuse, thus increasing the number of subscribers per cell site, greater data throughput per user, or some combination thereof. Quadrature Amplitude Modulation (QAM) modems offer an increasing number of bits per symbol, allowing more users per megahertz of bandwidth (and decibels of SNR), greater data throughput per user, or some combination thereof.
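The relationship between QAM constellation size and bits per symbol is a simple base-2 logarithm; the quick illustrative sketch below (helper name invented for this example) shows why each step up the common QAM orders adds two bits per symbol:

```python
import math

def bits_per_symbol(qam_order):
    """Bits carried by one symbol of an M-ary QAM constellation: log2(M)."""
    return int(math.log2(qam_order))

# Each step 16-QAM -> 64-QAM -> 256-QAM -> 1024-QAM adds 2 bits per
# symbol, at the cost of requiring a higher signal-to-noise ratio.
rates = {m: bits_per_symbol(m) for m in (16, 64, 256, 1024)}
```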
Frequency reuse
The key characteristic of a cellular network is the ability to reuse frequencies to increase both coverage and capacity. As described above, adjacent cells must use different frequencies, however, there is no problem with two cells sufficiently far apart operating on the same frequency, provided the masts and cellular network users' equipment do not transmit with too much power.
The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance, D, is calculated as
D = R \sqrt{3N},
where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from 1 to 30 kilometres. The boundaries of the cells can also overlap between adjacent cells, and large cells can be divided into smaller cells.
The frequency reuse factor is the rate at which the same frequency can be used in the network. It is 1/K (or K according to some books) where K is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12, depending on notation).
In case of N sector antennas on the same base station site, each with different direction, the base station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a further division in frequency among N sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM).
If the total available bandwidth is B, each cell can only use a number of frequency channels corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/NK.
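The reuse distance and per-sector bandwidth above can be computed directly. The following sketch (illustrative helper names and example numbers) applies D = R√(3N) and the B/NK division:

```python
import math

def reuse_distance(cell_radius, cells_per_cluster):
    """Minimum distance between co-channel cell centers: D = R * sqrt(3N)."""
    return cell_radius * math.sqrt(3 * cells_per_cluster)

def sector_bandwidth(total_bandwidth, k, sectors_per_site):
    """Bandwidth available to one sector under an N/K reuse pattern: B / (N*K)."""
    return total_bandwidth / (sectors_per_site * k)

# A 7-cell cluster of 2 km cells: co-channel cells must be ~9.17 km apart.
d = reuse_distance(2.0, 7)
# 21 MHz shared over K = 7 cells with N = 3 sectors each: 1 MHz per sector.
bw = sector_bandwidth(21.0, 7, 3)
```

Note that a CDMA system with reuse factor 1 would put the full 21 MHz in every sector, which is the capacity advantage discussed below.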
Code-division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually.
More recently, orthogonal frequency-division multiple access (OFDMA) based systems such as LTE are being deployed with a frequency reuse of 1. Since such systems do not spread the signal across the frequency band, inter-cell radio resource management is important to coordinate resource allocation between different cell sites and to limit the inter-cell interference. There are various means of inter-cell interference coordination (ICIC) already defined in the standard. Coordinated scheduling, multi-site MIMO and multi-site beamforming are other examples of inter-cell radio resource management that might be standardized in the future.
Directional antennas
Cell towers frequently use a directional signal to improve reception in higher-traffic areas. In the United States, the Federal Communications Commission (FCC) limits omnidirectional cell tower signals to 100 watts of power. If the tower has directional antennas, the FCC allows the cell operator to emit up to 500 watts of effective radiated power (ERP).
Although the original cell towers were located at the centers of the cells and created an even, omnidirectional signal, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge. Each tower has three sets of directional antennas aimed in three different directions, covering 120 degrees each (360 degrees in total) and receiving/transmitting into three different cells at different frequencies. This provides a minimum of three channels, and three towers for each cell, and greatly increases the chances of receiving a usable signal from at least one direction.
The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high volume areas.
Cell phone companies also use this directional signal to improve reception along highways and inside buildings like stadiums and arenas.
Broadcast messages and paging
Practically every cellular system has some kind of broadcast mechanism. This can be used directly for distributing information to multiple mobiles. Commonly, for example in mobile telephony systems, the most important use of broadcast information is to set up channels for one-to-one communication between the mobile transceiver and the base station. This is called paging. The three different paging procedures generally adopted are sequential, parallel and selective paging.
The details of the paging process vary somewhat from network to network, but normally the network knows only a limited group of cells in which the phone is located (this group of cells is called a Location Area in the GSM or UMTS system, or a Routing Area if a data packet session is involved; in LTE, cells are grouped into Tracking Areas). Paging takes place by sending the broadcast message to all of those cells. Paging messages can also be used for information transfer, as happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system, where it allows for low downlink latency in packet-based connections.
In LTE/4G, the Paging procedure is initiated by the MME when data packets need to be delivered to the UE.
Paging types supported by the MME are:
Basic.
SGs_CS and SGs_PS.
QCI_1 through QCI_9.
Movement from cell to cell and handing over
In a primitive taxi system, when the taxi moved away from a first tower and closer to a second tower, the taxi driver manually switched from one frequency to another as needed. If communication was interrupted due to a loss of a signal, the taxi driver asked the base station operator to repeat the message on a different frequency.
In a cellular system, as the distributed mobile transceivers move from cell to cell during an ongoing continuous communication, switching from one cell frequency to a different cell frequency is done electronically without interruption and without a base station operator or manual switching. This is called the handover or handoff. Typically, a new channel is automatically selected for the mobile unit on the new base station which will serve it. The mobile unit then automatically switches from the current channel to the new channel and communication continues.
The exact details of the mobile system's move from one base station to the other vary considerably from system to system (see the example below for how a mobile phone network manages handover).
Mobile phone network
The most common example of a cellular network is a mobile phone (cell phone) network. A mobile phone is a portable telephone which receives or makes calls through a cell site (base station) or transmitting tower. Radio waves are used to transfer signals to and from the cell phone.
Modern mobile phone networks use cells because radio frequencies are a limited, shared resource. Cell-sites and handsets change frequency under computer control and use low power transmitters so that the usually limited number of radio frequencies can be simultaneously used by many callers with less interference.
A cellular network is used by the mobile phone operator to achieve both coverage and capacity for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight signal loss and to support a large number of active phones in that area. All of the cell sites are connected to telephone exchanges (or switches), which in turn connect to the public telephone network.
In cities, each cell site may have a range of up to approximately , while in rural areas, the range could be as much as . It is possible that in clear open areas, a user may receive signals from a cell site away. In rural areas with low-band coverage and tall towers, basic voice and messaging service may reach , with limitations on bandwidth and number of simultaneous calls.
Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS (analog), the term "cell phone" is in some regions, notably the US, used interchangeably with "mobile phone". However, satellite phones are mobile phones that do not communicate directly with a ground-based cellular tower but may do so indirectly by way of a satellite.
There are a number of different digital cellular technologies, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). The transition from existing analog to the digital standard followed a very different path in Europe and the US. As a consequence, multiple digital standards surfaced in the US, while Europe and many countries converged towards the GSM standard.
Structure of the mobile phone cellular network
A simple view of the cellular mobile-radio network consists of the following:
A network of radio base stations forming the base station subsystem.
The core circuit switched network for handling voice calls and text
A packet switched network for handling mobile data
The public switched telephone network to connect subscribers to the wider telephony network
This network is the foundation of the GSM system network. There are many functions that are performed by this network in order to make sure customers get the desired service including mobility management, registration, call set-up, and handover.
Any phone connects to the network via an RBS (Radio Base Station) at a corner of the corresponding cell which in turn connects to the Mobile switching center (MSC). The MSC provides a connection to the public switched telephone network (PSTN). The link from a phone to the RBS is called an uplink while the other way is termed downlink.
Radio channels effectively use the transmission medium through the use of the following multiplexing and access schemes: frequency-division multiple access (FDMA), time-division multiple access (TDMA), code-division multiple access (CDMA), and space-division multiple access (SDMA).
Small cells
Small cells, which have a smaller coverage area than base stations, are categorised as follows:
Microcell: less than 2 kilometres
Picocell: less than 200 metres
Femtocell: around 10 metres
Attocell: 1–4 metres
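The categories above can be expressed as a simple classification by coverage radius. This is a minimal sketch; the boundary values follow the approximate figures listed above and are indicative, not standardized:

```python
# Minimal sketch: classify a cell by its approximate coverage radius, using the
# indicative boundaries listed above (these are not standardized thresholds).

def classify_cell(radius_m: float) -> str:
    if radius_m <= 4:
        return "attocell"
    if radius_m <= 10:
        return "femtocell"
    if radius_m <= 200:
        return "picocell"
    if radius_m <= 2000:
        return "microcell"
    return "macrocell"   # anything larger is an ordinary base station cell

print(classify_cell(150))    # picocell
print(classify_cell(5000))   # macrocell
```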
Cellular handover in mobile phone networks
As the phone user moves from one cell area to another cell while a call is in progress, the mobile station will search for a new channel to attach to in order not to drop the call. Once a new channel is found, the network will command the mobile unit to switch to the new channel and at the same time switch the call onto the new channel.
With CDMA, multiple CDMA handsets share a specific radio channel. The signals are separated by using a pseudonoise code (PN code) that is specific to each phone. As the user moves from one cell to another, the handset sets up radio links with multiple cell sites (or sectors of the same site) simultaneously. This is known as "soft handoff" because, unlike with traditional cellular technology, there is no one defined point where the phone switches to the new cell.
In IS-95 inter-frequency handovers and in older analog systems such as NMT, it is typically impossible to test the target channel directly while communicating. In this case, other techniques have to be used, such as pilot beacons in IS-95. This means that there is almost always a brief break in the communication while searching for the new channel, with the risk of an unexpected return to the old channel.
If there is no ongoing communication or the communication can be interrupted, it is possible for the mobile unit to spontaneously move from one cell to another and then notify the base station with the strongest signal.
Cellular frequency choice in mobile phone networks
The effect of frequency on cell coverage means that different frequencies serve better for different uses. Low frequencies, such as 450 MHz NMT, serve very well for countryside coverage. GSM 900 (900 MHz) is suitable for light urban coverage. GSM 1800 (1.8 GHz) starts to be limited by structural walls. UMTS, at 2.1 GHz, is quite similar in coverage to GSM 1800.
Higher frequencies are a disadvantage when it comes to coverage, but a decided advantage when it comes to capacity. Picocells, covering e.g. one floor of a building, become possible, and the same frequency can be used for cells which are practically neighbors.
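The coverage penalty of higher frequencies can be quantified with the standard free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 32.44 with d in km and f in MHz. This is a hedged illustration using textbook physics, not figures from the source; real coverage also depends on terrain, antennas, and building penetration:

```python
import math

# Free-space path loss (d in km, f in MHz). Standard formula; real-world
# coverage also depends on terrain, antenna gain, and building penetration.
def fspl_db(distance_km: float, freq_mhz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_450 = fspl_db(5.0, 450.0)    # NMT-style low band
loss_2100 = fspl_db(5.0, 2100.0)  # UMTS band
print(round(loss_2100 - loss_450, 2))  # 13.38 dB extra loss at 2.1 GHz
```

The frequency-dependent term alone costs 20·log10(2100/450) ≈ 13.4 dB at 2.1 GHz versus 450 MHz, which is why the low band "serves very well for countryside coverage" while higher bands are confined to denser, smaller cells.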
Cell service area may also vary due to interference from transmitting systems, both within and around that cell. This is true especially in CDMA-based systems. The receiver requires a certain signal-to-noise ratio, and the transmitter should not transmit with too much power, in order to avoid causing interference with other transmitters. As the receiver moves away from the transmitter, the power received decreases, so the power control algorithm of the transmitter increases the power it transmits to restore the level of received power. As the interference (noise) rises above the received power from the transmitter, and the power of the transmitter cannot be increased anymore, the signal becomes corrupted and eventually unusable. In CDMA-based systems, the effect of interference from other mobile transmitters in the same cell on coverage area is very marked and has a special name, cell breathing.
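The power control behaviour described above can be sketched as a toy closed-loop model. All thresholds and power levels here are hypothetical illustration, not values from any air-interface specification:

```python
# Toy model of the power control loop described above (all dBm/dB values are
# hypothetical): the transmitter raises power to hold a target received level
# until it hits its hardware maximum, after which the link degrades.

def required_tx_power(path_loss_db: float, target_rx_dbm: float = -100.0,
                      max_tx_dbm: float = 23.0) -> float:
    """Transmit power needed to hit the target, capped at the hardware maximum."""
    return min(target_rx_dbm + path_loss_db, max_tx_dbm)

def link_usable(path_loss_db: float, interference_dbm: float,
                required_snr_db: float = 6.0) -> bool:
    """True while the received signal still clears the interference floor."""
    rx_dbm = required_tx_power(path_loss_db) - path_loss_db
    return rx_dbm - interference_dbm >= required_snr_db

print(link_usable(120.0, interference_dbm=-110.0))  # True: power control keeps up
print(link_usable(130.0, interference_dbm=-110.0))  # False: max power reached
```

In the second call the transmitter would need 30 dBm to hold the target but is capped at 23 dBm, so the received signal falls below the required margin over interference; raising the interference floor shrinks the usable range the same way, which is the mechanism behind cell breathing.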
One can see examples of cell coverage by studying some of the coverage maps provided by real operators on their web sites or by looking at independently crowdsourced maps such as Opensignal or CellMapper. In certain cases they may mark the site of the transmitter; in others, it can be calculated by working out the point of strongest coverage.
A cellular repeater is used to extend cell coverage into larger areas. They range from wideband repeaters for consumer use in homes and offices to smart or digital repeaters for industrial needs.
Cell size
The following table shows the dependency of the coverage area of one cell on the frequency of a CDMA2000 network:
Valence bond theory

In chemistry, valence bond (VB) theory is one of the two basic theories, along with molecular orbital (MO) theory, that were developed to use the methods of quantum mechanics to explain chemical bonding. It focuses on how the atomic orbitals of the dissociated atoms combine to give individual chemical bonds when a molecule is formed. In contrast, molecular orbital theory has orbitals that cover the whole molecule.
History
In 1916, G. N. Lewis proposed that a chemical bond forms through the interaction of two shared bonding electrons, with the representation of molecules as Lewis structures. In the same year, Walther Kossel independently put forward a similar theory of the chemical bond (the octet rule), but his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on Abegg's rule (1904). The chemist Charles Rugeley Bury suggested in 1921 that eight and eighteen electrons in a shell form stable configurations, and proposed that the electron configurations of the transition elements depended upon the valence electrons in their outer shell.
Although there is no mathematical formula in chemistry or quantum mechanics for the arrangement of electrons in an atom, the hydrogen atom can be described by the Schrödinger equation and by matrix mechanics, both derived in 1925. However, it was not until 1927 that the Heitler–London theory was formulated, which for the first time enabled the calculation of the bonding properties of the hydrogen molecule H2 from quantum mechanical considerations. Specifically, Walter Heitler determined how to use Schrödinger's wave equation (1926) to show how two hydrogen atom wavefunctions join together, with plus, minus, and exchange terms, to form a covalent bond. He then called up his associate Fritz London, and they worked out the details of the theory over the course of the night. Later, Linus Pauling used the pair-bonding ideas of Lewis together with Heitler–London theory to develop two other key concepts in VB theory: resonance (1928) and orbital hybridization (1930). According to Charles Coulson, author of the noted 1952 book Valence, this period marks the start of "modern valence bond theory", as contrasted with older valence bond theories, which are essentially electronic theories of valence couched in pre-wave-mechanical terms.
In 1931, Linus Pauling published his landmark paper on valence bond theory, "On the Nature of the Chemical Bond". Building on this article, Pauling's 1939 textbook The Nature of the Chemical Bond would become what some have called the bible of modern chemistry. This book helped experimental chemists to understand the impact of quantum theory on chemistry. However, the later edition of 1959 failed to adequately address problems that appeared to be better understood by molecular orbital theory. The impact of valence bond theory declined during the 1960s and 1970s as molecular orbital theory grew in usefulness, implemented in large digital computer programs. Since the 1980s, the more difficult problems of implementing valence bond theory in computer programs have largely been solved, and valence bond theory has seen a resurgence.
Theory
According to this theory, a covalent bond is formed between two atoms by the overlap of half-filled valence atomic orbitals, each containing one unpaired electron. Valence bond theory describes chemical bonding better than Lewis theory, which states that atoms share or transfer electrons to achieve the octet rule but does not take into account orbital interactions or bond angles, and treats all covalent bonds equally. A valence bond structure resembles a Lewis structure, but when a molecule cannot be fully represented by a single Lewis structure, multiple valence bond structures are used, each representing a specific Lewis structure. This combination of valence bond structures is the main point of resonance theory. Valence bond theory considers that the overlapping atomic orbitals of the participating atoms form a chemical bond; because of the overlap, the electrons are most likely to be found in the bond region. Valence bond theory views bonds as weakly coupled orbitals (small overlap) and is typically easier to employ for ground-state molecules. The core orbitals and electrons remain essentially unchanged during the formation of bonds.
The overlapping atomic orbitals can differ. The two types of overlapping orbitals are sigma and pi. Sigma bonds occur when the orbitals of two shared electrons overlap head-to-head, with the electron density most concentrated between the nuclei. Pi bonds occur when two parallel orbitals overlap side by side. For example, a bond between two s-orbital electrons is a sigma bond, because two spheres are always coaxial. In terms of bond order, single bonds have one sigma bond, double bonds consist of one sigma bond and one pi bond, and triple bonds contain one sigma bond and two pi bonds. However, the atomic orbitals for bonding may be hybrids. Hybridization is a model that describes how atomic orbitals combine to form new orbitals that better match the geometry of molecules. Atomic orbitals that are similar in energy combine to make hybrid orbitals. For example, the carbon in methane (CH4) undergoes sp3 hybridization to form four equivalent orbitals, resulting in a tetrahedral shape. Different types of hybridization, such as sp, sp2, and sp3, correspond to specific molecular geometries (linear, trigonal planar, and tetrahedral), influencing the bond angles observed in molecules. Hybrid orbitals provide additional directionality to sigma bonds, accurately explaining molecular geometries.
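The sigma/pi bookkeeping described above is simple arithmetic: every bond contains exactly one sigma bond, and each unit of bond order beyond the first adds one pi bond. A minimal sketch:

```python
# Minimal sketch of the sigma/pi counting rule stated above: one sigma bond
# per bond, plus one pi bond for each unit of bond order beyond the first.

def sigma_pi_counts(bond_order: int) -> tuple[int, int]:
    """Return (sigma, pi) bond counts for a bond of the given order."""
    if bond_order < 1:
        raise ValueError("bond order must be at least 1")
    return 1, bond_order - 1

print(sigma_pi_counts(1))  # (1, 0) single bond
print(sigma_pi_counts(2))  # (1, 1) double bond
print(sigma_pi_counts(3))  # (1, 2) triple bond
```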
Comparison with MO theory
Valence bond theory complements molecular orbital theory, which does not adhere to the valence bond idea that electron pairs are localized between two specific atoms in a molecule, but holds that they are distributed in sets of molecular orbitals which can extend over the entire molecule. Although both theories describe chemical bonding, molecular orbital theory generally offers a clearer and more reliable framework for predicting magnetic and ionization properties. In particular, MO theory can effectively account for paramagnetism arising from unpaired electrons, whereas VBT struggles to do so. Valence bond theory views the aromatic properties of molecules as due to spin coupling of the π orbitals. This is essentially still the old idea of resonance between the Kekulé and Dewar structures, named after Friedrich August Kekulé von Stradonitz and James Dewar. In contrast, molecular orbital theory views aromaticity as delocalization of the π-electrons. Valence bond treatments are restricted to relatively small molecules, largely due to the lack of orthogonality between valence bond orbitals and between valence bond structures, while molecular orbitals are orthogonal. Additionally, valence bond theory cannot explain electronic transitions and spectroscopic properties as effectively as MO theory. Furthermore, while VBT employs hybridization to explain bonding, it can oversimplify complex bonding situations, limiting its applicability in more intricate molecular geometries such as transition metal compounds. On the other hand, valence bond theory provides a much more accurate picture of the reorganization of electronic charge that takes place when bonds are broken and formed during the course of a chemical reaction. In particular, valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple molecular orbital theory predicts dissociation into a mixture of atoms and ions.
For example, the molecular orbital function for dihydrogen is an equal mixture of the covalent and ionic valence bond structures and so predicts incorrectly that the molecule would dissociate into an equal mixture of hydrogen atoms and hydrogen positive and negative ions.
Computational approaches
Modern valence bond theory replaces the overlapping atomic orbitals by overlapping valence bond orbitals that are expanded over a large number of basis functions, either each centered on one atom to give a classical valence bond picture, or centered on all atoms in the molecule. The resulting energies are more competitive with energies from calculations where electron correlation is introduced based on a Hartree–Fock reference wavefunction. The most recent text is by Shaik and Hiberty.
Applications
An important aspect of the valence bond theory is the condition of maximum overlap, which leads to the formation of the strongest possible bonds. This theory is used to explain the covalent bond formation in many molecules.
For example, in the case of the F2 molecule, the F−F bond is formed by the overlap of the pz orbitals of the two F atoms, each containing an unpaired electron. Since the nature of the overlapping orbitals is different in the H2 and F2 molecules, the bond strength and bond length differ between the two molecules.
In methane (CH4), the carbon atom undergoes sp3 hybridization, allowing it to form four equivalent sigma bonds with hydrogen atoms, resulting in a tetrahedral geometry. Hybridization also explains the equal C-H bond strengths.
In an HF molecule the covalent bond is formed by the overlap of the 1s orbital of H and the 2pz orbital of F, each containing an unpaired electron. Mutual sharing of electrons between H and F results in a covalent bond in HF.
Macramé

Macramé is a form of textile produced using knotting (rather than weaving or knitting) techniques.
The primary knots of macramé are the square (or reef knot) and forms of "hitching": various combinations of half hitches. It was long crafted by sailors, especially in elaborate or ornamental knotting forms, to cover anything from knife handles to bottles to parts of ships.
Cavandoli macramé is one variety that is used to form geometric and free-form patterns like weaving. The Cavandoli style is done mainly in a single knot, the double half-hitch knot. Reverse half hitches are sometimes used to maintain balance when working the left and right halves of a balanced piece.
Leather or fabric belts are another accessory often created via macramé techniques. Most friendship bracelets exchanged among schoolchildren and teens are created using this method. Vendors at theme parks, malls, seasonal fairs, and other public places may sell macramé jewelry or decoration as well.
History
One of the earliest recorded uses of macramé-style knots as decoration appeared in the carvings of the Babylonians and Assyrians. Fringe-like plaiting and braiding adorned the costumes of the time and were captured in their stone statuary.
Arab weavers called this kind of decorated cloth migramah (مِقْرَمة). It involved knotting excess thread along the edges of hand-loomed fabrics such as towels, shawls, and veils into decorative fringes. The word macramé may derive from the Andalusian-Arabic form makramīya, believed to mean "striped towel", "ornamental fringe", or "embroidered veil". Another school of thought holds that it came to Europe from Arabic via the Turkish version makrama, "napkin" or "towel". The decorative fringes also helped to keep flies off camels and horses in northern Africa.
The Muslim conquest of the Iberian Peninsula took the craft to Spain, then Italy, especially in the region of Liguria, then it spread through Europe. In England, it was introduced at the court of Mary II in the late 17th century. Queen Mary taught it to her ladies-in-waiting.
Macramé was most popular in the Victorian era. It adorned most homes in items such as tablecloths, bedspreads and curtains. The popular Sylvia's Book of Macramé Lace (1882) showed how "to work rich trimmings for black and coloured costumes, both for home wear, garden parties, seaside ramblings, and balls—fairylike adornments for household and underlinens ...".
Macramé was a specialty of Genoa, where it was popular in the 19th century; there, "Its roots were in a 16th-century technique of knotting lace known as punto a groppo".
Sailors made macramé objects while not busy at sea, and sold or bartered them when they landed. Nineteenth-century British and American sailors made hammocks, bell fringes, and belts from macramé. They called the process "square knotting" after the knot they used most often. Sailors also called macramé "McNamara's lace".
Macramé's popularity faded, but resurged in the 1970s for making wall hangings, clothing accessories, small jean shorts, bedspreads, tablecloths, draperies, plant hangers and other furnishings. Macramé jewelry became popular in America. Using mainly square knots and granny knots, this jewelry often features handmade glass beads and natural elements such as bone and shell. Necklaces, anklets and bracelets have become popular forms of macramé jewelry. By the early 1980s, macramé again began to fall out of fashion, only to be revived by millennials.
Materials and process
Materials used in macramé include cords made of cotton twine, linen, hemp, jute, leather or yarn. Cords are identified by construction, such as a 3-ply cord, made of three lengths of fibre twisted together. Jewelry is often made in combination of both the knots and various beads (of glass, wood, and so on), pendants or shells. Sometimes 'found' focal points are used for necklaces, such as rings or gemstones, either wire-wrapped to allow for securing or captured in a net-like array of intertwining overhand knots. A knotting board is often used to mount the cords for macramé work. Cords may be held in place using a C-clamp, straight pins, T-pins, U-pins, or upholstery pins.
For larger decorative pieces, such as wall hangings or window coverings, a work of macramé might be started out on a wooden or metal dowel, allowing for a spread of dozens of cords that are easy to manipulate. For smaller projects, push-pin boards are available specifically for macramé, although a simple corkboard works adequately. Many craft stores offer beginners' kits, work boards, beads and materials ranging in price for the casual hobbyist or ambitious crafter.
Camino de Santiago

The Camino de Santiago, or in English the Way of St. James, is a network of pilgrims' ways or pilgrimages leading to the shrine of the apostle James in the cathedral of Santiago de Compostela in Galicia in northwestern Spain, where tradition holds that the remains of the apostle are buried.
As Pope Benedict XVI said, "It is a way sown with so many demonstrations of fervour, repentance, hospitality, art and culture which speak to us eloquently of the spiritual roots of the Old Continent." Many still follow its routes as a form of spiritual path or retreat for their spiritual growth. It is also popular with hikers, cyclists, and organized tour groups.
Created and established after the discovery of the relics of Saint James the Great at the beginning of the 9th century, the Way of St. James became a major pilgrimage route of medieval Christianity from the 10th century onwards. But it was only after the end of the Granada War in 1492, under the reign of the Catholic Monarchs Ferdinand II of Aragon and Isabella I of Castile, that Pope Alexander VI officially declared the Camino de Santiago to be one of the "three great pilgrimages of Christendom", along with Jerusalem and the Via Francigena to Rome.
In 1987, the Camino, which encompasses several routes in Spain, France, and Portugal, was declared the first Cultural Route of the Council of Europe. Since 2013, the Camino has attracted more than 200,000 pilgrims each year, with an annual growth rate of more than 10 percent. Pilgrims come mainly on foot and often from nearby cities, requiring several days of walking to reach Santiago. The French Way gathers two-thirds of the walkers, but other minor routes are experiencing a growth in popularity. The French Way and the Northern routes in Spain were inscribed on the UNESCO World Heritage List, followed by the routes in France in 1998, because of their historical significance for Christianity as a major pilgrimage route and their testimony to the exchange of ideas and cultures across the routes.
Major Christian pilgrimage route
The Way of St. James was one of the most important Christian pilgrimages during the later Middle Ages, and a pilgrimage route on which a plenary indulgence could be earned; other major pilgrimage routes include the Via Francigena to Rome and the pilgrimage to Jerusalem. Legend holds that St James's remains were carried by boat from Jerusalem to northern Spain, where he was buried in what is now the city of Santiago de Compostela (according to Spanish legends, Saint James had spent time preaching the gospel in Spain, but returned to Judaea upon seeing a vision of the Virgin Mary on the bank of the Ebro River).
Pilgrims on the Way can take one of dozens of pilgrimage routes to Santiago de Compostela. Traditionally, as with most pilgrimages, the Way of Saint James begins at one's home and ends at the pilgrimage site. However, a few of the routes are considered main ones. During the Middle Ages, the route was highly travelled. However, the Black Death, the Protestant Reformation, and political unrest in 16th century Europe led to its decline.
Whenever St James's Day (25 July) falls on a Sunday, the cathedral declares a Holy or Jubilee Year. Depending on leap years, Holy Years occur in 5-, 6-, and 11-year intervals. The most recent were 1993, 1999, 2004, 2010 and 2021. The next will be in 2025.
History
Pre-Christian history
The main pilgrimage route to Santiago follows an earlier Roman trade route, which continues to the Atlantic coast of Galicia, ending at Cape Finisterre. Although it is known today that Cape Finisterre, Spain's westernmost point, is not the westernmost point of Europe (Cabo da Roca in Portugal is farther west), the fact that the Romans called it Finisterrae (literally the end of the world or Land's End in Latin) indicates that they viewed it as such. At night, the Milky Way overhead seems to point the way, so the route acquired the nickname "Voie lactée" – the Milky Way in French.
Scallop symbol
The scallop shell, often found on the shores in Galicia, has long been the symbol of the Camino de Santiago. Over the centuries the scallop shell has taken on a variety of meanings, metaphorical, practical, and mythical, even if its relevance may have actually derived from the desire of pilgrims to take home a souvenir.
One myth says that after James's death, his body was transported by a ship piloted by an angel, back to the Iberian Peninsula to be buried in what is now Libredón. As the ship approached land, the wedding of the daughter of Queen Lupa was taking place on shore. The young groom was on horseback, and, upon seeing the ship's approach, his horse got spooked, and horse and rider plunged into the sea. Through miraculous intervention, the horse and rider emerged from the water alive, covered in seashells.
From its connection to the Camino, the scallop shell came to represent pilgrimage, both to a specific shrine as well as to heaven, recalling Hebrews 11:13, identifying that Christians "are pilgrims and strangers on the earth". The scallop shell symbol is used as a waymarker on the Camino, and is commonly seen on pilgrims themselves, who are thereby identified as pilgrims. During the medieval period, the shell was more a proof of completion than a symbol worn during the pilgrimage. The pilgrim's staff is a walking stick used by some pilgrims on the way to the shrine of Santiago de Compostela in Spain. Generally, the stick has a hook so that something may be hung from it; it may have a crosspiece. The usual form of representation is with a hook, but in some the hook is absent. The pilgrim's staff is represented under different forms and is referred to using different names, e.g. a pilgrim's crutch, a crutch-staff. The crutch, perhaps, should be represented with the transverse piece on the top of the staff (like the letter "T") instead of across it.
Medieval route history
The earliest records of visits paid to the shrine at Santiago de Compostela date from the 9th century, in the time of the Kingdom of Asturias and Galicia. The pilgrimage to the shrine became the most renowned medieval pilgrimage, and it became customary for those who returned from Compostela to carry back with them a Galician scallop shell as proof of their completion of the journey. This practice gradually led to the scallop shell becoming the badge of a pilgrim.
The earliest recorded pilgrims from beyond the Pyrenees visited the shrine in the middle of the 11th century, but it seems that it was not until a century later that large numbers of pilgrims from abroad were regularly journeying there. The earliest records of pilgrims that arrived from England belong to the period between 1092 and 1105. However, by the early 12th century the pilgrimage had become a highly organized affair.
One of the great proponents of the pilgrimage in the 12th century was Pope Callixtus II, who started the Compostelan Holy Years.
The daily needs of pilgrims on their way to and from Compostela were met by a series of hospitals. Indeed, these institutions contributed to the development of the modern concept of the 'hospital'. Some Spanish towns still bear the name, such as Hospital de Órbigo. The hospitals were often staffed by Catholic orders and placed under royal protection. Donations were encouraged, but many poorer pilgrims had few clothes and poor health, often barely making it to the next hospital. Because of this, María Ramírez de Medrano founded one of the earliest hospitals of San Juan de Acre in Navarrete, together with a commandery for the protection of pilgrims on the Compostela route.
Romanesque architecture, a new genre of ecclesiastical architecture, was designed with massive archways to cope with huge crowds of the devout.
There was also the sale of the now-familiar paraphernalia of tourism, such as badges and souvenirs. Pilgrims often prayed to Saint Roch whose numerous depictions with the Cross of St James can still be seen along the Way. On the Camino, the cross is often seen with a Pilgrim's scallop to mark the way of the pilgrimage.
The pilgrimage route to Santiago de Compostela was made possible by the protection and freedom provided by the Kingdom of France, from which the majority of pilgrims originated. Enterprising French (including Gascons and other peoples not under the French crown) settled in towns along the pilgrimage routes, where their names appear in the archives. The pilgrims were tended by people like Domingo de la Calzada, who was later recognized as a saint.
Pilgrims walked the Way of St. James, often for months and occasionally years at a time, to arrive at the great church in the main square of Compostela and pay homage to St James. Many arrived with very little due to illness, robbery, or both. Traditionally, pilgrims lay their hands on the pillar just inside the doorway of the cathedral; so many have done this that the stone has been visibly worn away.
The popular Spanish name for the astronomical Milky Way is El Camino de Santiago. According to a common medieval legend, the Milky Way was formed from the dust raised by travelling pilgrims.
First official guide book
The official guide in those times was the Codex Calixtinus. Published around 1140, the 5th book of the codex is still considered the definitive source for many modern guidebooks. Four pilgrimage routes listed in the codex originate in France and converge at Puente la Reina. From there, a well-defined route crosses northern Spain, linking Burgos, Carrión de los Condes, Sahagún, León, Astorga, and Compostela.
Legends of the discovery of the Tomb of St. James
Another legend states that when a hermit saw a bright star shining over a hillside near San Fiz de Solovio, he informed the bishop of Iria Flavia, who found a grave at the site with three bodies inside, one of which, he asserted, was that of St James. Subsequently, the location was called "the field of the star" (Campus Stellae, corrupted to "Compostela").
Another origin myth mentioned in Book IV of the Book of Saint James relates how the saint appeared in a dream to Charlemagne, urging him to liberate his tomb from the Moors and showing him the direction to follow by the route of the Milky Way.
Pilgrimage as penance
The Church employed (and employs) rituals (the sacrament of confession) that can lead to the imposition by a priest of penance, through which the sinner atones for his or her sins. Pilgrimages were deemed to be a suitable form of expiation for sin and long pilgrimages would be imposed as penance for very serious sins. As noted in the Catholic Encyclopedia:
Pilgrimages could also be imposed as judicial punishment for crime, a practice that is still occasionally used today. For example, a tradition in Flanders persists of pardoning and releasing one prisoner every year under the condition that, accompanied by a guard, the prisoner walks to Santiago wearing a heavy backpack.
Enlightenment era
During the American Revolution, John Adams (who would become the second President of the United States) was ordered by Congress to go to Paris to obtain funds for the cause. His ship started leaking and he disembarked with his two sons at Finisterre in 1779. From there, he proceeded to follow the Way of St. James in the reverse direction of the pilgrims' route, in order to get to Paris overland. He did not stop to visit Santiago, which he later regretted. In his autobiography, Adams described the customs and lodgings afforded to St James's pilgrims in the 18th century and he recounted the legend as it was told to him:
Modern-day pilgrimage
Although it is commonly believed that the pilgrimage to Santiago has continued without interruption since the Middle Ages, few modern pilgrimages antedate the 1957 publication of Irish Hispanist and traveller Walter Starkie's The Road to Santiago. The revival of the pilgrimage was supported by the Spanish government of Francisco Franco, much inclined to promote Spain's Catholic history. "It has been only recently (1990s) that the pilgrimage to Santiago regained the popularity it had in the Middle Ages."
Since then, hundreds of thousands (over 300,000 in 2017) of Christian pilgrims and many others set out each year from their homes, or from popular starting points across Europe, to make their way to Santiago de Compostela. Most travel by foot, some by bicycle, and some even travel as their medieval counterparts did, on horseback or by donkey. In addition to those undertaking a religious pilgrimage, many are hikers who walk the route for travel or sport, along with an interest in exploring their own relationship with themselves, other people, nature, and what they perceive as being sacred. Also, many consider the experience a spiritual retreat from modern life.
Routes
Here, only a few routes are named. For a complete list of all the routes (traditional and less so), see: Camino de Santiago (route descriptions).
The Camino Francés, or French Way, is the most popular. The Via Regia is the last portion of the Camino Francés. Historically, because of the Codex Calixtinus, most pilgrims came from France: typically from Arles, Le Puy, Paris, and Vézelay; some from Saint Gilles. Cluny, site of the celebrated medieval abbey, was another important rallying point for pilgrims and, in 2002, it was integrated into the official European pilgrimage route linking Vézelay and Le Puy.
Most Spanish consider the French border in the Pyrenees the natural starting point. By far the most common, modern starting point on the Camino Francés is Saint-Jean-Pied-de-Port, on the French side of the Pyrenees, with Roncesvalles on the Spanish side also being popular. The distance from Roncesvalles to Santiago de Compostela through León is about .
The Camino Primitivo, or Original Way, is the oldest route to Santiago de Compostela, first taken in the 9th century, which begins in Oviedo. It is 320 km (199 miles) long.
Camino Portugués, or Portuguese Way, is the second-most-popular route, starting at the cathedral in Lisbon (for a total of about 610 km) or at the cathedral in Porto in the north of Portugal (for a total of about 227 km), and crossing into Galicia at Valença.
The Camino del Norte, or Northern Way, is also less travelled and starts in the Basque city of Irun on the border with France, or sometimes in San Sebastián. It is a less popular route because of its changes in elevation, whereas the Camino Frances is mostly flat. The route follows the coast along the Bay of Biscay until it nears Santiago. Though it does not pass through as many historic points of interest as the Camino Frances, it has cooler summer weather. The route is believed to have been first used by pilgrims to avoid traveling through the territories occupied by the Muslims in the Middle Ages. From Irun the path is 817 km (508 miles) long.
The Central European Camino was revived after the Fall of the Berlin Wall. Medieval routes such as the Camino Baltico and the Via Regia pass through present-day Poland, reaching as far north as the Baltic states (taking in Vilnius) and eastwards to present-day Ukraine, taking in Lviv, Sandomierz and Kraków.
Accommodation
In Spain, France, and Portugal, pilgrims' hostels with beds in dormitories provide overnight accommodation for pilgrims who hold a credencial (see below). In Spain this type of accommodation is called a refugio or albergue, both of which are similar to youth hostels or hostelries in the French system of gîtes d'étape.
Hostels may be run by a local parish, the local council, private owners, or pilgrims' associations. Occasionally, these refugios are located in monasteries, such as the one in the Monastery of San Xulián de Samos that is run by monks, and the one in Santiago de Compostela.
The final hostel on the route is the famous Hostal de los Reyes Católicos, which lies in the Plaza del Obradoiro across from the Cathedral. It was originally constructed as a hospice and hospital for pilgrims by Queen Isabella I of Castile and King Ferdinand II of Aragon, the Catholic Monarchs. Today it is a luxury 5-star Parador hotel, which still provides free services to a limited number of pilgrims daily.
Credencial or pilgrim's passport
Most pilgrims purchase and carry a document called the credencial, which gives access to overnight accommodation along the route. Also known as the "pilgrim's passport", the credencial is stamped with the official St. James stamp of each town or refugio at which the pilgrim has stayed. It provides pilgrims with a record of where they ate or slept and serves as proof to the Pilgrim's Office in Santiago that the journey was accomplished according to an official route and thus that the pilgrim qualifies to receive a compostela (certificate of completion of the pilgrimage).
Compostela
The compostela is a certificate of accomplishment given to pilgrims on completing the Way. To earn the compostela one needs to walk a minimum of 100 km or cycle at least 200 km. In practice, for walkers, the closest convenient point to start is Sarria, as it has good bus and rail connections to other places in Spain. Pilgrims arriving in Santiago de Compostela who have walked at least the last , or cycled to get there (as indicated on their credencial), and who state that their motivation was at least partially religious, are eligible for the compostela from the Pilgrim's Office in Santiago.
The compostela has been indulgenced since the Early Middle Ages and remains so to this day, during Holy Years. The English translation reads:
The simpler certificate of completion in Spanish for those with non-religious motivation reads:
English translation:
The Pilgrim's Office gives more than 100,000 compostelas each year to pilgrims from more than 100 countries. However, the requirements to earn a compostela ensure that not everyone who walks on the Camino receives one. The requirements for receiving a compostela are:
1) make the Pilgrimage for religious/spiritual reasons, or at least with an attitude of search;
2) do the last 100 km on foot or horseback, or the last 200 km by bicycle;
3) collect a certain number of stamps on a credencial.
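The eligibility rules above can be sketched as a small check. This is only an illustrative toy, not any official system; the function and parameter names are invented for the example.

```python
# Toy sketch of the compostela eligibility rules: at least 100 km on foot
# or horseback, or 200 km by bicycle, with a religious/spiritual motivation
# (or at least an attitude of search). Names are illustrative only.

def qualifies_for_compostela(mode: str, distance_km: float,
                             spiritual_motivation: bool) -> bool:
    minimum_km = {"foot": 100, "horseback": 100, "bicycle": 200}
    if mode not in minimum_km:
        return False  # e.g. arriving by car does not qualify
    return spiritual_motivation and distance_km >= minimum_km[mode]

print(qualifies_for_compostela("foot", 115, True))     # True: e.g. a Sarria start
print(qualifies_for_compostela("bicycle", 150, True))  # False: under 200 km
```

In practice the distances are evidenced by the stamps collected on the credencial, which is why requirement 3 accompanies the distance rules.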
Pilgrim's Mass
A Pilgrim's Mass is held in the Cathedral of Santiago de Compostela each day at 12:00 and 19:30. Pilgrims who received the compostela the day before have their countries of origin and the starting point of their pilgrimage announced at the Mass. The Botafumeiro, one of the largest censers in the world, is operated during certain Solemnities and on every Friday, except Good Friday, at 19:30. Priests administer the Sacrament of Penance, or confession, in many languages. In the Holy Year of 2010 the Pilgrim's Mass was exceptionally held four times a day, at 10:00, 12:00, 18:00, and 19:30, catering for the greater number of pilgrims arriving in the Holy Year.
Pilgrimage as tourism
The Xunta de Galicia (Galicia's regional government) promotes the Way as a tourist activity, particularly in Holy Compostela Years (when 25 July falls on a Sunday). Following Galicia's investment and advertising campaign for the Holy Year of 1993, the number of pilgrims completing the route has risen steadily. The most recent Holy Year was 2021, eleven years after the previous Holy Year of 2010; more than 272,000 pilgrims made the trip during the course of 2010. The next Holy Year pilgrimage will occur in 2027. In 2023, 446,000 pilgrims walked the route.
In film, television & literature
(Chronological)
The pilgrimage is central to the plot of the film The Milky Way (1969), directed by surrealist Luis Buñuel. It is intended to critique the Catholic church, as the modern pilgrims encounter various manifestations of Catholic dogma and heresy.
In Part Four of the novel The Pillars of the Earth (1989), one of the main characters, Aliena, travels the Camino in search of her lost love, Jack, who is also the father to her child. She travels the route from England through France (specifically Tours and Saint Denis) and Spain, eventually reaching Santiago and continuing on to Toledo.
The Naked Pilgrim (2003) documents the journey of art critic and journalist Brian Sewell to Santiago de Compostela for the UK's Channel Five. Travelling by car along the French route, he visited many towns and cities on the way including Paris, Chartres, Roncesvalles, Burgos, León and Frómista. Sewell, a lapsed Catholic, was moved by the stories of other pilgrims and by the sights he saw. The series climaxed with Sewell's emotional response to the Mass at Compostela.
The Way of St. James was the central feature of the film Saint Jacques... La Mecque (2005) directed by Coline Serreau.
In The Way (2010), written and directed by Emilio Estevez, Martin Sheen learns that his son (Estevez) has died early along the route and takes up the pilgrimage in order to complete it on the son's behalf. The film was presented at the Toronto International Film Festival in September 2010 and premiered in Santiago in November 2010.
In series 6 of his PBS travel television series on Europe, Rick Steves covers Northern Spain and the Camino de Santiago.
In 2013, Simon Reeve presented the "Pilgrimage" series on BBC2, in which he followed various pilgrimage routes across Europe, including the Camino de Santiago in episode 2.
In 2014, Lydia B Smith and Future Educational Films released Walking the Camino: Six Ways to Santiago in theatres across the U.S. and Canada. The film features the accounts and perspectives of six pilgrims as they navigate their respective journeys from France to Santiago de Compostela. In 2015, it was distributed worldwide, playing in theatres throughout Europe, Australia, and New Zealand. It has since aired on NPTV and continues to be featured in festivals relating to spirituality, mind and body, travel, and adventure.
In the 2017 movie The Trip to Spain, the Camino de Santiago is discussed when Rob Brydon quizzes Steve Coogan about what the Camino is, prompting an explanation and a brief history of it.
In 2018, series one of BBC Two's Pilgrimage followed this pilgrimage.
Gallery
Selected literature
(Alphabetical by author's surname)
Cloudburst

A cloudburst is an enormous amount of precipitation in a short period of time, sometimes accompanied by hail and thunder, which is capable of creating flood conditions. Cloudbursts can quickly dump large amounts of water; for example, 25 mm of precipitation corresponds to 25,000 metric tons of water per square kilometre (1 inch corresponds to 72,300 short tons over one square mile). However, cloudbursts are infrequent, as they occur only via orographic lift or occasionally when a warm air parcel mixes with cooler air, resulting in sudden condensation. At times, a large amount of runoff from higher elevations is mistakenly conflated with a cloudburst. The term "cloudburst" arose from the notion that clouds were akin to water balloons and could burst, resulting in rapid precipitation. Though this idea has since been disproven, the term remains in use.
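The depth-to-mass equivalence quoted above is simple arithmetic: rainfall depth times area gives a water volume, and water's density of roughly 1,000 kg/m³ converts that volume to mass. This sketch is illustrative only; the function name is invented and not from any meteorological library.

```python
# Convert a rainfall depth over an area into a water mass, reproducing the
# figure in the text: 25 mm of rain over one square kilometre weighs about
# 25,000 metric tons.

WATER_DENSITY_KG_PER_M3 = 1000.0  # fresh water, approximately

def rainfall_mass_tonnes(depth_mm: float, area_km2: float) -> float:
    """Mass of rainwater in metric tons for a depth (mm) over an area (km^2)."""
    depth_m = depth_mm / 1000.0        # mm -> m
    area_m2 = area_km2 * 1_000_000.0   # km^2 -> m^2
    volume_m3 = depth_m * area_m2      # water volume in cubic metres
    mass_kg = volume_m3 * WATER_DENSITY_KG_PER_M3
    return mass_kg / 1000.0            # kg -> metric tons

print(rainfall_mass_tonnes(25, 1))  # 25 mm over 1 km^2 -> 25000.0 tonnes
```

The same arithmetic explains why even a brief cloudburst over a small catchment can deliver enough water mass to produce flash flooding.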
Properties
Rainfall rate equal to or greater than per hour is a cloudburst. However, different definitions are used, e.g. the Swedish weather service SMHI defines the corresponding Swedish term "skyfall" as per minute for short bursts and per hour for longer rainfalls. The associated convective cloud can extend up to a height of above the ground.
During a cloudburst, more than of rain may fall in a few minutes. The results of cloudbursts can be disastrous. Cloudbursts are also responsible for flash flood creation.
Rapid precipitation from cumulonimbus clouds is possible due to the Langmuir precipitation process, in which large droplets grow rapidly by coagulating with smaller droplets that fall slowly. Cloudbursts do not occur only when a cloud collides with a solid body such as a mountain; they can also occur when hot water vapor mixes with cold air, resulting in sudden condensation.
Detection and forecasting
While satellites are extensively useful in detecting large-scale weather systems and rainfall, the area of a cloudburst is usually smaller than the resolution of these satellites' precipitation radars, and hence cloudbursts go undetected. Weather forecast models face a similar challenge in simulating clouds at high resolution. Skillful forecasting of rainfall in hilly regions remains challenging due to uncertainties in the interaction between moisture convergence and the hilly terrain, the cloud microphysics, and the heating-cooling mechanisms at different atmospheric levels.
Record cloudbursts
Locations
Asia
In the Indian subcontinent, a cloudburst usually occurs when a monsoon cloud drifts northwards, from the Bay of Bengal or Arabian Sea across the plains, then onto the Himalayas and bursts, bringing rainfall as high as 75 millimetres per hour.
Bangladesh
In September 2004, mm of rain was recorded in Dhaka in 24 hours.
On June 11, 2007 mm of rain fell in 24 hours in Chittagong.
On July 29, 2009, a record-breaking of rain was recorded in Dhaka in 24 hours; previously, of rain had been recorded on July 13, 1956.
On September 27, 2020, a record-breaking of rain fell in just 12 hours in the city of Rangpur in Northern Bangladesh, producing widespread flooding across the city.
India
On September 28, 1908, a cloudburst resulted in a flood in which the level of the Musi River rose by up to 3.4 meters. About 15,000 people died and around 80,000 houses were destroyed along the banks of the river.
In July 1970, the Alaknanda valley witnessed a major flood, attributed to a cloudburst on the night of 20 July 1970 on the southern mountain front of the Alaknanda valley (between Joshimath and Chamoli). According to one estimate, the floods transported about 15.9 × 10⁶ tonnes of sediment within a day. The catastrophe was so large that it wiped out the remnants of the 1894 Gohna lake. In addition, Belakuchi, a roadside settlement between Pipalkoti and Helong in the Alaknanda valley, was washed away, along with a convoy of 30 buses, by the roaring Alaknanda river. However, around 400 pilgrims en route to Badrinath were saved thanks to the alertness of a police constable who guided them to run uphill. The Alaknanda River in Uttarakhand and its entire river basin, from Hanumanchatti near the pilgrimage town of Badrinath to Haridwar, were affected.
On August 15, 1997, 115 people were killed when a cloudburst struck Chirgaon in Shimla district, Himachal Pradesh.
On August 17, 1998, a massive landslide following heavy rain and a cloudburst at Malpa village killed 250 people, including 60 Kailash Mansarovar pilgrims in Kali valley of the Pithoragarh district, Uttarakhand. Among the dead was Odissi dancer Protima Bedi.
On July 16, 2003, about 40 people were killed in flash floods caused by a cloudburst at Shilagarh in Gursa area of Kullu district, Himachal Pradesh.
On July 6, 2004, at least 17 people were killed and 28 injured when three vehicles were swept into the Alaknanda River by heavy landslides triggered by a cloudburst that left nearly 5,000 pilgrims stranded near Badrinath shrine area in Chamoli district, Uttarakhand.
On 26 July 2005, a cloudburst caused approximately of rainfall in Mumbai over a span of eight to ten hours; the deluge completely paralysed India's largest city and financial centre, leaving over 1,000 dead. Half of the flooding was caused by blocked sewers in many parts of Mumbai.
On August 14, 2007, 52 people were confirmed dead when a severe cloudburst occurred in Bhavi village in Ganvi, Himachal Pradesh.
On August 7, 2009, 38 people were killed in a landslide resulting from a cloudburst in Nachni area near Munsiyari in Pithoragarh district of Uttarakhand.
On August 6, 2010, a series of cloudbursts left over 1,000 people dead and over 400 injured in the frontier town of Leh in the Ladakh region.
On September 15, 2010, a cloudburst in Almora in Uttarakhand submerged two villages, one of them Balta, where all but a few residents drowned. The Uttarakhand authorities declared Almora a town bearing the brunt of the cloudburst.
On September 29, 2010, a cloudburst in NDA (National Defence Academy), Khadakwasla, Pune, in Maharashtra left many injured and hundreds of vehicles and buildings damaged due to the consequent flash flood.
Again on October 4, 2010, a cloudburst in Pashan, Pune, in Maharashtra left 4 dead, many injured, and hundreds of vehicles and buildings damaged; it set the record for the most intense and heaviest rainfall in Pune city's then roughly 118-year-old record books, surpassing the 149.1 mm in 24 hours recorded on October 24, 1892. For the first time in the city's history, the flash flood forced locals to remain in their vehicles, offices, and whatever shelter was available during the accompanying traffic jam.
The same October 4, 2010 cloudburst in Pashan, Pune may have been the world's first predicted cloudburst: from 2:30 pm, weather scientist Kirankumar Johare frantically sent out SMS warnings to the higher authorities of an impending cloudburst over the Pashan area. Even after the necessary precautions were taken, 4 people died, including one young scientist.
On June 9, 2011, a cloudburst near Jammu left four people dead and several injured on the Doda–Batote highway, 135 km from Jammu. Two restaurants and many shops were washed away.
On 20 July 2011, a cloudburst in upper Manali, 18 km from Manali town in Himachal Pradesh state left 2 dead and 22 missing.
On September 15, 2011, a cloudburst was reported in the Palam area of the National Capital Territory of Delhi. The Indira Gandhi International Airport's Terminal-3 was flooded with water at the arrival due to the immense downpour. Even though no deaths occurred, the hours-long rainfall was enough to enter the record books as the highest rainfall in the city since 1959.
On September 14, 2013, there was a cloudburst in Ukhimath in the Rudraprayag district, Uttarakhand killing 39 people.
On June 15, 2013, a cloudburst was reported in the Kedarnath and Rambara region of Rudraprayag district, Uttarakhand. Over 1,000 people were killed, and it was feared the death toll could rise to 5,000; debris was still being cleared and thousands were still missing as of June 30, 2013. The disaster left approximately 84,000 people stranded for several days. The Indian Army and its Central Command launched one of the largest and most extensive human rescue missions in its history, deploying 45 helicopters across an area of 40,000 square kilometres to rescue the stranded. According to one news report, the incident was wrongly attributed to a cloudburst and was instead caused by a disturbance in two glaciers near Kedarnath.
On July 30, 2014, a landslide occurred in the small Indian village of Malin, located in Ambegaon taluka in Pune district of India. The landslide, which hit the village early in the morning while its residents were asleep, killed at least 20 people. In addition, over 160 people were initially believed to have been buried in the landslide across 44 separate houses, though more recent estimates place the figure at about seventy.
On July 31, 2014, a cloudburst was reported in Tehri Garhwal district of Uttarakhand. At least 4 people were reported dead.
On September 6, 2014, there was a cloudburst in the Kashmir valley, killing more than 200 people. The Centre for Science and Environment (CSE) said that heavy and unchecked development had aggravated the damage in the region. Over 184,000 people were rescued after heavy rains left a large part of the state submerged.
On December 2, 2015, the city of Chennai recorded 494 mm of rain, eventually causing the 2015 South India floods, which caused more than 400 casualties across Tamil Nadu.
On May 8, 2016, continuous rainfall occurred in Tharali and Karnaprayag in Chamoli district, Uttarakhand, resulting in damage but no casualties.
On the night of July 5, 2017, a cloudburst was reported in Haridwar, Uttarakhand. Some local stations recorded 102 mm of rain in an hour. Surprisingly, no one was killed and no significant damage occurred.
On July 20, 2017, a cloudburst caused huge damage in the town of Thathri in Doda district, killing more than six people.
On May 4, 2018, a cloudburst occurred above Belagavi, Karnataka. Weather stations in the area reported 95 mm of rain in an hour. No significant casualties or damage occurred.
On May 12, 2021, a cloudburst was reported from Tehri and Chamoli districts in Uttarakhand. No significant casualties or damage occurred.
On July 28, 2021, a cloudburst hit Hunzar hamlet in the Dachhan area of Kishtwar district, killing 26 people and injuring 17.
On October 20, 2021, a cloudburst occurred above Pethanaickenpalayam town in Salem district, Tamil Nadu, resulting in 213 mm of rain in a single day. Ponds in the area filled up, as did the Thennakudipalayam lake. The Vasishta Nadi flooded, causing the Attur check dam to brim with water. No damage was reported.
On 8 July 2022, a cloudburst occurred at Pahalgam, en route to the Amarnath cave shrine.
On December 18, 2023, Thoothukudi district recorded 946 mm and Tirunelveli district 636 mm of rain, eventually causing the 2023 Tamil Nadu floods, which caused more than 400 casualties across the southern districts of Tamil Nadu. Many places in Thoothukudi district, including Tiruchendur, Sathankulam, and Srivaikuntam, recorded more than 700 mm of rain in 24 hours, the highest rainfall recorded in a plains region without any cyclone formation.
On 31 July 2024, flash floods and cloudbursts caused huge damage in several areas of Uttarakhand.
Pakistan
On July 1, 1977, the city of Karachi was flooded when of rain was recorded in 24 hours.
On July 23, 2001 of rainfall was recorded in 10 hours in Islamabad. It was the heaviest rainfall in 24 hours in Islamabad and at any locality in Pakistan during the past 100 years.
On July 23, 2001 of rainfall was recorded in 10 hours in Rawalpindi.
On July 18, 2009, of rainfall occurred in just 4 hours in Karachi, which caused massive flooding in the metropolis city.
On July 29, 2010, a record breaking of rain was recorded in Risalpur in 24 hours.
On July 29, 2010, a record breaking of rain was recorded in Peshawar in 24 hours.
On August 9, 2011, of rainfall was recorded in 3 hours in Islamabad, flooding main streets.
On August 10, 2011, a record breaking of rainfall was recorded in 24 hours in Mithi, Sindh Pakistan.
On August 11, 2011, a record breaking of rainfall was recorded in 24 hours in Tando Ghulam Ali, Sindh Pakistan.
On September 7, 2011, a record breaking of rainfall was recorded in 24 hours in Diplo, Sindh Pakistan.
On September 9, 2012, Jacobabad received its heaviest rainfall in 100 years, recorded in 24 hours; as a result, over 150 houses collapsed.
On July 28, 2021, a cloudburst caused flooding in several areas of Islamabad.
Europe
Denmark
On 2 July 2011, a cloudburst hit parts of Zealand and the Greater Copenhagen area of Denmark. This resulted in the greatest recorded rainfall in 24 hours in the past 55 years. It caused an estimated DKK 6 billion in damage, notably including structural failures at the 17th-century fortress, Kastellet.
North America
Colorado Piedmont
The uplands adjacent to the Front Range of Colorado and the streams which drain the Front Range are subject to occasional cloudbursts and flash floods. This weather pattern is associated with upslope winds bringing moisture northwestward from the Gulf of Mexico.
Toyota Corolla

The Toyota Corolla is a series of compact cars (formerly subcompact) manufactured and marketed globally by the Japanese automaker Toyota Motor Corporation. Introduced in 1966, the Corolla was the best-selling car worldwide by 1974 and has been one of the best-selling cars in the world since then. In 1997, the Corolla became the best-selling nameplate in the world, surpassing the Volkswagen Beetle. Toyota reached the milestone of 50 million Corollas sold over twelve generations in 2021.
The name Corolla is part of Toyota's naming tradition of using names derived from the Toyota Crown for sedans, with "corolla" Latin for "small crown". The Corolla has always been exclusive in Japan to Toyota Corolla Store locations, and manufactured in Japan with a twin, called the Toyota Sprinter until 2000. From 2006 to 2018 in Japan and much of the world, and from 2018 to 2020 in Taiwan, the hatchback companion had been called the Toyota Auris.
Early models were mostly rear-wheel drive, while later models have been front-wheel drive. Four-wheel drive versions have also been produced, and it has undergone several major redesigns. The Corolla's traditional competitors have been the Nissan Sunny, introduced the same year as the Corolla in Japan and the later Nissan Sentra, Subaru Leone, Honda Civic and Mitsubishi Lancer. The Corolla's chassis designation code is "E", as described in Toyota's chassis and engine codes.
Production locations
Corollas are manufactured in Japan at the original Takaoka plant built in 1966. Various production facilities have been built in Brazil (Indaiatuba, São Paulo), Canada (Cambridge, Ontario), China (Tianjin), Pakistan (Karachi), South Africa (Durban), Taiwan, Thailand, Vietnam, Turkey (Sakarya), and the United Kingdom (Derbyshire). Production or assembly has previously been carried out in Australia (Dandenong and Altona), India (Bangalore), Indonesia (Jakarta), Malaysia (Shah Alam), New Zealand (Thames), the Philippines (Santa Rosa, Laguna), and Venezuela.
Corollas were made at NUMMI in Fremont, California until March 2010. Production resumed in November 2011 at Toyota Motor Manufacturing Mississippi in Blue Springs, Mississippi.
First generation (E10; 1966)
The first generation Corolla was introduced in November 1966 with the new 1100 cc K pushrod engine. The Corolla Sprinter was introduced as the fastback version in 1968, and exclusive to a Toyota Japan dealership retail outlet called Toyota Auto Store. It was the second car available to Japanese buyers at Toyota Corolla Store next to the Toyota Publica.
Second generation (E20; 1970)
In May 1970, the E20 was restyled with a more rounded body. The now mutually exclusive Corolla and Sprinter names were used to differentiate between two slightly different treatments of sheet metal and trim. The Corolla Levin and Sprinter Trueno names were introduced as the enhanced performance version of the Corolla and Sprinter respectively when a double overhead camshaft version of the 2T engine was introduced in March 1972 (TE27).
In September 1970, the 1400 cc T and 1600 cc 2T OHV engines were added to the range.
In Australia, only the 2-door KE20 with the 1.2 L (3K) engine was available, as a sedan and wagon / panel van. The brakes were a single-circuit system with no booster, with solid discs at the front and drums at the rear; a front sway bar was fitted but no rear sway bar. Parts are not compatible with later models.
In New Zealand, the 4-door KE20 was available alongside the 2-door KE25 and the KE26 2-door wagon.
Most models stopped production in July 1974 but the KE26 wagon and van were still marketed in Japan alongside the new 30-series, until production finally ended in May 1978.
Third generation (E30, E40, E50, E60; 1974)
April 1974 brought rounder, bigger and heavier Corollas and Sprinters. The range was rounded out with the addition of a two-door liftback. The Corollas were given E30 codes while the Sprinters were given E40 codes. A facelift in March 1976 saw most Corolla E30 models replaced by equivalent E50 models and most Sprinter E40 models were replaced by equivalent E60 models. The E30 Corolla was fitted with retracting front seat belts.
In Australia, the KE3x/KE5x was available as a 4-door sedan (KE30/KE55), 2-door sedan (KE30), 2-door hardtop coupe (KE35/KE55), 2-door panel van (KE36/KE38), 4-door wagon (KE36/KE38) and 2-door liftback (KE50/KE55). All KE3x models had 3K engines with a K40 4-speed manual, K50 5-speed manual, 2-speed automatic or 3-speed automatic gearbox. Sprinters were not available. The KE5x models used 4K engines. The KE55 was 50 kg heavier due to the addition of side impact protection in the doors, but due to a change in the body metal and seam sealing it is prone to rust. Later KE55s also used plastic-ended bumper bars instead of the all-chrome bumpers of the previous models, but included a rear sway bar for the first time.
Fourth generation (E70; 1979)
A major restyle in March 1979 brought a square-edged design. The Corollas had a simpler treatment of the grille, headlights and tail lights, while the Sprinter used a slightly more complex, sculptured treatment. The new A series engines were added to the range as a running change. This was the last model to use the K "hicam" and T series engines. Fuel injection was introduced as an extra-cost option on Japanese market vehicles.
The wagon and van continued to be made until June 1987 after the rest of the range was replaced by the E80 generation.
Fifth generation (E80; 1983)
A sloping front bonnet and a contemporary sharp-edged, no-frills style was brought in during May 1983. The new 1839 cc 1C diesel engine was added to the range with the E80 Series. From 1985, re-badged E80 Sprinters were sold in the U.S. as the fifth-generation Chevrolet Nova. Fuel injection was introduced as an extra cost option internationally.
Most models now used the front-wheel drive layout except the AE85 and AE86, which were to be the last Corollas offered in the rear-wheel drive or FR layout. The AE85 and AE86 chassis codes were also used for the Sprinter (including the Sprinter Trueno). The Sprinter was nearly identical to the Corolla, differing only by minor body styling changes such as pop-up headlights.
This generation was made until 1990 in Venezuela.
Sixth generation (E90; 1987)
A somewhat more rounded and aerodynamic style was used for the E90, introduced in May 1987. Overall, this generation has a more refined feel than older Corollas and other older subcompacts. Most models were now front-wheel drive, along with a few AWD All-Trac models. Many engines were used across a wide array of trim levels and models, ranging from the 1.3-liter 2E to the supercharged 4A-GZE. In the US, the E90 Sprinter was built and sold as the Geo Prizm. In Australia, the E90 Corolla was built and sold as both the Toyota Corolla and the Holden Nova.
In South Africa, this generation continued to be built until August 2006.
Seventh generation (E100; 1991)
In June 1991, Corollas received a redesign to be larger, heavier, and have the completely rounded, aerodynamic shape of the 1990s. In the United States, the somewhat larger Corolla was now in the compact class, rather than subcompact, and the coupé was still available in some markets, known as the AE101 Corolla Levin. Carburetors were mostly retired with this generation.
Eighth generation (E110; 1995)
Production of the E110 Corolla started in May 1995. The design of the car was slightly altered throughout but retained a look similar to that of the E100. In 1998, for the first time, some non-Japanese Corollas received the new 1ZZ-FE engine. The 1ZZ-FE had an aluminium engine block and aluminium cylinder head, which made models powered by it lighter than versions powered by the A series engines, which had cast-iron blocks with aluminium heads. The model range began to change as Toyota decided styling differences would improve sales in different markets. Starting with this generation, General Motors renamed the Geo Prizm, a rebadged Toyota Sprinter, the Chevrolet Prizm when the Geo brand was discontinued.
This generation was delayed in North America until mid-1997 (US 1998 model year), where it had unique front and rear styling. Europe and Australasia received versions of their own as well. In Pakistan, this model was halted in November 1998, while production was closed in March 2002.
Ninth generation (E120, E130; 2000)
In August 2000, the E120 ninth-generation Corolla was introduced in Japan, with edgier styling and more technology to bring the nameplate into the 21st century. This version was sold in Japan, Australasia, Europe and the Middle East.
In mid-2001, the E120 Corolla Altis was released. It had a refreshed look and was slightly longer and wider than the E120 for other markets, but with similar body panels and interior. The Altis was sold in Southeast Asia, India, and Taiwan. The Indian market received a de-tuned version of the 1ZZ-FE, which made the car comparatively slower than its rivals.
The North American release was delayed until March 2002 (for the 2003 model year). The E130 was sold in North America from 2003 to 2008 and had a similar look to the Corolla Altis sold in Southeast Asia. The E120 continued in parallel with the E130 in separate markets.
The station wagon model is called the Corolla Fielder in Japan. Production in Japan ended in January 2007 (for Corolla Runx and Allex), but production in North America continued until October 2007.
Production continued in China as the Corolla EX until February 2017.
Tenth generation (E140, E150; 2006)
Japan (E140 narrow)
The tenth generation of the E140 Corolla was introduced in October 2006. Japanese markets called the sedan Corolla Axio. The station wagon retained the Corolla Fielder name.
International (E140/E150 wide)
For international markets, a wider version of the E140 was sold with different styling, with the Southeast Asian, Pakistani, Indian and Taiwanese markets retaining the Corolla Altis branding. Production continued from June 2014 until 2020 in South Africa as the entry-level Corolla Quest.
In Australasia, the related first-generation Toyota Auris was also sold as the Corolla hatchback alongside the sedan body shape of the International E140 Corolla.
Eleventh generation (E160, E170, E180; 2012)
Japan (E160; 2012)
The eleventh generation of the Corolla went on sale in Japan in May 2012. The sedan is named the Corolla Axio while the wagon is called the Corolla Fielder. Both are made in Japan by a Toyota subsidiary, Central Motors, in Miyagi Prefecture. The redesigned model has slightly smaller exterior dimensions and is easier to drive in narrow alleys and parking lots for the targeted elderly drivers.
The new Corolla Axio is available with either a 1.3-liter 1NR-FE or 1.5-liter 1NZ-FE four-cylinder engine, in front- or all-wheel drive. Both 5-speed manual and CVT transmissions are offered; the 1.3-liter engine and all-wheel-drive variants are available only with the CVT. The Corolla Fielder is available with 1.5-liter 1NZ-FE or 1.8-liter 2ZR-FAE four-cylinder engines mated to a CVT. The 1.5-liter is available with front- and all-wheel drive; the 1.8-liter is offered only in front-wheel drive. Since 2015, the 2NR-FKE engine, with VVT-iE technology, has also been offered.
Toyota released hybrid versions of the Corolla Axio sedan and Corolla Fielder station wagon for the Japanese market in August 2013. Both cars are equipped with a 1.5-liter hybrid system similar to the one used in the Toyota Prius C, with fuel efficiency rated under the JC08 test cycle. Toyota's monthly sales target for Japan is 1,000 units of the Corolla Axio hybrid and 1,500 units of the Corolla Fielder hybrid.
The E160 was also sold in Hong Kong, Macau, and New Zealand.
International (E170/E180; 2013)
International markets continued on with the E140/E150 until at least 2013 when the E170/E180 model arrived. The E170/E180 is larger and substantially different from the Japanese E160, with a unique body and interior. Two basic front and rear styling treatments are fitted to the E170: a North American version that debuted first and a more conservative design for other markets that debuted later in 2013. The latter version sold in Southeast Asian, Pakistani, Indian and Taiwanese markets retained the Corolla Altis branding. The Corolla E180 went on sale in Europe and South Africa in February 2014.
In Australasia, the European market second-generation Toyota Auris was also sold badged as the Corolla hatchback, alongside the international E170 Corolla.
In 2015, for the 2016 model year, Toyota's North American sub-brand, Scion, introduced the Scion iM, based on the second-generation Toyota Auris. In 2016, for the 2017 model year, the iM was rebranded as the Toyota Corolla iM when the Scion brand was discontinued.
Twelfth generation (E210; 2018)
The twelfth generation of the Corolla is available in three body styles:
Hatchback
The twelfth generation Corolla in hatchback body style was unveiled as a pre-production model in early March 2018 at the Geneva Motor Show as the Auris. The production version of the Corolla Hatchback for the North American market was unveiled on 28 March 2018 at the New York International Auto Show, with the official details and photos revealed on 22 March 2018. The Corolla Hatchback was launched in Japan on 27 June 2018 as the Corolla Sport. The Corolla Hatchback went on sale in the United States in mid-July 2018, and was later launched in Australia on 7 August 2018. Production of the European market Corolla Hatchback began on 14 January 2019, and sales began in the UK in February 2019 and across Europe in March 2019.
A high-performance variant of the Corolla hatchback, called the GR Corolla, debuted in March 2022.
Estate
The estate variation of the twelfth generation Corolla, called the Corolla Touring Sports (simply called Corolla Touring in Japan), was unveiled at the 2018 Paris Motor Show. The official images of the Corolla Touring Sports were revealed on 4 September 2018.
The Corolla Touring Sports is also sold by Suzuki as the Swace in Europe.
Sedan
The sedan variation of the Corolla was unveiled on 15 and 16 November 2018 in Carmel-by-the-Sea, California, United States, and in China at the 2018 Guangzhou International Motor Show. The model is sold in two versions: Prestige (sold in China, Europe and other countries) and Sporty (sold in North America, Japan, Australia and other countries); in China the car is sold as the Levin. The Prestige model uses a different front fascia, more similar to that of the XV70 Camry, and is sold as the Corolla Altis in Taiwan and Southeast Asia. The Sporty model uses a front fascia similar to the hatchback and wagon versions. A long-wheelbase version of the Prestige model with a slightly altered front fascia is sold as the Allion in China, while the long-wheelbase Sporty version is called the Levin GT.
Sales
Alternative versions
In Japan, the Corolla has always been exclusive to the Japanese retail sales chain called Toyota Corolla Store, which was established in 1961 as the Toyota Publica Store, selling the Publica. A rebadged version called the Sprinter was introduced around the same time as the Corolla in Japan and sold through a different Toyota Japan dealership sales channel, known since 1966 as Toyota Auto Store.
There have been several models over the years, including the Corolla Ceres (and similar Sprinter Marino) hardtop, the Corolla Levin and Sprinter Trueno sports coupés and hatchbacks, and the Corolla FX hatchback, which became the Corolla RunX with the introduction of the E120 series Corolla, while the Sprinter became the Allex. The RunX and Allex were replaced by the Auris in 2006 (known simply as the Corolla in markets outside Japan, Europe and South Africa). A luxury version of the Auris fitted with V6 engines was briefly sold at Japanese Toyota Store and Toyopet Store dealerships as the Blade, which remained on sale until 2012.
A compact MPV named the Corolla Verso has also been released in European markets. Its Japanese counterpart is the Corolla Spacio, which has been discontinued as of the tenth generation. The Corolla Rumion is also sold in the US market as the Scion xB.
The Corolla Matrix, better known simply as the Matrix, shares the E120 and E140 platforms and is considered the hatchback/sport wagon counterpart of the North American Corolla sedan, as the European/Australasian Corolla hatchback is not sold there. Toyota frequently combines the sales figures of the Corolla sedan and Matrix. The Pontiac Vibe, the General Motors-badged version of the Matrix, shares the Corolla platform; it was exported from Fremont, California, to the Japanese market, where it was sold as the Toyota Voltz.
The Corolla Cross is the crossover SUV-counterpart of the E210 series Corolla.
Over many years, there have been rebadged versions of the Corolla, sold by General Motors, including the Holden Nova in Australia during the early 1990s, and the Sprinter-based Chevrolet Nova, Chevrolet Prizm, and Geo Prizm (in the United States as part of the GM S platform). The Corolla liftback (TE72) of Toyota Australia was badged as simply the T-18. The five-door liftback was sold with the Corolla Seca name in Australia and the nameplate survived on successive five-door models.
The Daihatsu Charmant was produced from the E30 to the E70 series.
The Tercel was a front-wheel drive car, first introduced in 1980 at the Japanese Toyota dealerships called Toyota Corolla Store. It was initially called the Corolla Tercel and was given its own name in 1984. The Tercel platform was also used for the Corolla II hatchback in Japan.
Hydroxyl radical
The hydroxyl radical, •HO, is the neutral form of the hydroxide ion (HO–). Hydroxyl radicals are highly reactive and consequently short-lived; however, they form an important part of radical chemistry. Most notably, hydroxyl radicals are produced from the decomposition of hydroperoxides (ROOH) or, in atmospheric chemistry, by the reaction of excited atomic oxygen with water. The hydroxyl radical is also an important radical formed in radiation chemistry, since it leads to the formation of hydrogen peroxide and oxygen, which can enhance corrosion and stress corrosion cracking in coolant systems subjected to radioactive environments. Hydroxyl radicals are also produced during UV-light dissociation of H2O2 (suggested in 1879) and likely in Fenton chemistry, where trace amounts of reduced transition metals catalyze peroxide-mediated oxidations of organic compounds.
In organic synthesis, hydroxyl radicals are most commonly generated by photolysis of 1-hydroxy-2(1H)-pyridinethione.
The hydroxyl radical is often referred to as the "detergent" of the troposphere because it reacts with many pollutants, often acting as the first step to their removal. It also has an important role in eliminating some greenhouse gases like methane and ozone. The rate of reaction with the hydroxyl radical often determines how long many pollutants last in the atmosphere, if they do not undergo photolysis or are rained out. For instance, methane, which reacts relatively slowly with hydroxyl radicals, has an average lifetime of >5 years and many CFCs have lifetimes of 50+ years. Pollutants, such as larger hydrocarbons, can have very short average lifetimes of less than a few hours.
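The lifetime argument above can be made quantitative: for pseudo-first-order loss, the lifetime is τ = 1/(k·[OH]). The sketch below uses typical literature-scale values assumed for illustration (the rate constants and the global mean OH concentration are not figures from this article) to reproduce the contrast between slow-reacting methane and a fast-reacting hydrocarbon:

```python
# Pseudo-first-order atmospheric lifetime against OH attack: tau = 1 / (k * [OH]).
SECONDS_PER_YEAR = 3.156e7

def oh_lifetime_years(k_cm3_per_s, oh_conc_cm3):
    """Lifetime (years) of a gas whose main sink is reaction with OH."""
    return 1.0 / (k_cm3_per_s * oh_conc_cm3) / SECONDS_PER_YEAR

OH_GLOBAL_MEAN = 1.0e6  # molecules cm^-3, assumed typical global mean

# Methane reacts slowly with OH (k ~ 6.4e-15 cm^3 molecule^-1 s^-1 near 298 K),
# giving a lifetime of years; a reactive hydrocarbon (k ~ 1e-10) lasts hours.
tau_methane = oh_lifetime_years(6.4e-15, OH_GLOBAL_MEAN)
tau_fast_h = oh_lifetime_years(1.0e-10, OH_GLOBAL_MEAN) * SECONDS_PER_YEAR / 3600

print(f"methane: ~{tau_methane:.1f} years; reactive hydrocarbon: ~{tau_fast_h:.1f} hours")
```

Because lifetime scales inversely with the rate constant, a four-orders-of-magnitude spread in k maps directly onto the years-versus-hours spread described above.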
The first reaction with many volatile organic compounds (VOCs) is the removal of a hydrogen atom, forming water and an alkyl radical (R•):
•HO + RH → H2O + R•
The alkyl radical will typically react rapidly with oxygen forming a peroxy radical:
R• + O2 → RO2•
The fate of this radical in the troposphere depends on factors such as the amount of sunlight, pollution in the atmosphere, and the nature of the alkyl radical that formed it.
Kármán line
The Kármán line (or von Kármán line) is a conventional definition of the edge of space, though it is not universally accepted. The international record-keeping body FAI (Fédération aéronautique internationale) defines the Kármán line at an altitude of 100 km (62 mi) above mean sea level.
While it is named after Theodore von Kármán, who calculated a theoretical altitude limit for airplane flight of roughly 270,000 ft (about 82 km) above Earth, the later-established Kármán line is more general and has no distinct physical significance: the characteristics of the atmosphere change only gradually across the line, and experts disagree on any distinct boundary where the atmosphere ends and space begins. It lies well above the altitude reachable by conventional airplanes or high-altitude balloons, and is approximately where satellites, even on very eccentric trajectories, will decay before completing a single orbit.
The Kármán line is mainly used for legal and regulatory purposes of differentiating between aircraft and spacecraft, which are then subject to different jurisdictions and legislations. While international law does not define the edge of space, or the limit of national airspace, most international organizations and regulatory agencies (including the United Nations) accept the FAI's Kármán line definition or something close to it. As defined by the FAI, the Kármán line was established in the 1960s. Various countries and entities define space's boundary differently for various purposes.
Definition
The FAI uses the term Kármán line to define the boundary between aeronautics and astronautics:
Interpretations of the definition
The expressions "edge of space" or "near space" are often used (by, for instance, the FAI in some of their publications) to refer to a region below the boundary of Outer Space, which is often meant to include substantially lower regions as well. Thus, certain balloon or airplane flights might be described as "reaching the edge of space". In such statements, "reaching the edge of space" merely refers to going higher than average aeronautical vehicles commonly would.
There is still no international legal definition of the demarcation between a country's air space and outer space. In 1963, Andrew G. Haley discussed the Kármán line in his book Space Law and Government. In a chapter on the limits of national sovereignty, he made a survey of major writers' opinions. He indicated the inherent imprecision of the Line:
The line represents a mean or median measurement. It is comparable to such measures used in the law as mean sea level, meander line, tide line; but it is more complex than these. In arriving at the von Kármán jurisdictional line, myriad factors must be considered – other than the factor of aerodynamic lift. These factors have been discussed in a very large body of literature and by a score or more of commentators. They include the physical constitution of the air; the biological and physiological viability; and still other factors which logically join to establish a point at which air no longer exists and at which airspace ends.
Kármán's comments
In the final chapter of his autobiography, Kármán addresses the issue of the edge of outer space:
Where space begins ... can actually be determined by the speed of the space vehicle and its altitude above the Earth. Consider, for instance, the record flight of Captain Iven Carl Kincheloe Jr. in an X-2 rocket plane. Kincheloe flew 2000 miles per hour (3,200 km/h) at 126,000 feet (38,500 m), or 24 miles up. At this altitude and speed, aerodynamic lift still carries 98 percent of the weight of the plane, and only two percent is carried by inertia, or Kepler force, as space scientists call it. But at 300,000 feet (91,440 m) or 57 miles up, this relationship is reversed because there is no longer any air to contribute lift: only inertia prevails. This is certainly a physical boundary, where aerodynamics stops and astronautics begins, and so I thought why should it not also be a jurisdictional boundary? Andrew G. Haley has termed it the Kármán Jurisdictional Line. Below this line, space belongs to each country. Above this level there would be free space.
Technical considerations
An atmosphere does not abruptly end at any given height but becomes progressively less dense with altitude. Also, depending on how the various layers that make up the space around the Earth are defined (and depending on whether these layers are considered part of the actual atmosphere), the definition of the edge of space could vary considerably: if one were to consider the thermosphere and exosphere part of the atmosphere and not of space, one might have to extend the boundary of space to at least 10,000 km (6,200 mi) above sea level. The Kármán line thus is a largely arbitrary definition based on some technical considerations.
An aircraft can stay aloft only by constantly traveling forward relative to the air (rather than the ground), so that the wings can generate aerodynamic lift. The thinner the air, the faster the plane must go to generate enough lift to stay up. At very high speeds, centrifugal force (Kepler force) contributes to maintaining altitude. This is the virtual force that keeps satellites in circular orbit without any aerodynamic lift.
As altitude increases and air density decreases, the speed to generate enough aerodynamic lift to support the aircraft weight increases until the speed becomes so high that the centrifugal force contribution becomes significant. At a high enough altitude, the centrifugal force will dominate over the lift force and the aircraft would become effectively an orbiting spacecraft instead of an aircraft supported by aerodynamic lift.
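The trade-off described above can be sketched numerically. The following illustration assumes a hypothetical aircraft (the mass, wing area and lift coefficient are invented for the example) and a crude exponential atmosphere; it shows the required flight speed approaching orbital speed as the air thins:

```python
import math

# Weight is carried partly by aerodynamic lift and partly by centrifugal
# (Kepler) force:  m*g = 0.5*rho(h)*v^2*S*C_L + m*v^2/r
RHO0 = 1.225       # sea-level air density, kg/m^3
H_SCALE = 7000.0   # density scale height, m (crude above ~50 km)
R_EARTH = 6.371e6  # Earth radius, m
G0 = 9.81          # m/s^2 (held constant for simplicity)

def required_speed(h, mass=10_000.0, wing_area=50.0, cl=1.0):
    """Speed (m/s) at which lift plus centrifugal force equals weight."""
    rho = RHO0 * math.exp(-h / H_SCALE)
    r = R_EARTH + h
    return math.sqrt(mass * G0 / (0.5 * rho * wing_area * cl + mass / r))

for h_km in (0, 20, 60, 100):
    v = required_speed(h_km * 1e3)
    v_orb = math.sqrt(G0 * (R_EARTH + h_km * 1e3))  # speed at which inertia alone suffices
    print(f"{h_km:>3} km: {v:7.0f} m/s ({100 * v / v_orb:5.1f}% of orbital speed)")
```

In this toy model the required speed is tens of m/s at sea level but is already within a fraction of a percent of orbital speed near 100 km, which is the qualitative behaviour behind the Kármán line.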
In 1956, von Kármán presented a paper in which he discussed aerothermal limits to flight. The faster aircraft fly, the more heat they generate, due to aerodynamic heating from friction with the atmosphere and adiabatic compression. Based on the then-current state of the art, he calculated the speeds and altitudes at which continuous flight was possible—fast enough that enough lift would be generated and slow enough that the vehicle would not overheat. The chart included an inflection point at the altitude above which the minimum speed would place the vehicle into orbit.
The term "Kármán line" was invented by Andrew G. Haley in a 1959 paper, based on the chart in von Kármán's 1956 paper, but Haley acknowledged that the limit was theoretical and would change as technology improved, as the minimum speed in von Kármán's calculations was based on the speed-to-weight ratio of current aircraft, namely the Bell X-2, and the maximum speed based on current cooling technologies and heat-resistant materials. Haley also cited other technical considerations for that altitude, as it was approximately the altitude limit for an airbreathing jet engine based on current technology. In the same 1959 paper, Haley also referred to as the "von Kármán Line", which was the lowest altitude at which free-radical atomic oxygen occurred.
Alternatives to the FAI definition
The U.S. Armed Forces define an astronaut as a person who has flown higher than 50 miles (80 km) above mean sea level, approximately the line between the mesosphere and the thermosphere. NASA formerly used the FAI's 100 km figure, though this was changed in 2005 to eliminate any inconsistency between military personnel and civilians flying in the same vehicle. Three veteran NASA X-15 pilots (John B. McKay, William H. Dana and Joseph Albert Walker) were retroactively (two posthumously) awarded their astronaut wings, as they had flown between 90 km (56 mi) and 108 km (67 mi) during the 1960s but at the time had not been recognized as astronauts. The latter altitude, achieved twice by Walker, exceeds the modern international definition of the boundary of space.
The United States Federal Aviation Administration also recognizes this line as a space boundary:
Works by Jonathan McDowell (Harvard-Smithsonian Center for Astrophysics) and Thomas Gangale (University of Nebraska-Lincoln) in 2018 advocate that the demarcation of space should be at 80 km (50 mi), citing as evidence von Kármán's original notes and calculations (which concluded the boundary should be 270,000 ft), confirmation that orbiting objects can survive multiple perigees at altitudes around 80 to 90 km, plus functional, cultural, physical, technological, mathematical, and historical factors. More precisely, the paper summarizes:
These findings prompted the FAI to propose holding a joint conference with the International Astronautical Federation (IAF) in 2019 to "fully explore" the issue.
Another definition proposed in international law discussions defines the lower boundary of space as the lowest perigee attainable by an orbiting space vehicle, but does not specify an altitude. This is the definition adopted by the U.S. military. Due to atmospheric drag, the lowest altitude at which an object in a circular orbit can complete at least one full revolution without propulsion is approximately 150 km (93 mi), whereas an object can maintain an elliptic orbit with a perigee as low as about 130 km (81 mi) without propulsion. The U.S. government is resisting efforts to specify a precise regulatory boundary.
For other planets
While the Kármán line is defined for Earth only, several scientists have estimated the corresponding altitudes for Mars and Venus; Isidoro Martínez and Nicolas Bérend have each arrived at their own figures for the two planets.
In popular culture
In 2014, Oscar Sharp directed The Kármán Line, a British live-action drama short film starring Olivia Colman as Sarah, a wife and mother who suddenly starts levitating until she slowly and eventually crosses the eponymous Kármán line and into outer space.
Orbital spaceflight
An orbital spaceflight (or orbital flight) is a spaceflight in which a spacecraft is placed on a trajectory where it could remain in space for at least one orbit. To do this around the Earth, it must be on a free trajectory with an altitude at perigee (altitude at closest approach) of around 80 km (50 mi); this is the boundary of space as defined by NASA, the US Air Force and the FAA. To remain in orbit at this altitude requires an orbital speed of ~7.8 km/s. Orbital speed is slower for higher orbits, but attaining them requires greater delta-v. The Fédération Aéronautique Internationale has established the Kármán line, at an altitude of 100 km (62 mi), as a working definition for the boundary between aeronautics and astronautics. This is used because, at an altitude of about 100 km, as Theodore von Kármán calculated, a vehicle would have to travel faster than orbital velocity to derive sufficient aerodynamic lift from the atmosphere to support itself.
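The ~7.8 km/s figure, and the fact that higher orbits are slower, follow from the circular-orbit relation v = sqrt(μ/r). A minimal check (standard constants; the altitudes are illustrative):

```python
import math

MU = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # mean Earth radius, m

def circular_speed_km_s(altitude_m):
    """Speed of a circular orbit at the given altitude, in km/s."""
    return math.sqrt(MU / (R_EARTH + altitude_m)) / 1000.0

v_leo = circular_speed_km_s(200e3)      # low Earth orbit
v_geo = circular_speed_km_s(35_786e3)   # geostationary altitude
print(f"LEO: {v_leo:.2f} km/s, GEO: {v_geo:.2f} km/s")
```

The LEO result reproduces the ~7.8 km/s quoted above, while the geostationary orbit, despite being "higher", needs only about 3.1 km/s of orbital speed; the extra delta-v for high orbits goes into climbing out of Earth's gravity well, not into going faster.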
Due to atmospheric drag, the lowest altitude at which an object in a circular orbit can complete at least one full revolution without propulsion is approximately 150 km (93 mi).
The expression "orbital spaceflight" is mostly used to distinguish from sub-orbital spaceflights, which are flights where the apogee of a spacecraft reaches space, but the perigee is too low.
Trajectory
There are three main "bands" of orbit around the Earth: low Earth orbit (LEO), medium Earth orbit (MEO), and geostationary orbit (GEO).
According to orbital mechanics, an orbit lies in a particular, largely fixed plane around the Earth, which passes through the center of the Earth and may be inclined with respect to the equator. The relative motion of the spacecraft and the movement of the Earth's surface, as the Earth rotates on its axis, determine the position that the spacecraft appears in the sky from the ground, and which parts of the Earth are visible from the spacecraft.
It is possible to calculate a ground track that shows which part of the Earth a spacecraft is immediately above; this is useful for helping to visualise the orbit.
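As a rough illustration of why ground tracks drift: while the spacecraft completes one revolution, the Earth rotates beneath it, so each successive track is shifted westward. The sketch below computes that per-orbit shift for an illustrative 400 km circular orbit:

```python
import math

MU = 3.986e14                       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6                   # mean Earth radius, m
EARTH_ROT_DEG_S = 360.0 / 86164.0   # rotation rate over one sidereal day

def ground_track_shift_deg(altitude_m):
    """Westward shift (degrees of longitude) of the ground track per orbit."""
    r = R_EARTH + altitude_m
    period = 2 * math.pi * math.sqrt(r**3 / MU)  # orbital period, s
    return period * EARTH_ROT_DEG_S

print(f"400 km circular orbit: ~{ground_track_shift_deg(400e3):.1f} deg west per orbit")
```

For a low orbit with a period of roughly 92 minutes, the track shifts a bit over 20 degrees of longitude each revolution, which is why plotted ground tracks form the familiar westward-marching sinusoids.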
Launch
Orbital spaceflight from Earth has only been achieved by launch vehicles that use rocket engines for propulsion. To reach orbit, the rocket must impart to the payload a delta-v of about 9.3–10 km/s. This figure is mainly (~7.8 km/s) for horizontal acceleration needed to reach orbital speed, but allows for atmospheric drag (approximately 300 m/s with the ballistic coefficient of a 20 m long dense fueled vehicle), gravity losses (depending on burn time and details of the trajectory and launch vehicle), and gaining altitude.
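The size of this delta-v budget is what drives rocket design: by the Tsiolkovsky rocket equation, the required mass ratio is m0/mf = exp(Δv/ve). A minimal sketch (the 350 s specific impulse is an assumed, typical value, not a figure from this article) shows why most of a launcher's liftoff mass must be propellant, and hence why staging is used:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def mass_ratio(delta_v_m_s, isp_s):
    """Initial-to-final mass ratio required by the Tsiolkovsky rocket equation."""
    return math.exp(delta_v_m_s / (isp_s * G0))

ratio = mass_ratio(9400.0, 350.0)        # ~9.4 km/s budget, assumed Isp of 350 s
propellant_fraction = 1.0 - 1.0 / ratio  # fraction of liftoff mass that is propellant
print(f"mass ratio {ratio:.1f}, i.e. ~{100 * propellant_fraction:.0f}% propellant")
```

A single structure cannot realistically be over 90% propellant once engines, tanks and payload are included, which is one way to see why 2–4 stages are currently needed.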
The main proven technique involves launching nearly vertically for a few kilometers while performing a gravity turn, and then progressively flattening the trajectory out at an altitude of 170+ km and accelerating on a horizontal trajectory (with the rocket angled upwards to fight gravity and maintain altitude) for a 5–8-minute burn until orbital velocity is achieved. Currently, 2–4 stages are needed to achieve the required delta-v. Most launches are by expendable launch systems.
The Pegasus rocket for small satellites instead launches from an aircraft at an altitude of about 12 km (40,000 ft).
There have been many proposed methods for achieving orbital spaceflight that have the potential of being much more affordable than rockets. Some of these ideas such as the space elevator, and rotovator, require new materials much stronger than any currently known. Other proposed ideas include ground accelerators such as launch loops, rocket-assisted aircraft/spaceplanes such as Reaction Engines Skylon, scramjet powered spaceplanes, and RBCC powered spaceplanes. Gun launch has been proposed for cargo.
From 2015, SpaceX has demonstrated significant progress in its more incremental approach to reducing the cost of orbital spaceflight. Its potential for cost reduction comes mainly from pioneering propulsive landing with its reusable rocket booster stage and its Dragon capsule, but also includes reuse of other components such as the payload fairings and the use of 3D printing of a superalloy to construct more efficient rocket engines, such as its SuperDraco. The initial stages of these improvements could reduce the cost of an orbital launch by an order of magnitude.
Stability
An object in orbit at an altitude of less than roughly 200 km is considered unstable due to atmospheric drag. For a satellite to be in a stable orbit (i.e. sustainable for more than a few months), 350 km is a more standard altitude for low Earth orbit. For example, on 1 February 1958 the Explorer 1 satellite was launched into an orbit with a perigee of 358 km (222 mi). It remained in orbit for more than 12 years before its atmospheric reentry over the Pacific Ocean on 31 March 1970.
However, the exact behaviour of objects in orbit depends on altitude, their ballistic coefficient, and details of space weather which can affect the height of the upper atmosphere.
Orbit maintenance
Orbital maneuver
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).
Deorbit and re-entry
Returning spacecraft (including all potentially crewed craft) have to find a way of slowing down as much as possible while still in higher atmospheric layers and avoiding hitting the ground (lithobraking) or burning up. For many orbital space flights, initial deceleration is provided by the retrofiring of the craft's rocket engines, perturbing the orbit (by lowering perigee down into the atmosphere) onto a suborbital trajectory. Many spacecraft in low Earth orbit (e.g., nanosatellites or spacecraft that have run out of station keeping fuel or are otherwise non-functional) solve the problem of deceleration from orbital speeds through using atmospheric drag (aerobraking) to provide initial deceleration. In all cases, once initial deceleration has lowered the orbital perigee into the mesosphere, all spacecraft lose most of the remaining speed, and therefore kinetic energy, through the atmospheric drag effect of aerobraking.
Intentional aerobraking is achieved by orienting the returning space craft so as to present the heat shields forward toward the atmosphere to protect against the high temperatures generated by atmospheric compression and friction caused by passing through the atmosphere at hypersonic speeds. The thermal energy is dissipated mainly by compression heating the air in a shockwave ahead of the vehicle using a blunt heat shield shape, with the aim of minimising the heat entering the vehicle.
Sub-orbital space flights, being at a much lower speed, do not generate anywhere near as much heat upon re-entry.
Even if the orbiting objects are expendable, most space authorities are pushing toward controlled re-entries to minimize hazard to lives and property on the planet.
History
Sputnik 1 was the first human-made object to achieve orbital spaceflight. It was launched on 4 October 1957 by the Soviet Union.
Vostok 1, launched by the Soviet Union on 12 April 1961, carrying Yuri Gagarin, was the first successful human spaceflight to reach Earth orbit.
Vostok 6, launched by the Soviet Union on 16 June 1963, carrying Valentina Tereshkova, was the first successful spaceflight by a woman to reach Earth orbit.
Crew Dragon Demo-2, launched by SpaceX and the United States on 30 May 2020, was the first successful human spaceflight by a private company to reach Earth orbit.
| Technology | Basics_6 | null |
4614583 | https://en.wikipedia.org/wiki/Sea%20toad | Sea toad | The sea toads and coffinfishes are a family, the Chaunacidae, of deep-sea ray-finned fishes belonging to the monotypic suborder Chaunacoidei within the order Lophiiformes, the anglerfishes. These are bottom-dwelling fishes found on the continental slopes of the Atlantic, Indian, and Pacific Oceans, at depths to at least . There have also been findings of deep-sea anglerfishes off the coasts of Australia and New Caledonia. Other findings suggest some genera of Chaunacidae are found near volcanic slopes encrusted with manganese. Of the two genera in the family, Chaunacops are typically found at deeper depths than Chaunax, but with considerable overlap between the two genera.
Taxonomy
The sea toads were first proposed as a separate family, the Chaunacidae, by the American biologist Theodore Gill in 1863. Charles Tate Regan placed this family within the division Antennariformes within his suborder Lophioidea when he classified the order Pediculati, his grouping of the toadfishes and anglerfishes. In 1981 Theodore Wells Pietsch III realised that the monophyly of Regan's 1912 groupings within his Lophioidea had not been confirmed. Pietsch proposed a sister relationship between the sea toads and the Ogcocephalidae; however, he was unable to identify any grouping that was a sister group to both the Chaunacidae and the Ogcocephalidae, nor did he find any osteological characters to support or refute their classification within the Antennariiformes. He tentatively retained both groups within the Antennarioidei even though he was unable to establish the monophyly of the four families Regan classified in the Antennariiformes in 1912. In 1987 Pietsch and David B. Grobecker classified the sea toads in the monotypic suborder Chaunacoidei within the Lophiiformes. This is the classification followed by the 5th edition of Fishes of the World.
Etymology
The sea toad family, Chaunacidae, is named for its type genus Chaunax, this name means "one who gapes", from chanos meaning "to gape", an allusion to the large, wide mouths of these fishes.
Genera
The sea toads are divided into two genera: Chaunax and Chaunacops.
Description
Sea toads have large, globose bodies and short, compressed tails, and are covered with small, spiny scales. The largest are about in length. During their gill ventilatory cycle, Chaunacidae are able to take in high volumes of water, increasing their total body volume by 30%. The first dorsal-fin ray is modified into a short bioluminescent lure which dangles forward over the mouth, which is turned upwards so as to be nearly vertical. The sensory canals of the lateral lines are especially conspicuous. Chaunax have modified fins which resemble legs, and they use these modified pelvic fins to assist with maneuvering while swimming, especially as an escape response. Chaunacops have shorter lures, resembling a cotton swab, that sit between their eyes. Their bodies are covered in many small needle-like spines that are thought to offer protection or to serve as sensory signaling sites. Despite their spiky nature, these spines give the fish a fuzzy, crocheted appearance that makes them visually distinct. Like Chaunax, they also have modified fins that allow them to walk along the sea floor, which is thought to provide both a hunting and a metabolic advantage.
Sea toads are mostly sedentary fish, and rely on an opportunistic way of hunting, preying on anything within reach. The sensory canals of the lateral lines are especially conspicuous, and confer advantages in predator avoidance and prey capture.
Sexual dimorphism
A species from Chaunacidae, Chaunacops melanostomus, exhibits a single trait showing sexual dimorphism. Sample collection shows that males tend to have larger nostrils than females, and even in the smallest males, nostrils tend to be very apparent.
Distribution and habitat
Three species of Chaunacops are currently known, all of which live in the Indo-west Pacific Ocean: C. coloratus, C. melanostomus and C. spinosus. However, members of the family Chaunacidae have been collected from the Eastern Indian Ocean, the Eastern Pacific Ocean, and the Western Atlantic Ocean, showing that the family is relatively widely distributed. Notably, in 1989 a study by John H. Caruso reported 21 specimens of chaunacid fish collected off the western coast of Australia, many of them at approximately 30° S latitude and approximately 90° E longitude. These specimens were assigned to the genus Bathychaunax, which before this study contained only two other species: B. coloratus of the Eastern Pacific and B. roseus of the Western Atlantic. The new species of Bathychaunax was found at depths between 1320 m and 1760 m. Furthermore, in 2015 an article was published indicating that new specimens of the genus Chaunacops had been found off the coasts of Australia and New Caledonia.
In addition, it was found that the Chaunacops coloratus are also often found near "manganese-encrusted volcanic talus slopes". The fish were observed to often have one of their pectoral fins in sediment and another one on a rock in order to make it seem as though they were wedged between two substrates. The average oxygen concentration was found to be about 1.59 mL/L at the depths they were found and the average temperature was about 1.68 °C. Salinity in their habitats did not change much and was found to be an average of 34.64 psu.
Anatomy
Chaunacops spinosus
Upon collection and examination, this species shows several distinct physical attributes. One trait is the fine dermal spinules, along with simple and bifurcate dermal spinules, covering the body. It also has four pectoral lateral-line neuromasts, sensory organs characteristic of fish and other aquatic organisms. It has a greyish mouth and semi-transparent, light-greyish skin. Inside the mouth are several rows of teeth: three or four rows of small canine teeth on the upper jaw, and three rows of the same on the lower jaw. The skin of the head, belly, and most of the gill chamber is dark blue, and the tail is relatively short. The overall body structure resembles that of a tadpole, with a more globular anterior that tapers toward the posterior. The eyes are very small and covered by transparent skin.
Chaunacops melanostomus
Another species in the same genus was collected with similar traits to the above species, but some noticeable differences. The spinules are distributed widely over the body, as in C. spinosus, but are simple with a large base (unlike those of C. spinosus, which are both simple and bifurcate). The two species also differ slightly in color. The inside of the mouth, the head, the gill chamber, and the anterior portion of the body are dark brown to black. The dorsal side of the body and the caudal fin are light brown, becoming lighter toward the posterior end. Also, instead of three or four rows of teeth, C. melanostomus has two rows on both jaws. The general body plan, however, is virtually the same, resembling a tadpole, with a more globular anterior that tapers toward the posterior.
Chaunacops coloratus
Chaunacops coloratus is another species that has been described, known for its bright red and blue colors. Blue C. coloratus were found to have an average length of about 110 mm, whereas red specimens averaged 184 mm. Individuals begin in a transparent larval form, then become blue, and eventually reach their adult red color. As for predation, it is hypothesized that turning red is advantageous for these ambush predators, which use bioluminescent light to attract possible prey, since the red coloration conceals the fish and renders it effectively invisible in deep water.
Movement
C. coloratus
Through observations made by an ROV, it was found that C. coloratus swim vertically with the head oriented upwards. During rapid ascent, the fish use their dorsal, caudal, and anal fins to propel themselves upwards and tuck the remaining fins close to the body. The observations recorded average ascent velocities of 0.036 m/s and 0.021 m/s. For maneuvering across the ocean floor, the fish use their pectoral and pelvic fins. To perform this "walking", they sway the dorsal fin from side to side, thrust the caudal fin repeatedly, and then maneuver using the pectoral and pelvic fins. These fish are also capable of walking backwards using their pectoral and pelvic fins.
Breathing
Fish of the family Chaunacidae have been shown to have slow ventilatory cycles in which the fish exhales 20–30% of their body volume of water. Upon inhalation, Chaunacidae can endure long periods of time maintaining a fully inflated gill chamber, sometimes up to 245 seconds which confers many potential advantages for fish of this family. Chaunacidae have been found to contain a specialized apparatus containing adductor muscles that can maintain its ventilatory cycle, and control the volume of water entering and exiting. These muscles are cross-hatched, and function to not only inhale and exhale, but to prevent any leakage out of the gills.
Due to the high-volume, slow ventilatory cycle, Chaunacidae are highly energy efficient, as they require less energy to push water across the surfaces of their gills. Because of this, Chaunacidae are able to go without prey for long periods of time and remain mostly sedentary.
There are many other hypothesized advantages of the Chaunacidae breathing cycle. Because inhalations are long and high-volume, Chaunacidae cause little disturbance detectable by lateral line systems, allowing for better hunting and avoidance of predators. In addition, the maximally filled mouth of a chaunacid is often intimidating to predators, serving as a defense mechanism much like that of the pufferfish.
Diet
Chaunacidae are known to be mostly sedentary fish that spend most of their time dormant on the seafloor. Because of their energy-efficient ventilation, they are able to go long periods of time with little food. In one diet study, the stomach of a Chaunax fimbriatus was found to contain many different prey items, showing that Chaunacidae are opportunistic hunters that will eat almost anything they can find on the seafloor.
Chaunacidae are also patient hunters, able to maintain relatively low movement. Thanks to their gill chambers, they are able to remain still until their prey comes within striking distance.
| Biology and health sciences | Acanthomorpha | Animals |
4615016 | https://en.wikipedia.org/wiki/Domestic%20canary | Domestic canary | The domestic canary, often simply known as the canary (Serinus canaria forma domestica), is a domesticated form of the wild canary, a small songbird in the finch family originating from the Macaronesian Islands of the Azores, Madeira and the Canary Islands.
Canaries were first bred in captivity in the 17th century, having been brought to Europe by Spanish sailors. Monks started breeding them and only sold the males (which sing). This kept the birds in short supply and drove the price up. Eventually, Italians obtained hens and were able to breed the birds. This made them very popular, resulting in many breeds arising, and the birds being bred all over Europe.
The same occurred in England. First the birds were only owned by the rich, but eventually the local citizens started to breed them and, again, they became very popular. Many breeds arose through selective breeding, and they are still very popular today for their voices.
From the 18th up to the 20th centuries, canaries and finches were used in the UK, Canada and the US in the coal mining industry to detect carbon monoxide. In the UK, this practice ceased in 1986.
Typically, the domestic canary is kept as a popular cage and aviary bird. Given proper housing and care, a canary's lifespan ranges from 10 to 15 years.
Etymology
The birds are named after Spain's Canary Islands, which derive their name from the Latin (after one of the larger islands, Gran Canaria), meaning 'island of dogs', due to its "vast multitudes of dogs of very large size".
Varieties
Domestic canaries are generally divided into three main groups:
Colour-bred canaries (bred for their many colour mutations – Ino, Eumo, Satinette, Bronze, Ivory, Onyx, Mosaic, Brown, red factor, Green (Wild Type): darkest black and brown melanin shade in yellow ground birds, Yellow Melanin: mutation showing yellow ground colour with brown and black pigment, Yellow Lipochrome: mutation creating the loss of brown and black pigment, leaving yellow ground colour etc.)
Type canaries (bred for their shape and conformation – Australian plainhead, Berner, Border, Fife, Gibber Italicus, Gloster, Lancashire, Raza Española, Yorkshire, etc.)
Song canaries (bred for their unique and specific song patterns – Spanish Timbrado, German Roller (also known as Harz Roller), Waterslager (also known as "Malinois"), American Singer, Russian Singer, Persian Singer).
While wild canaries are a yellowish-green colour, domestic canaries have been selectively bred for a wide variety of colours, such as yellow, orange, brown, black, white, and red (the colour red was introduced to the domestic canary through hybridisation with the red siskin (Spinus cucullatus), a species of South American finch). Evidence of hybridization has also been found between the domestic canary (S. canaria domestica) and the black-chinned siskin (Spinus barbatus) in captivity.
Midway Atoll is home to a colony of feral yellow canaries, descended from pet birds introduced in 1909 by employees of the Commercial Pacific Cable Company. An estimated 500 canaries, which have retained their bright yellow plumage, are resident on Sand Island.
Competitions
Canaries are judged in competitions following the annual molt in the summer. This means that in the Northern Hemisphere the show season generally begins in October or November and runs through December or January. Birds can only be shown by the persons who raised them. A show bird must have a unique band on its leg indicating the year of birth, the band number, and the club to which the breeder belongs.
There are many canary shows all over the world. The world show (C.O.M. - Confederation Ornithologique Mondiale) is held in Europe each year and attracts thousands of breeders. As many as 20,000 birds are brought together for this competition.
Miner's canary
Mice were used as sentinel species for use in detecting carbon monoxide in British coal mining from around 1896, after the idea had been suggested in 1895 by John Scott Haldane. Toxic gases such as carbon monoxide, asphyxiant gases such as carbon dioxide and explosive gases like methane in the mine would affect small warm-blooded animals before affecting the miners, since their respiratory exchange is more rapid than in humans. A mouse will be affected by carbon monoxide within a few minutes, while a human will have an interval of 20 times as long. Later, canaries were found to be more sensitive and a more effective indicator as they showed more visible signs of distress. Their use in mining is documented from around 1900. The birds were sometimes kept in carriers which had small oxygen bottles attached to revive them. The use of miners' canaries in British mines was phased out in 1986.
The phrase "canary in a coal mine" is frequently used to refer to a person or thing which serves as an early warning of a coming crisis. By analogy, the term "climate canary" is used to refer to a species (called an indicator species) that is affected by an environmental danger prior to other species, thus serving as an early warning system for the other species with regard to the danger.
Use in research
Canaries have been extensively used in research to study neurogenesis, or the birth of new neurons in the adult brain, and also for basic research in order to understand how songbirds encode and produce song. Thus, canaries have served as model species for discovering how the vertebrate brain learns, consolidates memories, and recalls coordinated motor movements.
Fernando Nottebohm, a professor at the Rockefeller University in New York City, detailed the avian brain structures and pathways that are involved in the production of bird song.
Canaries are sometimes used to avoid hazardous human testing. Wasicky et al. (1949) used them in early testing of insect repellents. Human testing could provide only limited sample sizes, and the inherent variance of the host ⇔ repellent ⇔ insect interaction was too high. Canaries, among other test animals, provided larger sample sizes cheaply.
In culture
In organized crime, the canary symbolizes an informant who "sings" to the police.
Canaries have been depicted in cartoons from the mid-20th century as being harassed by domestic cats; the most famous cartoon canary is Warner Bros.' "Tweety".
Norwich City, an English football team, is nicknamed "the Canaries" due to the city once being a famous centre for breeding and export of the birds. The club adopted the colours of yellow and green in homage. Jacob Mackley, of Norwich, won many prizes with birds of the local variety and shipped about 10,000 from Norwich to New York every year. A number of other sports teams worldwide use variations of the name "Canaries", such as Atlético Morelia (Mexico), Botev Plovdiv (Bulgaria), Frosinone (Italy), Koper (Slovenia), FC Novi Sad (Serbia), Fenerbahçe (Turkey), Lillestrøm SK (Norway), Kedah FA (Malaysia), IAPE (Maranhão, Brazil), the Brazil national football team and the Brazil women's national football team.
In Allentown, Pennsylvania, the mascot of the city's largest high school, William Allen High School, are the canaries.
| Biology and health sciences | Passerida | Animals |
7960760 | https://en.wikipedia.org/wiki/Hubble%20volume | Hubble volume | In cosmology, a Hubble volume (named for the astronomer Edwin Hubble), also known as a Hubble sphere, Hubble bubble, subluminal sphere, causal sphere, or sphere of causality, is a spherical region of the observable universe surrounding an observer beyond which objects recede from that observer at a rate greater than the speed of light due to the expansion of the universe. The Hubble volume is approximately equal to 10³¹ cubic light years (or about 10⁷⁹ cubic meters).
The proper radius of a Hubble sphere (known as the Hubble radius or the Hubble length) is c/H₀, where c is the speed of light and H₀ is the Hubble constant. The surface of a Hubble sphere is called the microphysical horizon, the Hubble surface, or the Hubble limit.
More generally, the term Hubble volume can be applied to any region of space with a volume of order (c/H₀)³. However, the term is also frequently (but mistakenly) used as a synonym for the observable universe; the latter is larger than the Hubble volume.
The center of the Hubble volume and of the observable universe is arbitrary in relation to the overall universe; each is centered on its own observer, whether an instrument or a person.
The Hubble length is 14.4 billion light years in the standard cosmological model, equivalent to c times the Hubble time. The Hubble time is the reciprocal of the Hubble constant, and is slightly larger than the age of the universe (13.8 billion years), as it is the age the universe would have had if the expansion had been linear.
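These quantities all follow directly from the Hubble constant. As a sketch (assuming H₀ ≈ 67.7 km/s/Mpc, a commonly quoted value not stated in this article):

```python
import math

C = 2.998e8            # speed of light, m/s
MPC = 3.086e22         # metres per megaparsec
LY = 9.461e15          # metres per light year

# Assumed Hubble constant ~67.7 km/s/Mpc, converted to 1/s.
H0 = 67.7 * 1e3 / MPC

hubble_time_yr = 1 / H0 / (365.25 * 24 * 3600)         # ~14.4 billion years
hubble_radius_ly = C / H0 / LY                          # ~14.4 billion light years
hubble_volume_ly3 = 4 / 3 * math.pi * hubble_radius_ly ** 3  # ~1e31 cubic ly

print(f"{hubble_time_yr:.3g} yr, {hubble_radius_ly:.3g} ly, {hubble_volume_ly3:.3g} ly^3")
```

Note that the radius in light years and the time in years come out numerically equal, which is just the statement that the Hubble length is c times the Hubble time.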
Hubble limit as an event horizon
For objects at the Hubble limit, the space between us and the object of interest has an average expansion speed of c. So, in a universe with constant Hubble parameter, light emitted at the present time by objects outside the Hubble limit would never be seen by an observer on Earth. That is, the Hubble limit would coincide with a cosmological event horizon (a boundary separating events visible at some time and those that are never visible). See Hubble horizon for more details.
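In a constant-H universe, the proper distance r of a photon travelling toward the observer obeys dr/dt = Hr − c, which vanishes exactly at the Hubble radius. The rough numerical sketch below (an illustrative forward-Euler integration with an assumed value of H, not anything from this article) shows that a photon emitted just inside the Hubble limit slowly gains ground on us while one emitted just outside is carried away:

```python
# Proper distance r of an inbound photon obeys dr/dt = H*r - c when H is
# constant: at r = c/H the two terms cancel, so the photon makes no progress.
C = 2.998e8          # speed of light, m/s
H = 2.2e-18          # illustrative constant Hubble parameter, 1/s

def evolve(r0, dt=1e13, steps=2000):
    """Forward-Euler integration of dr/dt = H*r - c (a rough sketch)."""
    r = r0
    for _ in range(steps):
        r += (H * r - C) * dt
    return r

r_hubble = C / H
inside = evolve(0.99 * r_hubble)    # photon slowly gains ground on us
outside = evolve(1.01 * r_hubble)   # photon recedes despite moving at c toward us
print(inside < 0.99 * r_hubble, outside > 1.01 * r_hubble)
```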
However, the Hubble parameter is not constant in various cosmological models, so the Hubble limit does not, in general, coincide with a cosmological event horizon. For example, in a decelerating Friedmann universe the Hubble sphere expands with time, and its boundary overtakes light emitted by more distant galaxies, so that light emitted at earlier times by objects outside the Hubble volume may still eventually arrive inside the sphere and be seen by us. Similarly, in an accelerating universe with a decreasing Hubble constant, the Hubble volume expands with time and can overtake light from sources previously receding relative to us. In both of these circumstances, the cosmological event horizon lies beyond the Hubble horizon.

In a universe with an increasing Hubble constant, the Hubble horizon will contract, and its boundary overtakes light emitted by nearer galaxies, so that light emitted at earlier times by objects inside the Hubble sphere will eventually recede outside the sphere and never be seen by us. If the shrinkage of the Hubble volume does not stop due to some yet unknown phenomenon (one suggestion is an "early phase transition"), the Hubble volume will become nearly a point (due to the uncertainty principle pure singularities are impossible; also a proportion of their self-interactions are energetic enough to produce escaping particles via quantum tunneling), meeting the criteria of a Big Bang. The justification for this view is that no subluminal Hubble volume would exist, and pointwise superluminal expansion (the generalization of the Big Bang theory) would prevail everywhere, or at least in a vast region of the universe. In this cyclic cosmology (there are many other cyclic versions) the universe always expands and does not revert to a smaller default size (non-conformal or expandatory conformal, non-Penrosean expandatory cyclic cosmology).
Observations indicate that the expansion of the universe is accelerating, and the Hubble constant is thought to be decreasing. Thus, sources of light outside the Hubble horizon but inside the cosmological event horizon can eventually reach us. A fairly counter-intuitive result is that photons we observe from the first ~5 billion years of the universe come from regions that are, and always have been, receding from us at superluminal speeds.
| Physical sciences | Physical cosmology | Astronomy |
7962344 | https://en.wikipedia.org/wiki/Rod%20end%20bearing | Rod end bearing | A rod end bearing, also known as a heim joint (N. America) or rose joint (U.K. and elsewhere), is a mechanical articulating joint. Such joints are used on the ends of control rods, steering links, tie rods, or anywhere a precision articulating joint is required, and where a clevis end (which requires perfect 90-degree alignment between the attached shaft and the second component) is unsuitable. A ball swivel with an opening through which a bolt or other attaching hardware may pass is pressed into a circular casing with a threaded shaft attached. The threaded portion may be either male or female. The heim joint's advantage is that the ball insert permits the rod or bolt passing through it to be misaligned to a limited degree (an angle other than 90 degrees). A link terminated in two heim joints permits misalignment of their attached shafts (viz., other than 180 degrees).
History
The spherical rod end bearing was developed in Nazi Germany during World War II. When one of the first German planes shot down by the British in early 1940 was examined, this joint was found in use in the aircraft's control systems. Following this discovery, the Allied governments gave the H.G. Heim Company an exclusive patent to manufacture these joints in North America, while in the UK the patent passed to Rose Bearings Ltd. The ubiquity of these manufacturers in their respective markets led to the terms heim joint and rose joint becoming synonymous with their product. After the patents expired, the common names stuck, although rosejoint remains a registered trademark of Minebea Mitsumi Inc., successor to Rose Bearings Ltd. Originally used in aircraft, the rod end bearing may now be found in cars, trucks, race cars, motorcycles, lawn tractors, boats, industrial machines, go-karts, radio-control helicopters, formula cars, and many other applications.
Female heim joint
Female heim joints allow users to make precise adjustments to key components of fixtures. One example requiring fine adjustment is the pitch of helicopter blades: the adjustment must be set correctly or excessive wear will occur, and a female heim joint makes such quick adjustments easy. When setting the pitch of helicopter blades, heim joints can be adjusted to within 0.010 in. If spacing is critical, female heim joints can be threaded onto the shaft instead of welding inserts to it. With aluminium shafts, the easiest approach is to use the female heim joint; for example, in robotics competitions, where light weight is a key factor, builders pair aluminium rods with female heim joints. Another example of female heim joints in use is the shifter linkage of motorcycles: the shifting mechanism allows force to be applied linearly while still working at the angles encountered in different gears. Both male and female heim joints require a lock nut once the adjustment has been set to the required specification.
| Technology | Mechanisms | null |
7966125 | https://en.wikipedia.org/wiki/HTML5 | HTML5 | HTML5 (Hypertext Markup Language 5) is a markup language used for structuring and presenting hypertext documents on the World Wide Web. It was the fifth and final major HTML version that is now a retired World Wide Web Consortium (W3C) recommendation. The current specification is known as the HTML Living Standard. It is maintained by the Web Hypertext Application Technology Working Group (WHATWG), a consortium of the major browser vendors (Apple, Google, Mozilla, and Microsoft).
HTML5 was first released in a public-facing form on 22 January 2008, with a major update and "W3C Recommendation" status in October 2014. Its goals were to improve the language with support for the latest multimedia and other new features; to keep the language both easily readable by humans and consistently understood by computers and devices such as web browsers, parsers, etc., without XHTML's rigidity; and to remain backward-compatible with older software. HTML5 is intended to subsume not only HTML 4 but also XHTML 1 and even DOM Level 2 HTML itself.
HTML5 includes detailed processing models to encourage more interoperable implementations; it extends, improves, and rationalizes the markup available for documents and introduces markup and application programming interfaces (APIs) for complex web applications. For the same reasons, HTML5 is also a candidate for cross-platform mobile applications because it includes features designed with low-powered devices in mind.
Many new syntactic features are included. To natively include and handle multimedia and graphical content, the new <video>, <audio> and <canvas> elements were added; expandable sections are natively implemented through <details> and <summary> rather than depending on CSS or JavaScript; and support for scalable vector graphics (SVG) content and MathML for mathematical formulas was also added. To enrich the semantic content of documents, new page structure elements such as <main>, <section>, <article>, <header>, <footer>, <aside>, <nav>, and <figure> are added. New attributes were introduced, some elements and attributes were removed, and others such as <a>, <cite>, and <menu> were changed, redefined, or standardized. The APIs and Document Object Model (DOM) are now fundamental parts of the HTML5 specification, and HTML5 also better defines the processing for any invalid documents.
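As a rough illustration of these structural elements (a sketch using Python's standard html.parser module, not anything taken from the specification itself), the snippet below parses a small fragment that uses several of them and collects the tag names it encounters:

```python
from html.parser import HTMLParser

# A small document using several of the structural elements HTML5 added.
DOC = """
<article>
  <header><h1>Post title</h1></header>
  <nav><a href="#top">Top</a></nav>
  <section>
    <details><summary>More</summary><p>Expandable without JavaScript.</p></details>
    <figure><figcaption>A caption</figcaption></figure>
  </section>
  <footer><p>Footer text</p></footer>
</article>
"""

class TagCollector(HTMLParser):
    """Records every start tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.tags = set()

    def handle_starttag(self, tag, attrs):
        self.tags.add(tag)

collector = TagCollector()
collector.feed(DOC)

# Show which of the HTML5 structural elements appeared in the fragment.
print(sorted(collector.tags & {"article", "section", "header", "footer",
                               "nav", "figure", "details", "summary"}))
```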
History
The Web Hypertext Application Technology Working Group (WHATWG) began work on the new standard in 2004. At that time, HTML 4.01 had not been updated since 2000, and the World Wide Web Consortium (W3C) was focusing future developments on XHTML 2.0. In 2009, the W3C allowed the XHTML 2.0 Working Group's charter to expire and decided not to renew it.
The Mozilla Foundation and Opera Software presented a position paper at a World Wide Web Consortium workshop in June 2004, focusing on developing technologies that are backward-compatible with existing browsers, including an initial draft specification of Web Forms 2.0. The workshop concluded with a vote—8 for, 14 against—for continuing work on HTML. Immediately after the workshop, WHATWG was formed to start work based upon that position paper, and a second draft, Web Applications 1.0, was also announced. The two specifications were later merged to form HTML5. The HTML5 specification was adopted as the starting point of the work of the new HTML working group of the W3C in 2007.
WHATWG's Ian Hickson (Google) and David Hyatt (Apple) produced W3C's first public working draft of the specification on 22 January 2008.
Many web browsers released after 2009 support HTML5, including Google Chrome 3.0, Safari 3.1, Firefox 3.5, Opera 10.5, Internet Explorer 9 and later.
"Thoughts on Flash"
While some features of HTML5 are often compared to Adobe Flash, the two technologies are very different. Both include features for playing audio and video within web pages, and for using Scalable Vector Graphics. However, HTML5 on its own cannot be used for animation or interactivity – it must be supplemented with CSS3 or JavaScript. There are many Flash capabilities that have no direct counterpart in HTML5 (see Comparison of HTML5 and Flash). HTML5's interactive capabilities became a topic of mainstream media attention around April 2010 after Apple Inc.'s then-CEO Steve Jobs issued a public letter titled "Thoughts on Flash" in which he concluded that "Flash is no longer necessary to watch video or consume any kind of web content" and that "new open standards created in the mobile era, such as HTML5, will win". This sparked a debate in web development circles suggesting that, while HTML5 provides enhanced functionality, developers must consider the varying browser support of the different parts of the standard as well as other functionality differences between HTML5 and Flash. In early November 2011, Adobe announced that it would discontinue the development of Flash for mobile devices and reorient its efforts in developing tools using HTML5. On 25 July 2017, Adobe announced that both the distribution and support of Flash would cease by the end of 2020. Adobe itself officially discontinued Flash on 31 December 2020 and all Flash content was blocked from running in Flash Player as of 12 January 2021.
Last call, candidacy, and recommendation stages
On 14 February 2011, the W3C extended the charter of its HTML Working Group with clear milestones for HTML5. In May 2011, the working group advanced HTML5 to "Last Call", an invitation to communities inside and outside W3C to confirm the technical soundness of the specification. The W3C developed a comprehensive test suite to achieve broad interoperability for the full specification by 2014, which was the target date for recommendation. In January 2011, the WHATWG renamed its "HTML5" specification HTML Living Standard. The W3C nevertheless continued its project to release HTML5.
In July 2012, WHATWG and W3C decided on a degree of separation. W3C will continue the HTML5 specification work, focusing on a single definitive standard, which is considered a "snapshot" by WHATWG. The WHATWG organization continues its work with HTML5 as a "living standard". The concept of a living standard is that it is never complete and is always being updated and improved. New features can be added but functionality will not be removed.
In December 2012, W3C designated HTML5 as a Candidate Recommendation. The criterion for advancement to W3C Recommendation is "two 100% complete and fully interoperable implementations".
On 16 September 2014, W3C moved HTML5 to Proposed Recommendation. On 28 October 2014, HTML5 was released as a W3C Recommendation, bringing the specification process to completion. On 1 November 2016, HTML 5.1 was released as a W3C Recommendation. On 14 December 2017, HTML 5.2 was released as a W3C Recommendation.
Retirement
The W3C retired HTML5 on 27 March 2018. Additionally, the retirement included HTML 4.0, HTML 4.01, XHTML 1.0, and XHTML 1.1. HTML 5.1, HTML 5.2 and HTML 5.3 were all retired on 28 January 2021, in favour of the HTML living standard.
W3C and WHATWG conflict
The W3C ceded authority over the HTML and DOM standards to WHATWG on 28 May 2019, as it considered that having two standards is harmful. The HTML Living Standard is now authoritative. However, W3C will still participate in the development process of HTML.
Before the ceding of authority, W3C and WHATWG had been characterized as both working together on the development of HTML5, and yet also at cross purposes ever since the July 2012 split. The W3C "HTML5" standard was snapshot-based (HTML5, HTML 5.1, etc.) and static, while the WHATWG "HTML living standard" is continually updated. The relationship had been described as "fragile", even a "rift", and characterized by "squabbling".
In at least one case, namely the permissible content of the element, the two specifications directly contradicted each other, with the W3C definition allowing a broader range of uses than the WHATWG definition.
The "Introduction" section in the WHATWG spec (edited by Ian "Hixie" Hickson) is critical of W3C, e.g. " Although we have asked them to stop doing so, the W3C also republishes some parts of this specification as separate documents." In its "History" subsection it portrays W3C as resistant to Hickson's and WHATWG's original HTML5 plans, then jumping on the bandwagon belatedly (though Hickson was in control of the W3C HTML5 spec, too). Regardless, it indicates a major philosophical divide between the organizations:
The two entities signed an agreement to work together on a single version of HTML on 28 May 2019.
Differences between the two standards
In addition to the contradiction in the element mentioned above, other differences between the two standards include at least the following:
The following table provides data from the Mozilla Developer Network on the compatibility with major browsers of HTML elements unique to one of the standards:
Features and APIs
The W3C proposed a greater reliance on modularity as a key part of the plan to make faster progress, meaning identifying specific features, either proposed or already existing in the spec, and advancing them as separate specifications. Some technologies that were originally defined in HTML5 itself are now defined in separate specifications:
HTML Working Group — HTML Canvas 2D Context;
Immersive Web Working Group — WebXR Device API, WebXR Gamepads Module, WebXR Augmented Reality Module, and others;
Web Apps Working Group — Web Messaging, Web workers, Web storage, WebSocket, Server-sent events, Web Components (this was not part of HTML5, though); the Web Applications Working Group was closed in October 2015 and its deliverables transferred to the Web Platform Working Group (WPWG).
IETF HyBi Working Group — WebSocket Protocol;
WebRTC Working Group — WebRTC;
Web Media Text Tracks Community Group — WebVTT.
Some features that were removed from the original HTML5 specification have been standardized separately as modules, such as Microdata and Canvas. Technical specifications introduced as HTML5 extensions such as Polyglot markup have also been standardized as modules. Some W3C specifications that were originally separate specifications have been adapted as HTML5 extensions or features, such as SVG. Some features that might have slowed down the standardization of HTML5 were or will be standardized as upcoming specifications, instead.
Features
Markup
HTML5 introduces elements and attributes that reflect typical usage on modern websites. Some of them are semantic replacements for common uses of the generic block (div) and inline (span) elements, for example nav (website navigation block), footer (usually referring to bottom of web page or to last lines of HTML code), or audio and video instead of object.
Some deprecated elements from HTML 4.01 have been dropped, including purely presentational elements such as font and center, whose effects have long been superseded by the more capable Cascading Style Sheets. There is also a renewed emphasis on the importance of client-side JavaScript used to create dynamic web pages.
The HTML5 syntax is no longer based on SGML despite the similarity of its markup. It has, however, been designed to be backward-compatible with common parsing of older versions of HTML. It comes with a new introductory line that looks like an SGML document type declaration, <!DOCTYPE html>, which triggers the standards-compliant rendering mode.
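As a sketch, a minimal HTML5 document using the new doctype (the title and content here are placeholders):

```html
<!DOCTYPE html>
<!-- The short doctype above is all that is needed to trigger the
     standards-compliant rendering mode; no SGML DTD reference is required. -->
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Example</title>
  </head>
  <body>
    <header><h1>Example page</h1></header>
    <nav><a href="#main">Skip to content</a></nav>
    <article id="main"><p>Hello, HTML5.</p></article>
    <footer><p>Footer text.</p></footer>
  </body>
</html>
```

Note the charset attribute on meta, one of the attributes new in HTML5, replacing the longer http-equiv form.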
Since 5 January 2009, HTML5 also includes Web Forms 2.0, a previously separate WHATWG specification.
New APIs
In addition to specifying markup, HTML5 specifies scripting application programming interfaces (APIs) that can be used with JavaScript. Existing Document Object Model (DOM) interfaces are extended and de facto features documented. There are also new APIs, such as:
Canvas;
Timed Media Playback;
Offline;
Editable content;
Drag and drop;
History;
MIME type and protocol handler registration;
Microdata;
Web Messaging;
Web Storage – a key-value pair storage framework that provides behavior similar to cookies but with larger storage capacity and improved API.
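As a minimal sketch of the last of these, the Web Storage key-value API. In a browser the global localStorage object is provided by the platform; the in-memory stand-in below is only so the snippet also runs outside a browser, and is not part of the standard.

```javascript
// Sketch of the Web Storage key-value API. In a browser the global
// localStorage object is provided; outside a browser we fall back to a
// minimal in-memory stand-in exposing the same three methods.
const storage = (typeof localStorage !== "undefined") ? localStorage : (() => {
  const map = new Map();
  return {
    setItem: (key, value) => { map.set(key, String(value)); },
    getItem: (key) => (map.has(key) ? map.get(key) : null),
    removeItem: (key) => { map.delete(key); },
  };
})();

storage.setItem("theme", "dark");      // values are always stored as strings
console.log(storage.getItem("theme")); // "dark"
storage.removeItem("theme");
console.log(storage.getItem("theme")); // null (missing keys return null)
```

Unlike cookies, the stored pairs are not sent to the server with every request, and the storage quota is considerably larger.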
Not all of the above technologies are included in the W3C HTML5 specification, though they are in the WHATWG HTML specification. Some related technologies, which are not part of either the W3C HTML5 or the WHATWG HTML specification, are as follows. The W3C publishes specifications for these separately:
Geolocation;
IndexedDB – an indexed hierarchical key-value store (formerly WebSimpleDB);
File – an API intended to handle file uploads and file manipulation;
Directories and System – an API intended to satisfy client-side-storage use cases not well served by databases;
File Writer – an API for writing to files from web applications;
Web Audio – a high-level JavaScript API for processing and synthesizing audio in web applications;
ClassList;
Web Cryptography API;
WebRTC;
Web SQL Database – a local SQL Database (no longer maintained);
HTML5 cannot provide animation within web pages on its own; additional JavaScript or CSS3 is necessary for animating HTML elements. Animation is also possible using JavaScript and HTML 4, and within SVG elements through SMIL, although browser support of the latter remains uneven.
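A minimal sketch of that division of labour, in which HTML5 supplies only the element and CSS3 performs the animation (the class and keyframe names here are illustrative):

```html
<div class="pulse"></div>
<style>
  /* CSS3, not HTML5 itself, does the animating */
  .pulse {
    width: 50px;
    height: 50px;
    background: teal;
    animation: grow 2s ease-in-out infinite alternate;
  }
  @keyframes grow {
    from { transform: scale(1); }
    to   { transform: scale(1.5); }
  }
</style>
```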
XHTML5 (XML-serialized HTML5)
XML documents must be served with an XML Internet media type (often called "MIME type") such as application/xhtml+xml or application/xml, and must conform to strict, well-formed syntax of XML. XHTML5 is simply XML-serialized HTML5 data (that is, HTML5 constrained to XHTML's strict requirements, e.g., not having any unclosed tags), sent with one of XML media types. HTML that has been written to conform to both the HTML and XHTML specifications and therefore produces the same DOM tree whether parsed as HTML or XML is known as polyglot markup.
There is no DTD for XHTML5.
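A minimal XHTML5 document might look as follows, served as application/xhtml+xml; note the XML namespace and that every element, including void elements, is explicitly closed:

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
  <head>
    <title>XHTML5 example</title>
  </head>
  <body>
    <p>Well-formed XML, handled by the browser's XML parser.</p>
    <br/><!-- void elements use the XML self-closing form -->
  </body>
</html>
```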
Error handling
HTML5 is designed so that old browsers can safely ignore new HTML5 constructs. In contrast to HTML 4.01, the HTML5 specification gives detailed rules for lexing and parsing, with the intent that compliant browsers will produce the same results when parsing incorrect syntax. Although HTML5 now defines a consistent behavior for "tag soup" documents, those documents do not conform to the HTML5 standard.
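For example, given misnested "tag soup" input such as the following, the HTML5 parsing algorithm specifies exactly one resulting DOM tree, so all compliant browsers agree on it even though the document itself is non-conforming:

```html
<!-- Misnested markup: the i element is closed after its parent b element.
     The parser repairs this into a well-formed tree in a defined way. -->
<p><b>bold <i>bold italic</b> italic</i> plain</p>
```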
Popularity
According to a report released on 30 September 2011, 34 of the world's top 100 Web sites were using HTML5, with adoption led by search engines and social networks. Another report, released in August 2013, showed that 153 of the Fortune 500 U.S. companies implemented HTML5 on their corporate websites.
Since 2014, HTML5 is at least partially supported by most popular layout engines.
Differences from HTML 4.01 and XHTML 1.x
The following is a cursory list of differences and some specific examples.
New parsing rules: oriented towards flexible parsing and compatibility; not based on SGML
Ability to use inline SVG and MathML in text/html
New elements: article, aside, audio, bdi, canvas, command, data, datalist, details, embed, figcaption, figure, footer, header, keygen, mark, meter, nav, output, progress, rp, rt, ruby, section, source, summary, time, track, video, wbr
New types of form controls: dates and times, email, url, search, number, range, tel, color
New attributes: charset (on meta), async (on script)
Global attributes (that can be applied for every element): id, tabindex, hidden, data-* (custom data attributes)
Deprecated elements will be dropped altogether: acronym, applet, basefont, big, center, dir, font, frame, frameset, isindex, noframes, strike, tt
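The new form control types listed above can be sketched as follows (the field names are illustrative); browsers without support for a given type fall back to a plain text input:

```html
<form>
  <input type="email"  name="contact" placeholder="you@example.com">
  <input type="date"   name="when">
  <input type="range"  name="volume" min="0" max="10">
  <input type="color"  name="shade">
  <input type="search" name="q">
</form>
```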
W3C Working Group publishes "HTML5 differences from HTML 4", which provides a complete outline of additions, removals and changes between HTML5 and HTML4.
Logo
On 18 January 2011, the W3C introduced a logo to represent the use of or interest in HTML5. Unlike other badges previously issued by the W3C, it does not imply validity or conformance to a certain standard. As of 1 April 2011, this logo is official.
When initially presenting it to the public, the W3C announced the HTML5 logo as a "general-purpose visual identity for a broad set of open web technologies, including HTML5, CSS, SVG, WOFF, and others". Some web standard advocates, including The Web Standards Project, criticized that definition of "HTML5" as an umbrella term, pointing out the blurring of terminology and the potential for miscommunication. Three days later, the W3C responded to community feedback and changed the logo's definition, dropping the enumeration of related technologies. The W3C then said the logo "represents HTML5, the cornerstone for modern Web applications".
Digital rights management
Industry players including the BBC, Google, Microsoft, and Apple Inc. have been lobbying for the inclusion of Encrypted Media Extensions (EME), a form of digital rights management (DRM), in the HTML5 standard. At the end of 2012 and the beginning of 2013, 27 organizations, including the Free Software Foundation, started a campaign against including digital rights management in the HTML5 standard. However, in late September 2013, the W3C HTML Working Group decided that Encrypted Media Extensions was "in scope" and would potentially be included in the HTML 5.1 standard. WHATWG's "HTML Living Standard" continued to be developed without DRM-enabled proposals.
Manu Sporny, a member of the W3C, said that EME would not solve the problem it was supposed to address.
Opponents point out that EME itself is just an architecture for a DRM plug-in mechanism.
The initial enablers for DRM in HTML5 were Google and Microsoft; supporters also include Adobe. On 14 May 2014, Mozilla announced plans to support EME in Firefox, the last major browser to avoid DRM. Calling it "a difficult and uncomfortable step", Andreas Gal of Mozilla explained that future versions of Firefox would remain open source but ship with a sandbox designed to run a content decryption module developed by Adobe; this was later replaced with Google's Widevine module, which is much more widely adopted by content providers. While promising to "work on alternative solutions", Mozilla's Executive Chair Mitchell Baker stated that a refusal to implement EME would have accomplished little more than convincing many users to switch browsers. This decision was condemned by Cory Doctorow and the Free Software Foundation.
As of December 2023, the W3C has changed their opinion on EME, stating: "Encrypted Media Extensions (EME) brings greater interoperability, better privacy, security, accessibility and user experience in viewing movies and TV on the Web".
Weever

Weevers (or weeverfish) are nine extant species of ray-finned fishes of the family Trachinidae in the order Perciformes, part of the wider clade Percomorpha. They are long (up to 37 cm), mainly brown in color, and have venomous spines on their first dorsal fin and gills. During the day, weevers bury themselves in sand, just showing their eyes, and snatch prey, consisting of shrimp and small fish, as it comes past.
Weevers are unusual in lacking swim bladders, which most bony fish have, and as a result they sink as soon as they stop actively swimming. With the exception of T. cornutus from the southeast Pacific, all species in this family are restricted to the eastern Atlantic (including the Mediterranean). An extinct relative, Callipteryx, is known from the Monte Bolca lagerstätte of the Lutetian epoch.
Weevers are sometimes used as an ingredient in the recipe for bouillabaisse.
Weevers are sometimes erroneously called 'weaver fish', although the word is unrelated. In fact the word 'weever' is believed to derive from the Old French word wivre, meaning serpent or dragon, from the Latin vipera. It is sometimes also known as the viperfish, although it is not related to the viperfish proper (i.e. the stomiids of the genus Chauliodus).
In Australia sand perches of the family Mugilidae are also known as weevers.
In Portugal the weever is known as peixe-aranha, which translates to 'spider-fish', and in Catalan as aranya, which is identical to the word for 'spider'.
Species
The 9 extant species in two genera are:
Genus Echiichthys
Lesser weever, E. vipera (Cuvier, 1829)
Genus Trachinus
Spotted weever, T. araneus Cuvier, 1829
Guinean weever, T. armatus Bleeker, 1861
Sailfin weever, T. collignoni Roux, 1957
Trachinus cornutus Guichenot, 1848
Greater weever, T. draco Linnaeus, 1758
Striped weever, T. lineolatus Fischer, 1885
Cape Verde weever, T. pellegrini Cadenat, 1937
Starry weever, T. radiatus Cuvier, 1829
Interaction with humans
Stings: causes, frequency and prevention
Most human stings are inflicted by the lesser weever, which habitually remains buried in sandy areas of shallow water and is thus more likely to come into contact with bathers than other species (such as the greater weever, which prefers deeper water); stings from other species are generally limited to anglers and commercial fishermen. Even very shallow water (sometimes little more than damp sand) may harbour lesser weevers. The vast majority of injuries occur to the foot and are the result of stepping on buried fish; other common sites of injury are the hands and buttocks.
Stings are most common in the hours before and after low tide (especially at springs), so one possible precaution is to avoid bathing or paddling at these times. Weever stings have been known to penetrate wet suit boots even through a rubber sole (if thin), and bathers and surfers should wear sandals, "jelly shoes", or wetsuit boots with relatively hard soles, and avoid sitting or "rolling" in the shallows. Stings also increase in frequency during the summer (to a maximum in August), but this is probably the result of the greater number of bathers.
The lesser weever can be found from the southern North Sea to the Mediterranean, and is common around the south coast of the United Kingdom and Ireland, the Atlantic coast of France, Portugal and Spain, and the northern coast of the Mediterranean. The high number of bathers found on popular tourist beaches in these areas means stings are common, although individual chances of being stung are low. The South Wales Evening Post stated (on 8 August 2000) that around 40 weever stings are recorded in the Swansea and Gower area every year, but many victims do not seek medical assistance and go uncounted.
Symptoms
At first many victims believe they have simply scratched themselves on a sharp stone or shell, although this barely hurts; significant pain begins 2–3 minutes afterwards. Weever stings cause severe pain; common descriptions from victims are "extremely painful" and "much worse than a wasp (or bee) sting".
Common and minor symptoms include severe pain, itching, swelling, heat, redness, numbness, tingling, nausea, vomiting, joint aches, headaches, abdominal cramps, lightheadedness, increased urination and tremors.
Rare and severe symptoms include abnormal heart rhythms, weakness, shortness of breath, seizures, decreased blood pressure, gangrene, tissue degeneration and unconsciousness.
Treatment
Although extremely unpleasant, weever stings are not generally dangerous and the pain will ease considerably within a few hours even if untreated. Complete recovery may take a week or more; in a few cases, victims have reported swelling and/or stiffness persisting for months after envenomation.
First aid treatment consists of immersing the affected area in hot water (as hot as the victim can tolerate without being scalded), which will accelerate denaturation of the protein-based venom. The use of hot water will reduce the pain felt by the victim after a few minutes. Usual experience is that the pain then fades within 10 to 20 minutes, as the water cools. Folklore often suggests the addition of substances to the hot water, including urine, vinegar, and Epsom salts, but this is of limited or no value. Heat should be applied for at least 15 minutes, but the longer the delay (before heat is applied), the longer the treatment should be continued. Once the pain has eased, the injury should be checked for the remains of broken spines, and any found need to be removed. Over-the-counter analgesics, such as aspirin or ibuprofen, may be of assistance in management of pain and can also reduce edema.
Medical advice should be sought if any of the symptoms listed above as rare or severe are observed, if swelling spreads beyond the immediate area of injury (e.g. from hand to arm), if symptoms persist, or if any other factor causes concern. Medical treatment consists of symptom management, analgesia (often with opiates) and the same heat treatment as for first aid; more systemic treatment using histamine antagonists may assist in reducing local inflammation.
Fatalities
The only recorded death in the UK occurred in 1933, when a fisherman off Dungeness suffered multiple stings. The victim may have died of other medical causes exacerbated by the stings.
Hair care

Hair care or haircare is an overall term for hygiene and cosmetology involving the hair which grows from the human scalp, and to a lesser extent facial, pubic and other body hair. Hair care routines differ according to an individual's culture and the physical characteristics of one's hair. Hair may be colored, trimmed, shaved, plucked or otherwise removed with treatments such as waxing, sugaring and threading. Hair care services are offered in salons, barbershops and day spas, and products are available commercially for home use. Laser hair removal and electrolysis are also available, though these are provided (in the US) by licensed professionals in medical offices or speciality spas.
Hair cleaning and conditioning
Biological processes and hygiene
Care of the hair and care of the scalp skin may appear separate, but are actually intertwined because hair grows from beneath the skin. The living parts of hair (hair follicle, hair root, root sheath and sebaceous gland) are beneath the skin, while the actual hair shaft which emerges (the cuticle which covers the cortex and medulla) has no living processes. Damage or changes made to the visible hair shaft cannot be repaired by a biological process, though much can be done to manage hair and ensure that the cuticle remains intact.
Scalp skin, just like any other skin on the body, must be kept healthy to ensure a healthy body and healthy hair production. For those who have rough hair or a hair-fall problem, overly frequent cleaning of the scalp can itself result in loss of hair. However, not all scalp disorders are a result of bacterial infections. Some arise inexplicably, and often only the symptoms can be treated for management of the condition (example: dandruff). There are also parasites that can affect the hair itself; head lice are probably the most common hair and scalp ailment worldwide. Head lice can be removed with great attention to detail, and studies show infestation is not necessarily associated with poor hygiene. More recent studies reveal that head lice actually thrive in clean hair. In this way, hair washing as a term may be a bit misleading, as what is necessary in healthy hair production and maintenance is often simply cleaning the surface of the scalp skin, the way the skin all over the body requires cleaning for good hygiene.
The sebaceous glands in human skin produce sebum, which is composed primarily of fatty acids. Sebum acts to protect hair and skin, and can inhibit the growth of microorganisms on the skin. Sebum contributes to the skin's slightly acidic natural pH somewhere between 5 and 6.8 on the pH spectrum. This oily substance gives hair moisture and shine as it travels naturally down the hair shaft, and serves as a protective substance by preventing the hair from drying out or absorbing excessive amounts of external substances. Even though sebum serves as a protective substance, too much of this oily substance can cause blockage around hair follicles. This blockage is usually from dandruff or even dead skin. As a result, "blocked or obstructed hair follicles" may prevent hair from being produced. Sebum is also distributed down the hair shaft "mechanically" by brushing and combing. When sebum is present in excess, the roots of the hair can appear oily, greasy, and darker than normal, and the hair may stick together.
Hair cleaning
Washing hair removes excess sweat and oil, as well as unwanted products from the hair and scalp. Often hair is washed as part of a shower or bathing with shampoo, a specialized surfactant. Shampoos work by applying water and shampoo to the hair. The specific shampoo for oily or dry hair breaks the surface tension of the water, allowing the hair to become soaked. This is known as the wetting action. The wetting action is caused by the head of the shampoo molecule attracting the water to the hair shaft. Conversely, the tail of the shampoo molecule is attracted to the grease, dirt and oil on the hair shaft. The physical action of shampooing makes the grease and dirt become an emulsion that is then rinsed away with the water. This is known as the emulsifying action. Sulfate-free shampoos are less damaging to color-treated hair than normal shampoos that contain sulfates. Sulfates strip away natural oils as well as hair dye. Sulfates are also responsible for the foaming effect of shampoos.
Shampoos have a pH of between 4 and 6. Acidic shampoos are the most common type used and maintain or improve the condition of the hair, as they do not swell the hair shaft and do not strip the natural oils.
Hairstyling tools
Hairstyling equipment
Hairstyling equipment which helps in creating hairstyles include:
Hair dryer
Hair clip
Comb
Hair iron
Hair roller
Hair clipper
Hairbrush
Hairpin
Headband
Kanzashi
Ribbon
Hair tie
Scissors
Shower cap
Hair products
Cosmetics products used in creating and maintaining hairstyles include:
Hair coloring
Hair conditioner
Hair gel
Hair glue
Hair mousse
Hair serum
Hair spray
Hair tonic
Hair wax
Pomade
Hair lengths
Bald – having no hair at all on the head
Shaved – hair that is completely shaved down to the scalp
Buzz – hair that is extremely short and hardly there
Cropped – hair that is a little longer than a buzz
Short back and sides – hair that is longer than a crop, but does not yet hit the ears
Ear-length – hair reaching one's ears
Chin-level – hair that grows down to the chin
Flip-level – hair reaching the neck or shoulders
Shoulder-length – hair reaching the shoulders
Armpit-length – hair reaching the armpit
Midback-level – hair that's at about the same point as the widest part of one's ribcage and chest area
Waist-length – hair that falls at the smallest part of one's waist, a little bit above the hip bones
Hip-length – hair reaching the top of one's hips
Tailbone-length – hair that is at about the area of one's tailbone
Classic length – hair that reaches where one's legs meet the buttocks
Thigh-length – hair that is at the mid-thigh
Knee-length – hair that is at the knee
Calf-length – hair that is at the calf
Floor-length – hair that reaches the floor
Chemical alteration
Chemical alterations such as perming and coloring can be carried out to change the perceived color and texture of hair. All of these are temporary alterations, because permanent alterations are not possible at this time.
Chemical alteration of hair only affects the hair above the scalp; unless the hair roots are damaged, new hair will grow in with natural color and texture.
Hair coloring
Hair coloring is the process of adding pigment to or removing pigment from the hair shaft. Hair coloring processes may be referred to as coloring or bleaching, depending on whether pigment is being added or removed.
Temporary hair tints simply coat the shaft with pigments which later wash off.
Most permanent color changes require that the cuticle of the hair be opened so the color change can take place within the cuticle. This process, which uses chemicals to alter the structure of the hair, can damage the cuticle or internal structure of the hair, leaving it dry, weak, or prone to breakage. After the hair processing, the cuticle may not fully close, which results in coarse hair or an accelerated loss of pigment. Generally, the lighter the chosen color relative to one's initial hair color, the more damaged the hair may be. Other options for applying color to hair besides chemical dyes include the use of such herbs as henna and indigo, or choosing ammonia-free solutions.
Perms and chemical straightening
Perms and relaxation using relaxer or thermal reconditioning involve chemical alteration of the internal structure of the hair in order to affect its curliness or straightness. Hair that has been subjected to the use of a permanent is weaker due to the application of chemicals, and should be treated gently and with greater care than hair that isn't chemically altered.
Special considerations for hair types
Long hair
Many industries have requirements for hair being contained to prevent worker injury. This can include people working in construction, utilities, and machine shops of various sorts. Furthermore, many professions require containing the hair for reasons of public health, and a prime example is the food industry. There are also sports that may require similar constraints for safety reasons: to keep hair out of the eyes and blocking one's view, and to prevent being caught in sports equipment or trees and shrubs, or matted hair in severe weather conditions or water. For longer tresses, safety is also the usual reason for not allowing hair to fly loose on the back of a motorcycle or in an open-topped sports car.
Delicate skin
Scalp skin of babies and the elderly is similar in subdued sebaceous gland production, due to hormonal levels. The sebaceous gland secretes sebum, a waxy ester, which maintains the acid mantle of the scalp and provides a coating that keeps skin supple and moist. Sebum builds up excessively over a period of roughly 2–3 days for the average adult. Those with delicate skin may experience a longer interval. Teenagers often require daily washing of the hair. Sebum also imparts a protective coating to hair strands. Daily washing will remove the sebum daily and incite an increase in sebum production, because the skin registers that the scalp is lacking sufficient moisture. In cases of scalp disorders, however, this may not be the case. For babies and the elderly, sebaceous gland production is not at its peak, thus daily washing is not typically needed.
Treatment of damage
Split ends
Split ends, known formally as trichoptilosis, happen when the protective cuticle has been stripped away from the ends of hair fibers.
This condition involves a longitudinal splitting of the hair fiber. Any chemical or physical trauma, such as heat, that weathers the hair may eventually lead to split ends. Typically, the damaged hair fiber splits into two or three strands and the split may be two to three centimeters in length. Split ends are most often observed in long hair but also occur in short hair that is not in good condition.
As hair grows, the natural protective oils of the scalp can fail to reach the ends of the hair. The ends are considered old once they reach about 10 centimeters since they have had long exposure to the sun, gone through many shampoos and may have been overheated by hair dryers and hot irons. This all results in dry, brittle ends which are prone to splitting. Infrequent trims and lack of hydrating treatments can intensify this condition.
Breakage and other damage
Hair can be damaged by chemical exposure, prolonged or repeated heat exposure (as through the use of heat styling tools), and by perming and straightening. Oil is harmful for rough hair and for a dry scalp, as it decreases nourishment for hair, leading to split ends and hair fall. When hair behaves in an unusual way, or a scalp skin disorder arises, it is often necessary to visit not only a qualified physician, but sometimes a dermatologist, or a trichologist. Conditions that require this type of professional help include, but are not limited to, forms of alopecia, hair pulling/picking, hair that sticks straight out, black dots on the hair, and rashes or burns resulting from chemical processes.
Gel provides a shiny look but dries the hair and makes it rough.
There are a number of disorders that are particular to the scalp. Symptoms may include:
Abnormal odor
Bleeding
Bumps
Caking skin buildup that appears white or another color than one's natural skin tone
Chafes
Clumps of hair falling out
Clumpy flakes that do not easily slough off the scalp skin
Dandruff and clumps
Dry hair & scalp
Excessive itchiness that doesn't go away after a few hair washes; redness of scalp skin
Patches of thinning
Pus-like drainage
Shedding
Any of these symptoms may indicate a need for professional assistance from a dermatologist or trichologist for diagnosis.
Scalp skin can suffer from infestations of mites, lice, infections of the follicles or fungus. There could be allergic reactions to ingredients in chemical preparations applied to the hair, even ingredients from shampoo or conditioners. Common concerns include dandruff (often associated with excessive sebum), psoriasis, eczema, and seborrheic dermatitis.
An odor that persists for a few weeks despite regular hair washing may be an indication of a health problem on the scalp skin.
Not all flakes are dandruff. For example, some can merely be product buildup on the scalp skin. This could result from the common practice of applying conditioner to scalp skin without washing. This would dry upon the scalp skin and flake off, appearing like dandruff and even causing itchiness, but have no health effects whatsoever.
There are various reasons for hair loss, most commonly hormonal issues. Fluctuations in hormones will often show in the hair. Not all hair loss is related to what is known as male pattern baldness; women can suffer from baldness just as men do. Formulas exist for addressing this specific cause of lack of hair growth, yet they typically require around three months of consistent use before results begin to appear. Cessation of use may also mean that any gained growth dissipates.
Particularly among women, thyroid disease is one of the more under-diagnosed health concerns. Hair falling out in clumps is one symptom of a set of symptoms that may indicate a thyroid concern. In many gynecological exams a blood screen for thyroid is now a common protocol. Thyroid often shows up first in the behavior of the hair.
During pregnancy and breastfeeding, the normal and natural shedding process is typically suspended for the period of gestation, starting around month three (it takes a while for the body to recognize and reset for the hormonal shifts it goes through), and extended for as long as one breastfeeds (this includes pumping breast milk). Upon cessation of either, it typically takes around two months for the hormones to return to their normal settings, and hair shedding can then increase markedly for approximately 3–6 months until hair returns to its normal volume. It is commonly noticed that hair seems thicker and shinier during pregnancy and breastfeeding in response to the influx of shifting hormones. It is not unusual for hair color or hair structure to change (e.g., straighter hair, curlier hair). These changes occur more often than people may realize, yet are not often reported.
General hair loss
Some choose to shave their hair off entirely, while others may have an illness (such as a form of cancer—note that not every form of cancer or cancer treatment necessarily means one will lose their hair) that caused hair loss or led to a decision to shave the head.
Hair care and nutrition
Genetics and health are factors in healthy hair. Proper nutrition is important for hair health. The living part of hair is under the scalp skin, where the hair root is housed in the hair follicle. The entire follicle and root are fed by a supply of arteries, and blood carries nutrients to the follicle and root. Health concerns of many kinds, including stress, trauma, medications of various sorts, chronic medical conditions or conditions that come and then wane, heavy metals in water and food, and smoking, can affect the hair, its growth, and its appearance.
Generally, eating a full diet that contains protein, fruits, vegetables, fat, and carbohydrates is important (several vitamins and minerals require fat in order to be delivered or absorbed by the body). Any deficiency will typically show first in the hair. A mild case of anemia can cause shedding and hair loss. Among others, the B group of vitamins are the most important for healthy hair, especially biotin. B5 (pantothenic acid) gives hair flexibility, strength and shine and helps prevent hair loss and graying. B6 helps prevent dandruff and can be found in cereals, egg yolk and liver. Vitamin B12 helps prevent the loss of hair and can be found in fish, eggs, chicken and milk.
When the body is under strain, it reprioritizes its processes. For example, the vital organs will be attended to first, meaning that healthy, oxygenated blood may not feed into the hair follicle, resulting in less healthy hair or a decline in growth rate. While not all hair growth issues stem from malnutrition, it is a valuable symptom in diagnosis.
Scalp hair grows, on average, at a rate of about 1.25 centimeters per month, and shampoos or vitamins have not been shown to noticeably change this rate. Hair growth rate also depends upon what phase in the cycle of hair growth one is actually in; there are three phases. The speed of hair growth varies based upon genetics, gender, age, hormones, and may be reduced by nutrient deficiency (i.e., anorexia, anemia, zinc deficiency) and hormonal fluctuations (i.e., menopause, polycystic ovaries, thyroid disease).
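The average growth rate above lends itself to a quick back-of-the-envelope calculation. The sketch below is only an illustration of that arithmetic, assuming the article's figure of about 1.25 cm per month and ignoring the resting and shedding phases of the growth cycle; the function name is hypothetical.

```python
# Rough estimate of how long scalp hair takes to grow a given length,
# using the average rate of about 1.25 cm per month cited above.
AVERAGE_GROWTH_CM_PER_MONTH = 1.25

def months_to_grow(length_cm, rate=AVERAGE_GROWTH_CM_PER_MONTH):
    """Approximate months needed to grow `length_cm` of hair,
    ignoring the resting and shedding phases of the growth cycle."""
    return length_cm / rate

# Growing 15 cm of new length, for example, takes roughly a year:
print(months_to_grow(15))  # 12.0
```

Actual rates vary with genetics, age, hormones, and nutrition, as noted above, so any such estimate is indicative only.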
The essential omega-3 fatty acids, protein, vitamin B12, and iron, found in fish sources, prevent a dry scalp and dull hair color. Dark green vegetables contain high amounts of vitamins A and C, which help with production of sebum and provide a natural hair conditioner. Legumes provide protein to promote hair growth and also contain iron, zinc, and biotin. Biotin functions to activate certain enzymes that aid in metabolism of carbon dioxide as well as protein, fats, and carbohydrates. A deficiency in biotin intake can cause brittle hair and can lead to hair loss. In order to avoid a deficiency, individuals can find sources of biotin in cereal-grain products, liver, egg yolk, soy flour, and yeast. Nuts contain high amounts of selenium and therefore are important for a healthy scalp. Alpha-linolenic acid and zinc are also found in some nuts and help condition the hair and prevent hair shedding that can be caused by a lack of zinc. Protein deficiencies or low-quality protein can produce weak and brittle hair, and can eventually result in loss of hair color. Dairy products are good sources of calcium, a key component for hair growth. A balanced diet is essential for a healthy scalp and, in turn, healthy hair.
Masiakasaurus
Masiakasaurus is a genus of small predatory noasaurid theropod dinosaurs from the Late Cretaceous of Madagascar. In Malagasy, masiaka means "vicious"; thus, the genus name means "vicious lizard". The type species, Masiakasaurus knopfleri, was named after the musician Mark Knopfler, whose music inspired the expedition crew. It was named in 2001 by Scott D. Sampson, Matthew Carrano, and Catherine A. Forster. Unlike most theropods, the front teeth of M. knopfleri projected forward instead of straight down. This unique dentition suggests that they had a specialized diet, perhaps including fish and other small prey. Other bones of the skeleton indicate that Masiakasaurus were bipedal, with much shorter forelimbs than hindlimbs. M. knopfleri was a small theropod.
Masiakasaurus lived from 72.1 to 66 million years ago, along with animals such as Majungasaurus, Rapetosaurus, and Rahonavis. Masiakasaurus was a member of the group Noasauridae, small predatory ceratosaurs found primarily in South America.
History
Remains of Masiakasaurus have been found in the Late Cretaceous Maevarano Formation in northwestern Madagascar and were first described in the journal Nature in 2001. Fragmentary bones comprising around 40% of the skeleton were collected near the village of Berivotra. Several parts of the skull, including the distinctive teeth, were found. The humerus (upper arm bone), pubis, hindlimbs, and several vertebrae were also collected.
In 2011, additional specimens of Masiakasaurus were described. The braincase, premaxilla, facial bones, ribcage, portions of the hands and pectoral girdle (coracoid), and much of the cervical and dorsal vertebral column were described for the first time. The discovery of this new material clarified many aspects of noasaurid anatomy and made the genus one of the best-known dinosaurs. The new finds did not, however, allow for a detailed study of its evolutionary relationships among ceratosaurs. With the new material, around 65% of the skeleton is currently known.
Description
Skull
The most distinctive characteristic of Masiakasaurus is the forward-projecting, or procumbent, front teeth. The teeth are heterodont, meaning that they have different shapes along the jaw. The first four dentary teeth of the lower jaw project forward, with the first tooth angled only 10° above the horizontal. These teeth are long and spoon-shaped with hooked edges. They have carinae, or sharp edges, that are weakly serrated. Serrations are more evident along the rear edge of the posterior teeth in the back of the jaw, which are also recurved and laterally compressed (flattened from the side), resembling the less unusual teeth of other carnivorous dinosaurs. The margin of the dentary curves downward so that the alveoli (tooth sockets) of the front teeth are directed forward. In fact, the alveolus of the first tooth is actually situated lower than the bottom edge of the rest of the lower jaw. The lower part of the rear edge of the dentary has a long prong, known as a ventral process. This differs from the situation in abelisaurids, which have a much shorter ventral process. On the other hand, the upper part of the rear edge of the dentary is very similar to that of abelisaurids such as Majungasaurus and Carnotaurus. This part of the bone possesses an array of four small structures, three of which line a socket which connects to the surangular bone at the back of the lower jaw. Although the surangular bone is not preserved, several other bones of the lower jaw are, including a triangular angular bone, a gently curving prearticular bone, and a damaged yet notably concave articular bone. The angular and prearticular formed the lower edge of a large, rounded opening in the lower jaw (known as a mandibular fenestra), while the articular bone formed the lower part of the jaw joint. A long and tapering hyoid (tongue bone) has also been preserved.
The front teeth of the upper jaw are also procumbent, and the margin of the premaxilla curves slightly upward to direct them outward. Unlike the skulls of abelisaurids, which are very deep, the skull of Masiakasaurus is long and low. The lacrimal and postorbital bones around the eye are textured with bumpy projections. Not including the highly modified jaws and teeth, the skull of Masiakasaurus possesses many general ceratosaurian characteristics. Overall, its morphology is intermediate between abelisaurids and more basal ceratosaurs.
Vertebrae
The neck is relatively narrow in comparison to that of abelisaurids and bears stout neck ribs. While many theropods have S-shaped necks, the ribs would have made the neck rather stiff in Masiakasaurus, and the back of the neck is positioned almost horizontally, giving it only a slight curve. Like those of other abelisauroids, the vertebrae are heavily pneumatized, or hollowed, and have relatively short neural spines. Pneumaticity is limited to the neck and foremost back vertebrae, however. Pneumatic cavities are also present in the braincase.
Forelimbs
As in other ceratosaurs, the shoulder blade (scapula) and shoulder girdle fuse into a single bone, the scapulocoracoid. This bone is very large and broad, even compared to the condition in other ceratosaurs. The scapula portion (above the glenoid, or arm socket) tapers towards the back while the coracoid portion (below the glenoid) is expanded into a curved blade-like structure. While abelisaurids have arms that are extremely reduced in size, Masiakasaurus and other noasaurids had longer forelimbs. The humerus is slender and known bones of the hand are relatively short. The related genus Noasaurus has a large and curved raptorial ungual (claw) which was originally interpreted as a sickle-like foot claw as in dromaeosaurids such as Velociraptor. More recently, this has been re-evaluated as a claw of the hand. The penultimate phalanx, the finger bone that immediately precedes the raptorial ungual in Noasaurus, is also known in Masiakasaurus and has a similar appearance. The enlarged ungual, however, is unknown in Masiakasaurus. It is assumed that members of this genus had four fingers, with the middle two fingers being the longest as in other ceratosaurians.
Classification
In its initial 2001 description, Masiakasaurus was classified as a basal abelisauroid related to Laevisuchus and Noasaurus, two poorly known genera named in 1933 and 1980, respectively. In the following year, Carrano et al. (2002) placed Masiakasaurus along with Laevisuchus and Noasaurus in the family Noasauridae. They conducted a phylogenetic analysis of abelisauroids using characteristics from Masiakasaurus. Below is a cladogram from an updated version of their analysis showing the phylogenetic placement of Masiakasaurus.
Paleobiology
Carrano et al. (2002) distinguished two forms of Masiakasaurus, a robust form and a gracile form. The robust morph includes specimens with thicker bones and more pronounced projections for the attachment of ligaments and muscles. The gracile form includes specimens that are more slender and have less pronounced muscle attachments. It also has unfused tibiae, unlike the fused tibiae of the robust form. These two varieties may be an indication of sexual dimorphism in Masiakasaurus, but they may also represent two distinct populations.
One specimen of Masiakasaurus, a right scapulocoracoid, bears holes that may be puncture marks from predation or scavenging. Majungasaurus, a large abelisaurid from the Maevarano Formation, may have preyed upon Masiakasaurus. The holes may also have been the result of an infection.
Diet
The procumbent front teeth of Masiakasaurus were likely an adaptation for grasping small prey. They would have been unsuitable for tearing larger food apart. In the front of the jaws, carinae are restricted to the base of the teeth and would not have been used to tear prey. The back teeth, however, share the same general characteristics as those of most other theropods, suggesting that they served a similar function in Masiakasaurus, such as cutting and slicing.
Several feeding behaviors have been proposed for Masiakasaurus on the basis of its unusual dentition. Because the front teeth would have been well suited for grasping, Masiakasaurus may have consumed small vertebrates, invertebrates, and possibly even fruits.
Growth
In 2013, Lee and O'Connor observed that Masiakasaurus would be a good subject for an analysis of theropod growth, considering that there is an abundance of fossil material to examine from a broad range of ontogenetic stages. The study showed that Masiakasaurus grew determinately, and reached full maturity at a small body size. Competing theories that Masiakasaurus specimens represent the juvenile form of a larger-bodied theropod were not supported by the data. Masiakasaurus took 8 to 10 years to grow to the size of a large dog. This indicates a rate of growth that is 40% slower than that of comparably sized non-avian theropods, a finding that is supported by the unusual prominence of parallel-fibered bone, which is known to be associated with relatively slow growth. However, individuals in this genus grew 40% faster than crocodylians. Lee and O'Connor noted that the evolution of slow growth gave this dinosaur the advantage of minimizing the nutritional investment allocated toward structural growth while living in a semiarid and seasonally stressful environment.
Rapid transit
Rapid transit or mass rapid transit (MRT), also called heavy rail and commonly referred to as metro, is a type of high-capacity public transport generally built in urban areas. A grade-separated rapid transit line below the ground surface through a tunnel can be regionally called a subway, tube, metro or underground. Lines are sometimes grade-separated on elevated railways, in which case some are referred to as el trains (short for "elevated") or skytrains. Rapid transit systems are usually electric railways that, unlike buses or trams, operate on an exclusive right-of-way which cannot be accessed by pedestrians or other vehicles.
Modern services on rapid transit systems are provided on designated lines between stations typically using electric multiple units on railway tracks. Some systems use guided rubber tires, magnetic levitation (maglev), or monorail. The stations typically have high platforms, without steps inside the trains, requiring custom-made trains in order to minimize gaps between train and platform. They are typically integrated with other public transport and often operated by the same public transport authorities. Some rapid transit systems have at-grade intersections between a rapid transit line and a road or between two rapid transit lines.
The world's first rapid transit system was the partially underground Metropolitan Railway which opened in 1863 using steam locomotives, and now forms part of the London Underground. In 1868, New York opened the elevated West Side and Yonkers Patent Railway, initially a cable-hauled line using stationary steam engines.
China has the largest number of rapid transit systems in the world (40 in number) and was responsible for most of the world's rapid-transit expansion in the 2010s. The world's longest single-operator rapid transit system by route length is the Shanghai Metro. The world's largest single rapid transit service provider by number of stations (472 stations in total) is the New York City Subway. The busiest rapid transit systems in the world by annual ridership are the Shanghai Metro, the Tokyo subway system, the Seoul Metro and the Moscow Metro.
Terminology
The term metro is the most commonly used term for underground rapid transit systems among non-native English speakers. Rapid transit systems may be named after the medium by which passengers travel in busy central business districts; the use of tunnels inspires names such as subway, underground, U-Bahn (short for Untergrundbahn) in German, or T-bana (short for tunnelbana) in Swedish. The use of viaducts inspires names such as elevated (L or el), skytrain, overhead, overground, or Hochbahn in German. One of these terms may apply to an entire system, even if a large part of the network, for example in outer suburbs, runs at ground level.
Europe
British Isles
In most of Britain, a subway is a pedestrian underpass. The terms Underground and Tube are used for the London Underground. The Tyne and Wear Metro in North East England, mostly overground, is known as the Metro. In Scotland, the Glasgow Subway underground rapid transit system is known as the Subway. In Ireland, the Dublin Area Rapid Transit is, despite its name, considered commuter rail because it runs on mainline railways.
Mainland
In France, large cities such as Paris, Marseille and Lyon use the term métro. The smaller cities of Lille and Rennes also have light metros. Furthermore, Brussels in Belgium, and Amsterdam and Rotterdam in the Netherlands, use métro or metro for their systems.
Several Southern European countries also use the term metro (Iberian Peninsula) or metropolitana (Italy) for rapid transit. In Spain, such systems are present in Madrid, Barcelona, Bilbao and Valencia. In Portugal, Lisbon has a metro. The Italian cities of Catania, Genoa, Milan, Naples, Rome and Turin also have rapid transit systems.
In Germany and Austria, rapid transit is known as the U-Bahn, often supported by S-Bahn systems. In Germany, U-Bahn systems exist in Berlin, Hamburg, Munich and Nuremberg, while in Austria such a system exists in Vienna. In addition, the small, car-free town of Serfaus in the Austrian state of Tyrol features a short U-Bahn line. There are no U-Bahn systems in the German-speaking part of Switzerland, but the city of Lausanne has its own small métro system. In Zurich, Switzerland's largest city, a project for a U-Bahn network was stopped by a referendum in the 1970s, and its S-Bahn system was developed further instead. Other Central European countries also have metro lines, for example in the cities of Budapest (Hungary), where it is called the metró, Prague (Czech Republic) and Warsaw (Poland); the latter two systems also use the term metro.
In Eastern Europe, metro systems are in operation in Minsk (Belarus), Kyiv (Ukraine) and Moscow (Russia). In Southeastern European countries, the term metro is common for rapid transit systems, which exist in Athens and Thessaloniki (Greece), Belgrade (Serbia), Sofia (Bulgaria), Istanbul (Turkey) and Baku (Azerbaijan).
In Northern Europe, rapid transit systems are called metro in Copenhagen (Denmark) and Helsinki (Finland), while they are referred to as the T-bane in Oslo (Norway) and the tunnelbana in Stockholm (Sweden).
North America
Various terms are used for rapid transit systems around North America. The term metro is a shortened reference to a metropolitan area. Rapid transit systems such as the Washington Metrorail, Los Angeles Metro Rail, the Miami Metrorail, and the Montreal Metro are generally called the Metro. In Philadelphia, the term "El" is used for the Market–Frankford Line which runs mostly on an elevated track, while the term "subway" applies to the Broad Street Line which is almost entirely underground. Chicago's commuter rail system that serves the entire metropolitan area is called Metra (short for Metropolitan Rail), while its rapid transit system that serves the city is called the "L". Boston's subway system is known locally as "The T". In Atlanta, the Metropolitan Atlanta Rapid Transit Authority goes by the acronym "MARTA." In the San Francisco Bay Area, residents refer to Bay Area Rapid Transit by its acronym "BART".
The New York City Subway is referred to simply as "the subway", despite 40% of the system running above ground. The term "L" or "El" is not used for elevated lines in general as the lines in the system are already designated with letters and numbers. The "L" train or L (New York City Subway service) refers specifically to the 14th Street–Canarsie Local line, and not other elevated trains. Similarly, the Toronto Subway is referred to as "the subway", with some of its system also running above ground. These are the only two North American systems that are primarily called "subways".
Asia
In most of Southeast Asia and in Taiwan, rapid transit systems are primarily known by the acronym MRT. The meaning varies from one country to another. In Indonesia, the acronym stands for Moda Raya Terpadu, or Integrated Mass [Transit] Mode in English. In the Philippines, it stands for Metro Rail Transit; two underground lines there use the term subway. In Thailand, it stands for Metropolitan Rapid Transit, previously using the Mass Rapid Transit name. Outside of Southeast Asia, Kaohsiung and Taoyuan in Taiwan have their own MRT systems, where the acronym stands for Mass Rapid Transit, as in Singapore and Malaysia.
Broader definition
In general, rapid transit is a synonym for "metro"-type transit, though rapid transit is sometimes defined to include "metro", commuter trains and grade-separated light rail. High-capacity bus-based transit systems can also have features similar to "metro" systems.
History
The opening of London's steam-hauled Metropolitan Railway in 1863 marked the beginning of rapid transit. Initial experiences with steam engines, despite ventilation, were unpleasant. Experiments with pneumatic railways failed to win extended adoption by cities.
In 1890, the City & South London Railway was the first electric-traction rapid transit railway, which was also fully underground. Prior to opening, the line was to be called the "City and South London Subway", thus introducing the term Subway into railway terminology. Both railways, alongside others, were eventually merged into London Underground. The 1893 Liverpool Overhead Railway was designed to use electric traction from the outset.
The technology quickly spread to other cities in Europe, the United States, Argentina, and Canada, with some railways being converted from steam and others being designed to be electric from the outset. Budapest, Chicago, Glasgow, Boston and New York City all converted or purpose-designed and built electric rail services.
Advancements in technology have allowed new automated services. Hybrid solutions have also evolved, such as tram-train and premetro, which incorporate some of the features of rapid transit systems. In response to cost, engineering considerations and topological challenges, some cities have opted to construct tram systems, particularly those in Australia, where density in cities was low and suburbs tended to spread out. Since the 1970s, the viability of underground train systems in Australian cities, particularly Sydney and Melbourne, has been reconsidered and proposed as a solution to over-capacity. Melbourne had tunnels and stations developed in the 1970s, which opened in 1980. The first line of the Sydney Metro was opened in 2019.
Since the 1960s, many new systems have been introduced in Europe, Asia and Latin America. In the 21st century, most new expansions and systems are located in Asia, with China becoming the world's leader in metro expansion, operating some of the largest and busiest systems while possessing almost 60 cities that are operating, constructing or planning a rapid transit system.
Operation
Rapid transit is used for local transport in cities, agglomerations, and metropolitan areas to transport large numbers of people, often over short distances, at high frequency. The extent of the rapid transit system varies greatly between cities, with several transport strategies.
Some systems may extend only to the limits of the inner city, or to its inner ring of suburbs with trains making frequent station stops. The outer suburbs may then be reached by a separate commuter rail network where more widely spaced stations allow higher speeds. In some cases the differences between urban rapid transit and suburban systems are not clear.
Rapid transit systems may be supplemented by other systems such as trolleybuses, regular buses, trams, or commuter rail. This combination of transit modes serves to offset certain limitations of rapid transit such as limited stops and long walking distances between outside access points. Bus or tram feeder systems transport people to rapid transit stops.
Lines
Each rapid transit system consists of one or more lines, or circuits. Each line is serviced by at least one specific route with trains stopping at all or some of the line's stations. Most systems operate several routes, and distinguish them by colors, names, numbering, or a combination thereof. Some lines may share track with each other for a portion of their route or operate solely on their own right-of-way. Often a line running through the city center forks into two or more branches in the suburbs, allowing a higher service frequency in the center. This arrangement is used by many systems, such as the Copenhagen Metro, the Milan Metro, the Oslo Metro, the Istanbul Metro and the New York City Subway.
Alternatively, there may be a single central terminal (often shared with the central railway station), or multiple interchange stations between lines in the city center, for instance in the Prague Metro. The London Underground and Paris Métro are densely built systems with a matrix of crisscrossing lines throughout the cities. The Chicago 'L' has most of its lines converging on The Loop, the main business, financial, and cultural area. Some systems have a circular line around the city center connecting to radially arranged outward lines, such as the Moscow Metro's Koltsevaya Line and Beijing Subway's Line 10.
The capacity of a line is obtained by multiplying the car capacity, the train length, and the service frequency. Heavy rapid transit trains might have six to twelve cars, while lighter systems may use four or fewer. Cars have a capacity of 100 to 150 passengers, varying with the seated-to-standing ratio (more standing gives higher capacity). The minimum time interval between trains is shorter for rapid transit than for mainline railways owing to the use of communications-based train control: the minimum headway can reach 90 seconds, but many systems typically use 120 seconds to allow for recovery from delays. Typical lines allow 1,200 people per train, giving 36,000 passengers per hour per direction. However, much higher capacities are attained in East Asia, with 75,000 to 85,000 people per hour achieved by MTR Corporation's urban lines in Hong Kong.
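The capacity arithmetic described above can be sketched as a small calculation. This is only an illustration of the multiplication the text describes, using the text's own example figures; the function name is hypothetical.

```python
def line_capacity(cars_per_train, passengers_per_car, headway_seconds):
    """Passengers per hour per direction, obtained by multiplying
    car capacity, train length (in cars), and service frequency."""
    trains_per_hour = 3600 / headway_seconds  # service frequency
    return cars_per_train * passengers_per_car * trains_per_hour

# An 8-car train of 150-passenger cars (1,200 people per train)
# on a 120-second headway gives 36,000 passengers per hour per direction:
print(line_capacity(8, 150, 120))  # 36000.0
```

Shortening the headway toward the 90-second minimum, or lengthening trains, raises capacity proportionally, which is why the Hong Kong figures cited above are attainable.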
Network topologies
Rapid transit topologies are determined by a large number of factors, including geographical barriers, existing or expected travel patterns, construction costs, politics, and historical constraints. A transit system is expected to serve an area of land with a set of lines, which consist of shapes summarized as "I", "L", "U", "S", and "O" shapes or loops. Geographical barriers may cause chokepoints where transit lines must converge (for example, to cross a body of water), which are potential congestion sites but also offer an opportunity for transfers between lines.
Ring lines provide good coverage, connect the radial lines and serve tangential trips that would otherwise need to cross the typically congested core of the network. A rough grid pattern can offer a wide variety of routes while still maintaining reasonable speed and frequency of service. A study of the 15 largest subway systems in the world suggested a universal shape composed of a dense core with branches radiating from it.
Passenger information
Rapid transit operators have often built up strong brands, typically focused on easy recognition (to allow quick identification even amid the vast array of signage found in large cities) combined with a desire to communicate speed, safety, and authority. In many cities there is a single corporate image for the entire transit authority, but the rapid transit system uses its own logo that fits into the profile.
A transit map is a topological map or schematic diagram used to show the routes and stations in a public transport system. The main components are color-coded lines to indicate each line or service, with named icons to indicate stations. Maps may show only rapid transit or also include other modes of public transport. Transit maps can be found in transit vehicles, on platforms, elsewhere in stations, and in printed timetables. Maps help users understand the interconnections between different parts of the system; for example, they show the interchange stations where passengers can transfer between lines. Unlike conventional maps, transit maps are usually not geographically accurate, but emphasize the topological connections among the different stations. The graphic presentation may use straight lines and fixed angles, and often a fixed minimum distance between stations, to simplify the display of the transit network. Often this has the effect of compressing the distance between stations in the outer area of the system, and expanding distances between those close to the center.
Some systems assign unique alphanumeric codes to each of their stations to help commuters identify them, which briefly encodes information about the line it is on, and its position on the line. For example, on the Singapore MRT, Changi Airport MRT station has the alphanumeric code CG2, indicating its position as the 2nd station on the Changi Airport branch of the East West Line. Interchange stations have at least two codes, for example, Raffles Place MRT station has two codes, NS26 and EW14, the 26th station on the North South Line and the 14th station on the East West Line.
The Seoul Metro is another example that uses codes for its stations. Unlike Singapore's MRT, these are mostly numeric and based on the line number: for example, Sinyongsan station is coded as station 429; being on Line 4, the first digit of the code is 4, and the last two digits give the station's position on that line. Interchange stations can have multiple codes: City Hall station in Seoul, served by Line 1 and Line 2, has the codes 132 and 201 respectively. Line 2 is a circle line whose first stop is City Hall, hence the code 201. Lines without a number, such as the Bundang Line, use an alphanumeric code; codes for unnumbered lines operated by KORAIL start with the letter 'K'.
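The Singapore-style codes described above have a simple structure (a letter prefix for the line or branch, then the station's position along it), which a short sketch can decompose. This is only an illustration of that structure, not any operator's actual software; the function name is hypothetical.

```python
import re

def parse_station_code(code):
    """Split a Singapore-style station code such as 'NS26' into its
    line prefix and the station's number along that line."""
    m = re.fullmatch(r"([A-Z]+)(\d+)", code)
    if not m:
        raise ValueError(f"unrecognised station code: {code}")
    return m.group(1), int(m.group(2))

# Changi Airport is the 2nd station on the CG branch;
# Raffles Place carries one code per line it serves.
print(parse_station_code("CG2"))   # ('CG', 2)
print(parse_station_code("EW14"))  # ('EW', 14)
```

Seoul's purely numeric codes would need a different split (first digit for the line, remaining digits for the position), which is why interchange stations there simply carry one code per line.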
With widespread use of the Internet and cell phones globally, transit operators now use these technologies to present information to their users. In addition to online maps and timetables, some transit operators now offer real-time information which allows passengers to know when the next vehicle will arrive, and expected travel times. The standardized GTFS data format for transit information allows many third-party software developers to produce web and smartphone app programs which give passengers customized updates regarding specific transit lines and stations of interest.
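As a sketch of how GTFS data is consumed: a feed's stops.txt is plain CSV with standard fields such as stop_id, stop_name, stop_lat and stop_lon, so a few lines of code can index a system's stations (the sample rows below are invented for illustration):

```python
import csv
import io

# Invented sample rows; a real GTFS feed ships stops.txt inside a zip archive.
sample = io.StringIO(
    "stop_id,stop_name,stop_lat,stop_lon\n"
    "S1,Central,1.2840,103.8515\n"
    "S2,Riverside,1.2901,103.8460\n"
)

stops = {
    row["stop_id"]: (row["stop_name"], float(row["stop_lat"]), float(row["stop_lon"]))
    for row in csv.DictReader(sample)
}
print(stops["S1"])  # ('Central', 1.284, 103.8515)
```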
Mexico City Metro uses a unique pictogram for each station. Originally intended to help make the network map "readable" by illiterate people, this system has since become an "icon" of the system.
Safety and security
Compared to other modes of transport, rapid transit has a good safety record, with few accidents. Rail transport is subject to strict safety regulations, with requirements for procedure and maintenance to minimize risk. Head-on collisions are rare due to use of double track, and low operating speeds reduce the occurrence and severity of rear-end collisions and derailments. Fire is more of a danger underground, such as the King's Cross fire in London in November 1987, which killed 31 people. Systems are generally built to allow evacuation of trains at many places throughout the system.
High platforms, usually over 1 meter / 3 feet, are a safety risk, as people falling onto the tracks have trouble climbing back. Platform screen doors are used on some systems to eliminate this danger.
Rapid transit facilities are public spaces and may suffer from security problems: petty crimes, such as pickpocketing and baggage theft, and more serious violent crimes, as well as sexual assaults on tightly packed trains and platforms. Security measures include video surveillance, security guards, and conductors. In some countries a specialized transit police may be established. These security measures are normally integrated with measures to protect revenue by checking that passengers are not travelling without paying.
Some subway systems, such as the Beijing Subway, which was ranked by Worldwide Rapid Transit Data as the "World's Safest Rapid Transit Network" in 2015, incorporate airport-style security checkpoints at every station. Rapid transit systems have been targets of terrorism with many casualties, such as the 1995 Tokyo subway sarin gas attack and the 2005 "7/7" terrorist bombings on the London Underground.
Added features
Some rapid transit trains have extra features such as wall sockets, cellular reception (typically using a leaky feeder in tunnels and DAS antennas in stations) and Wi-Fi connectivity. The first metro system in the world to enable full mobile phone reception in underground stations and tunnels was Singapore's Mass Rapid Transit (MRT) system, which launched its first underground mobile phone network using AMPS in 1989. Many metro systems, such as the Hong Kong Mass Transit Railway (MTR) and the Berlin U-Bahn, provide mobile data connections in their tunnels for various network operators.
Infrastructure
The technology used for public, mass rapid transit has undergone significant changes in the years since the Metropolitan Railway opened publicly in London in 1863.
High capacity monorails with larger and longer trains can be classified as rapid transit systems. Such monorail systems recently started operating in Chongqing and São Paulo. Light metro is a subclass of rapid transit that has the speed and grade separation of a "full metro" but is designed for smaller passenger numbers. It often has smaller loading gauges, lighter train cars and smaller consists of typically two to four cars. Light metros are typically used as feeder lines into the main rapid transit system. For instance, the Wenhu Line of the Taipei Metro serves many relatively sparse neighbourhoods and feeds into and complements the high capacity metro lines.
Some systems have been built from scratch, others are reclaimed from former commuter rail or suburban tramway systems that have been upgraded, and often supplemented with an underground or elevated downtown section. Ground-level alignments with a dedicated right-of-way are typically used only outside dense areas, since they create a physical barrier in the urban fabric that hinders the flow of people and vehicles across their path and have a larger physical footprint. This method of construction is the cheapest as long as land values are low. It is often used for new systems in areas that are planned to fill up with buildings after the line is built.
Trains
Most rapid transit trains are electric multiple units with lengths from three to over ten cars. Crew sizes have decreased throughout history, with some modern systems now running completely unstaffed trains. Other trains continue to have drivers, even if their only role in normal operation is to open and close the doors at stations. Power is commonly delivered by a third rail or by overhead wires; the London Underground network uses a fourth-rail system, and some systems use linear motors for propulsion.
Some urban rail lines are built to a loading gauge as large as that of main-line railways; others are built to a smaller one and have tunnels that restrict the size and sometimes the shape of the train compartments. One example is most of the London Underground, which has acquired the informal term "tube train" due to the cylindrical shape of the trains used on the deep tube lines.
Historically, rapid transit trains used ceiling fans and openable windows to provide fresh air and piston-effect wind cooling to riders. From the 1950s to the 1990s (and in much of Europe until the 2000s), many rapid transit trains were also fitted with forced-air ventilation units in carriage ceilings for passenger comfort. Early air-conditioned rolling stock, such as the Hudson and Manhattan Railroad K-series cars from 1958, the New York City Subway R38 and R42 cars from the late 1960s, and the Nagoya Municipal Subway 3000 series, Osaka Municipal Subway 10 series and MTR M-Train EMUs from the 1970s, was generally made possible by the relatively generous loading gauges of these systems and by adequate open-air sections to dissipate hot air from the air-conditioning units. In some systems, such as the Montreal Metro (opened 1966) and the Sapporo Municipal Subway (opened 1971), the network's entirely enclosed nature, a consequence of the rubber-tyred technology adopted to cope with both cities' heavy winter snowfall, precludes air-conditioning retrofits of rolling stock: the waste heat would raise tunnel temperatures too high for passengers and for train operations.
In many cities, metro networks consist of lines operating different sizes and types of vehicles. Although these sub-networks may not often be connected by track, in cases when it is necessary, rolling stock with a smaller loading gauge from one sub network may be transported along other lines that use larger trains. On some networks such operations are part of normal services.
Tracks
Most rapid transit systems use conventional standard gauge railway track. Since tracks in subway tunnels are not exposed to rain, snow, or other forms of precipitation, they are often fixed directly to the floor rather than resting on ballast, as normal railway tracks do.
An alternate technology, using rubber tires on narrow concrete or steel roll ways, was pioneered on certain lines of the Paris Métro and Mexico City Metro, and the first completely new system to use it was in Montreal, Canada. On most of these networks, additional horizontal wheels are required for guidance, and a conventional track is often provided in case of flat tires and for switching. There are also some rubber-tired systems that use a central guide rail, such as the Sapporo Municipal Subway and the NeoVal system in Rennes, France. Advocates of this system note that it is much quieter than conventional steel-wheeled trains, and allows for greater inclines given the increased traction of the rubber tires. However, they have higher maintenance costs and are less energy efficient. They also lose traction when weather conditions are wet or icy, preventing above-ground use of the Montréal Metro and limiting it on the Sapporo Municipal Subway, but not rubber-tired systems in other cities.
Some cities with steep hills incorporate mountain railway technologies in their metros. One of the lines of the Lyon Metro includes a section of rack (cog) railway, while the Carmelit, in Haifa, is an underground funicular.
For elevated lines, another alternative is the monorail, which can be built either as straddle-beam monorails or as a suspended monorail. While monorails have never gained wide acceptance outside Japan, there are some such as Chongqing Rail Transit's monorail lines which are widely used in a rapid transit setting.
Motive power
Although trains on very early rapid transit systems like the Metropolitan Railway were powered using steam engines, either via cable haulage or steam locomotives, nowadays virtually all metro trains use electric power and are built to run as multiple units. Power for the trains, referred to as traction power, is usually supplied via one of two forms: an overhead line, suspended from poles or towers along the track or from structure or tunnel ceilings, or a third rail mounted at track level and contacted by a sliding "pickup shoe". The practice of sending power through rails on the ground is mainly due to the limited overhead clearance of tunnels, which physically prevents the use of overhead wires.
The use of overhead wires allows higher power supply voltages to be used. Overhead wires are more likely to be used on metro systems without many tunnels, for example the Shanghai Metro, but they are also employed on some systems that are predominantly underground, as in Barcelona, Fukuoka, Hong Kong, Madrid, and Shijiazhuang. Both overhead wire and third-rail systems usually use the running rails as the return conductor, though some systems use a separate fourth rail for this purpose. A few transit lines make use of both third-rail and overhead power, with vehicles able to switch between the two, such as Boston's Blue Line.
Most rapid transit systems use direct current, but some systems in India, including the Delhi Metro, use 25 kV 50 Hz alternating current supplied by overhead wires.
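The advantage of higher supply voltages follows from the basic relation P = V × I: for the same traction power, a 25 kV overhead supply draws far less current than a low-voltage third rail. The figures below are illustrative, not taken from any particular system:

```python
# Same traction power at two supply voltages: current scales inversely
# with voltage, so higher-voltage overhead supplies need thinner
# conductors and suffer lower resistive (I^2 * R) losses.
power_w = 3_000_000  # a 3 MW train, illustrative figure

for volts in (750, 25_000):
    amps = power_w / volts
    print(f"{volts} V supply -> {amps:.0f} A")
# 750 V supply -> 4000 A
# 25000 V supply -> 120 A
```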
Tunnels
At subterranean levels, tunnels move traffic away from street level, avoiding delays caused by traffic congestion and leaving more land available for buildings and other uses. In areas of high land prices and dense land use, tunnels may be the only economic route for mass transportation. Cut-and-cover tunnels are constructed by digging up city streets, which are then rebuilt over the tunnel. Alternatively, tunnel-boring machines can be used to dig deep-bore tunnels that lie further down in bedrock.
The construction of an underground metro is an expensive project and is often carried out over a number of years. There are several different methods of building underground lines.
In one common method, known as cut-and-cover, the city streets are excavated and a tunnel structure strong enough to support the road above is built in the trench, which is then filled in and the roadway rebuilt. This method often involves extensive relocation of utilities commonly buried not far below street level, particularly power and telephone wiring, water and gas mains, and sewers. This relocation must be done carefully: according to documentaries from the National Geographic Society, one of the causes of the April 1992 explosions in Guadalajara was a mislocated water pipeline. The structures are typically made of concrete, perhaps with structural columns of steel; in the oldest systems, brick and cast iron were used. Cut-and-cover construction can take so long that it is often necessary to build a temporary roadbed while construction is going on underneath, in order to avoid closing main streets for long periods of time.
Another tunneling method is called bored tunneling. Here, construction starts with a vertical shaft from which tunnels are horizontally dug, often with a tunneling shield, thus avoiding almost any disturbance to existing streets, buildings, and utilities. But problems with ground water are more likely, and tunneling through native bedrock may require blasting. The first city to extensively use deep tunneling was London, where a thick sedimentary layer of clay largely avoids both problems. The confined space in the tunnel also limits the machinery that can be used, but specialized tunnel-boring machines are now available to overcome this challenge.
A disadvantage of bored tunneling is that it costs much more than cut-and-cover, at-grade, or elevated construction. Early tunneling machines could not make tunnels large enough for conventional railway equipment, necessitating special low, round trains, such as are still used by most of the London Underground, which cannot install air conditioning on most of its lines because the space between the trains and tunnel walls is so small. Other lines were built with cut-and-cover and have since been equipped with air-conditioned trains.
The deepest metro system in the world was built in St. Petersburg, Russia, where in the marshland stable soil begins only at great depth. Above that level, the soil mostly consists of water-bearing, finely dispersed sand. Because of this, only three stations out of nearly 60 are built near ground level, and three more above the ground. Some stations and tunnels lie far below the surface. Usually, the vertical distance between ground level and the rail is used to represent the depth. Among the possible candidates are:
Deepest stations:
Hongyancun station in Chongqing Metro, China (opened in 2022)
Arsenalna station in Kyiv Metro, Ukraine (opened 1960, built under a hill)
Admiralteyskaya (The Admiralty, opened 2011)
Sofia station in Stockholm Metro, Sweden (ca. 100 m (ca. 328 ft), opening in 2030)
Hongtudi station in Chongqing Metro, China (opened in 2016)
Liyuchi station in Chongqing Metro, China (opened in 2017)
Park Pobedy station in Moscow (opened 2005, built under a hill)
Puhung station in Pyongyang Metro, North Korea (which doubles as a nuclear shelter)
Washington Park MAX Light Rail station in Portland, Oregon, US (built under a hill), 260 feet (80 m)
An advantage of deep tunnels is that they can dip in a basin-like profile between stations, without incurring the significant extra costs associated with digging near ground level. This technique, also referred to as putting stations "on humps", allows gravity to assist the trains as they accelerate from one station and brake at the next. It was used as early as 1890 on parts of the City and South London Railway and has been used many times since, for example in Montreal or Nuremberg.
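The gravity assist from such a basin profile can be estimated from energy conservation, v = sqrt(2gh). This is an idealized sketch that ignores friction and air resistance, and the drop figure is illustrative:

```python
import math

g = 9.81       # gravitational acceleration, m/s^2
drop_m = 5.0   # illustrative dip below platform level between stations

# Speed gained "for free" by a train coasting down the dip:
# kinetic energy gained equals potential energy lost.
v = math.sqrt(2 * g * drop_m)
print(f"{v:.1f} m/s")  # about 9.9 m/s from the drop alone
```

The same drop also slows the train on the climb into the next station, which is why the technique saves both traction energy and brake wear.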
The West Island line, an extension of the MTR Island line serving western Hong Kong Island, opened in 2015, has two stations (Sai Ying Pun and HKU) situated deep below ground level to serve passengers on the Mid-Levels. They have several entrances/exits equipped with high-speed lifts instead of escalators. Such exits have long existed in many London Underground stations and in stations in former Soviet Union countries.
Elevated railways
Elevated railways are a cheaper and easier way to build an exclusive right-of-way without digging expensive tunnels or creating barriers. Along with street-level railways, they may be the only feasible alternative where, for example, a high water table close to the surface raises the cost of, or even precludes, underground railways (e.g. Miami). Elevated guideways were popular around the beginning of the 20th century, but fell out of favor. They came back into fashion in the last quarter of the century, often in combination with driverless systems, for instance Vancouver's SkyTrain, London's Docklands Light Railway, the Miami Metrorail, the Bangkok Skytrain, and Skyline in Honolulu.
Stations
Stations function as hubs to allow passengers to board and disembark from trains. They are also payment checkpoints and allow passengers to transfer between modes of transport, for instance to buses or other trains. Access is provided via either island- or side platforms. Underground stations, especially deep-level ones, increase the overall transport time: long escalator rides to the platforms mean that the stations can become bottlenecks if not adequately built. Some underground and elevated stations are integrated into vast underground or skyway networks respectively, that connect to nearby commercial buildings. In suburbs, there may be a "park and ride" connected to the station.
To allow easy access to the trains, the platform height allows step-free access between platform and train. If the station complies with accessibility standards, it allows both disabled people and those with wheeled baggage easy access to the trains, though if the track is curved there can be a gap between the train and platform. Some stations use platform screen doors to increase safety by preventing people falling onto the tracks, as well as reducing ventilation costs.
Particularly in the former Soviet Union and other Eastern European countries, but to an increasing extent elsewhere, the stations were built with splendid decorations such as marble walls, polished granite floors and mosaics—thus exposing the public to art in their everyday life, outside galleries and museums. Moscow Metro's wall cladding contains many fossils, from corals to ammonoids and nautiluses. The systems in Moscow, St. Petersburg, Tashkent and Kyiv are widely regarded as some of the most beautiful in the world. Several other cities such as London, Stockholm, Montreal, Lisbon, Naples and Los Angeles have also focused on art, which may range from decorative wall claddings, to large, flamboyant artistic schemes integrated with station architecture, to displays of ancient artifacts recovered during station construction. It may be possible to profit by attracting more passengers by spending relatively small amounts on grand architecture, art, cleanliness, accessibility, lighting and a feeling of safety.
Crew size and automation
In the early days of underground railways, at least two staff members were needed to operate each train: one or more attendants (also called "conductor" or "guard") to operate the doors or gates, as well as a driver (also called the "engineer" or "motorman"). The introduction of powered doors around 1920 permitted crew sizes to be reduced, and trains in many cities are now operated by a single person. Where the operator would not be able to see the whole side of the train to tell whether the doors can be safely closed, mirrors or closed-circuit TV monitors are often provided for that purpose.
A replacement system for human drivers became available in the 1960s, with the advancement of computerized technologies for automatic train control and, later, automatic train operation (ATO). ATO could start a train, accelerate to the correct speed, and stop automatically in the correct position at the railway platform at the next station, while taking into account the information that a human driver would obtain from lineside or cab signals. The first metro line to use this technology in its entirety was London's Victoria line, opened in 1968.
In normal operation, a crew member sits in the driver's position at the front but is only responsible for closing the doors at each station. After pressing two "start" buttons, the train moves automatically to the next station. This style of "semi-automatic train operation" (STO), known technically as "Grade of Automation (GoA) 2", has become widespread, especially on newly built lines like the San Francisco Bay Area's BART network.
A variant of ATO, "driverless train operation" (DTO) or technically "GoA 3", is seen on some systems, as in London's Docklands Light Railway, which opened in 1987. Here, a "passenger service agent" (formerly called "train captain") would ride with the passengers rather than sit at the front as a driver would, but would have the same responsibilities as a driver in a GoA 2 system. This technology could allow trains to operate completely automatically with no crew, just as most elevators do. When the initially increasing costs for automation began to decrease, this became a financially attractive option for the operators.
At the same time, countervailing arguments stated that in an emergency situation, a crew member on board the train would have possibly been able to prevent the emergency in the first place, drive a partially failed train to the next station, assist with an evacuation if needed, or call for the correct emergency services and help direct them to the location where the emergency occurred. In some cities, the same reasons are used to justify a crew of two rather than one; one person drives from the front of the train, while the other operates the doors from a position farther back, and is more conveniently able to assist passengers in the rear cars. An example of the presence of a driver purely due to union opposition is the Scarborough RT line in Toronto.
Completely unstaffed trains, or "unattended train operation" (UTO) or technically "GoA 4", are more accepted on newer systems where there are no existing crews to be displaced, and especially on light metro lines. One of the first such systems was the VAL (véhicule automatique léger or "automated light vehicle"), first used in 1983 on the Lille Metro in France. Additional VAL lines have been built in other cities such as Toulouse, France, and Turin, Italy. Another system that uses unstaffed trains is Bombardier's Innovia Metro, originally developed by the Urban Transportation Development Corporation as the Intermediate Capacity Transit System (ICTS). It was later used on the SkyTrain in Vancouver and the Kelana Jaya Line in Kuala Lumpur, both of which carry no crew members.
Another obstacle to conversion of existing lines to fully automated operation is that the conversion may necessitate a shutdown of operations. Furthermore, where several lines share the same infrastructure, it may be necessary to share tracks between automated and human-operated trains at least for a transitory period. The Nuremberg U-Bahn converted the existing U2 to fully automated (GoA4) in early 2010 without a single day of service disruption. Before that it had run in mixed operation with the newly opened fully driverless U3 from 2008. Nuremberg U-Bahn was the first system in the world to undertake such a transition with mixed operation and without service disruption. While this demonstrates that those technological hurdles can be overcome, the project was severely delayed, missing the target of being in operation in time for the 2006 FIFA World Cup and the hoped for international orders for the system of automation employed in Nuremberg never materialized.
Systems that use automatic trains also commonly employ full-height platform screen doors or half-height automatic platform gates in order to improve safety and ensure passenger confidence, but this is not universal, as networks like Nuremberg do not, using infrared sensors instead to detect obstacles on the track. Conversely, some lines which retain drivers or manual train operation nevertheless use PSDs, notably London's Jubilee Line Extension. The first network to install PSDs on an already operational system was Hong Kong's MTR, followed by the Singapore MRT.
As for larger trains, the Paris Métro has human drivers on most lines but runs automated trains on its newest line, Line 14, which opened in 1998. The older Line 1 was subsequently converted to unattended operation by 2012, and Line 4 in 2023. The North East MRT line in Singapore, which opened in 2003, is the world's first fully automated underground urban heavy-rail line. The MTR Disneyland Resort line is also automated, along with trains on the South Island line.
Modal tradeoffs and interconnections
Since the 1980s, trams have incorporated several features of rapid transit: light rail systems (trams) run on their own rights-of-way, thus avoiding congestion, while remaining at the same level as buses and cars. Some light rail systems have elevated or underground sections. Both new and upgraded tram systems allow faster speeds and higher capacity, and are a cheap alternative to construction of rapid transit, especially in smaller cities.
A premetro design means that an underground rapid transit system is built in the city center, but only a light rail or tram system in the suburbs. Conversely, other cities have opted to build a full metro in the suburbs, but run trams in city streets to save the cost of expensive tunnels. In North America, interurbans were constructed as street-running suburban trams, without the grade-separation of rapid transit. Premetros also allow a gradual upgrade of existing tramways to rapid transit, thus spreading the investment costs over time. They are most common in Germany with the name Stadtbahn.
Suburban commuter rail is a heavy rail system that operates at a lower frequency than urban rapid transit, with higher average speeds, often only serving one station in each village and town. Commuter rail systems of some cities (such as German S-Bahns, Jakarta's KRL Commuterline, Mumbai Suburban Railway, Australian suburban networks, Danish S-tog etc.) can be seen as the substitute for the city's rapid transit system providing frequent mass transit within city. In contrast, the mainly urban rapid transit systems in some cities (such as the Dubai Metro, Shanghai Metro, MetroSur of the Madrid Metro, Taipei Metro, Kuala Lumpur Rapid Transit etc.) have lines that fan out to reach the outer suburbs. With some other urban or "near urban" rapid transit systems (Guangfo Metro, Bay Area Rapid Transit, Los Teques Metro and Seoul Subway Line 7, etc.) serving bi- and multi-nucleus agglomerations.
Some cities have opted for two tiers of urban railways: an urban rapid transit system (such as the Paris Métro, Berlin U-Bahn, London Underground, Sydney Metro, Tokyo subway, Jakarta MRT and Philadelphia Subway) and a suburban system (such as their counterparts RER, S-Bahn, Crossrail & London Overground, Sydney Trains, JR Urban Lines, KRL Commuterline and Regional Rail respectively). Such systems are known variously as S-trains, suburban service, or (sometimes) regional rail. The suburban systems may have their own purpose built trackage, run at similar "rapid transit-like" frequencies, and (in many countries) are operated by the national railway company. In some cities these suburban services run through tunnels in the city center and have direct transfers to the rapid transit system, on the same or adjoining platforms.
In some cases, such as the London Underground and the London Overground, suburban and rapid transit systems even run on the exact same track along some sections. California's BART, the Federal District's Metrô-DF and Washington's Metrorail are examples of hybrids of the two: in the suburbs the lines function like commuter rail, with longer intervals and longer distances between stations; in the downtown areas the stations are closer together, many lines interline, and intervals drop to typical rapid transit headways.
Costs, benefits, and impacts
To date, 212 cities have built rapid transit systems. The capital cost is high, as is the risk of cost overrun and benefit shortfall; public financing is normally required. Rapid transit is sometimes seen as an alternative to an extensive road transport system with many motorways; the rapid transit system allows higher capacity with less land use, less environmental impact, and a lower cost. A 2023 study found that rapid transit systems lead to a large reduction in emissions.
Elevated or underground systems in city centers allow the transport of people without occupying expensive land, and permit the city to develop compactly without physical barriers. Motorways often depress nearby residential land values, but proximity to a rapid transit station often triggers commercial and residential growth, with large transit oriented development office and housing blocks being constructed. Also, an efficient transit system can decrease the economic welfare loss caused by the increase of population density in a metropolis.
Rapid transit systems have high fixed costs. Most systems are publicly owned, by either local governments, transit authorities or national governments. Capital investments are often partially or completely financed by taxation, rather than by passenger fares, but must often compete with funding for roads. The transit systems may be operated by the owner or by a private company through a public service obligation. The owners of the systems often also own the connecting bus or rail systems, or are members of the local transport association, allowing for free transfers between modes. Almost all transit systems operate at a deficit, requiring fare revenue, advertising and government funding to cover costs.
The farebox recovery ratio, the ratio of ticket income to operating costs, is often used to assess operational profitability; some systems, including Hong Kong's MTR Corporation and Taipei's, achieve recovery ratios well over 100%. This figure ignores both the heavy capital costs of building the system, which are often funded with soft loans and whose servicing is excluded from profitability calculations, and ancillary revenue such as income from real-estate portfolios. In some systems, particularly Hong Kong's, extensions are partly financed by the sale of land whose value has appreciated thanks to the new access the extension brings to the area, a process known as value capture.
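The farebox recovery ratio itself is a simple quotient; a sketch with invented figures, not real operator data:

```python
def farebox_recovery_ratio(fare_revenue, operating_costs):
    """Ratio of ticket income to operating costs. Values above 1.0
    mean fares alone cover day-to-day operations; capital costs and
    ancillary income (e.g. real estate) are excluded by convention."""
    return fare_revenue / operating_costs

# Invented figures for illustration:
ratio = farebox_recovery_ratio(fare_revenue=1.2e9, operating_costs=0.8e9)
print(f"{ratio:.0%}")  # 150%
```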
Urban land-use planning policies are essential for the success of rapid transit systems, particularly as mass transit is not feasible in low-density communities. Transportation planners estimate that to support rapid rail services, there must be a residential housing density of twelve dwelling units per acre.
White currant

The white currant or whitecurrant is a group of cultivars of the red currant (Ribes rubrum), a species of flowering plant in the family Grossulariaceae, native to Europe.
It is sometimes mislabelled as Ribes glandulosum, called the "skunk currant" in the United States.
Description
It is a deciduous shrub with palmate leaves, bearing masses of spherical, edible fruit (berries) in summer. The white currant differs from the red currant only in the colour and flavour of these fruits, which are a translucent white and sweeter.
Cultivation
Unlike their close relative the blackcurrant, red and white currants are cultivated for their ornamental value as well as their berries.
Currant bushes grow best in partial to full sunlight and can be planted between November and March in well-drained, slightly neutral to acid soil. They are considered cool-climate plants and fruit better in northern areas. They can also be grown in large containers.
The firm and juicy fruit are usually harvested in summer. Whole trusses of fruits should be cut instead of individual fruit, and then either used, or they can be stored in a fridge. They can also be bagged and frozen.
Various forms are known including 'Blanka', 'White Pearl', and 'Versailles Blanche' (syn ‘White Versailles’). 'Versailles Blanche' was first bred in France in 1843.
The cultivars 'White Grape' and 'Blanka' have gained the Royal Horticultural Society's Award of Garden Merit. There are also cultivars with yellow and pink fruit, called respectively 'yellow currants' and 'pink currants'.
The bushes can suffer from pests such as gooseberry sawfly and birds. The bushes are best grown in fruit cages for protection.
Culinary uses
White currant berries are slightly smaller and sweeter than red currants. When made into jams and jellies, they normally yield a pink preserve. The white currant is actually a less pigmented cultivar group of the red currant but is marketed as a different fruit.
White currants are rarely specified in savoury cooking recipes compared with their red counterparts. They are often served raw and provide a sweetly tart flavour. White currant preserves, jellies, wines and syrups are also produced. In particular, white currants are the classic ingredient in the highly regarded Bar-le-Duc or Lorraine jelly, although preparations made of red currants can also be found.
Nutrition
White currant berries are 84% water, 14% carbohydrates, 1% protein, and contain negligible fat (table). In a 100 gram (3.5 oz) reference amount, white currant berries supply 56 calories, and are a rich source (46% of the Daily Value, DV) of vitamin C, with no other micronutrients in appreciable amounts (table).
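The % Daily Value figure cited above can be reproduced with simple arithmetic; a minimal sketch, assuming the current US FDA adult Daily Value of 90 mg for vitamin C (the helper name and table are illustrative):

```python
# Hypothetical helper: convert a nutrient amount per serving into % Daily Value.
DAILY_VALUES_MG = {"vitamin_c": 90.0}  # US FDA adult Daily Value for vitamin C

def percent_dv(nutrient: str, amount_mg: float) -> float:
    """Return the % Daily Value supplied by the given amount in milligrams."""
    return round(100.0 * amount_mg / DAILY_VALUES_MG[nutrient], 1)

# Roughly 41 mg of vitamin C per 100 g of white currants gives
# approximately the 46% DV cited in the nutrition table:
print(percent_dv("vitamin_c", 41.4))  # → 46.0
```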
| Biology and health sciences | Berries | Plants |
17158563 | https://en.wikipedia.org/wiki/Sheep | Sheep | Sheep (pl.: sheep) or domestic sheep (Ovis aries) are domesticated, ruminant mammals typically kept as livestock. Although the term sheep can apply to other species in the genus Ovis, in everyday usage it almost always refers to domesticated sheep. Like all ruminants, sheep are members of the order Artiodactyla, the even-toed ungulates. Numbering a little over one billion, domestic sheep are also the most numerous species of sheep. An adult female is referred to as a ewe, an intact male as a ram (occasionally a tup), a castrated male as a wether, and a young sheep as a lamb.
Sheep are most likely descended from the wild mouflon of Europe and Asia, with Iran being a geographic envelope of the domestication center. One of the earliest animals to be domesticated for agricultural purposes, sheep are raised for fleeces, meat (lamb, hogget or mutton), and milk. A sheep's wool is the most widely used animal fiber, and is usually harvested by shearing. In Commonwealth countries, ovine meat is called lamb when from younger animals and mutton when from older ones; in the United States, meat from both older and younger animals is usually called lamb. Sheep continue to be important for wool and meat today, and are also occasionally raised for pelts, as dairy animals, or as model organisms for science.
Sheep husbandry is practised throughout the majority of the inhabited world, and has been fundamental to many civilizations. In the modern era, Australia, New Zealand, the southern and central South American nations, and the British Isles are most closely associated with sheep production.
There is a large lexicon of unique terms for sheep husbandry which vary considerably by region and dialect. Use of the word sheep began in Middle English as a derivation of an Old English word. A group of sheep is called a flock. Many other specific terms for the various life stages of sheep exist, generally related to lambing, shearing, and age.
As a key animal in the history of farming, sheep have a deeply entrenched place in human culture, and are represented in much modern language and symbolism. As livestock, sheep are most often associated with pastoral, Arcadian imagery. Sheep figure in many mythologies—such as the Golden Fleece—and major religions, especially the Abrahamic traditions. In both ancient and modern religious ritual, sheep are used as sacrificial animals.
History
The exact line of descent from wild ancestors to domestic sheep is unclear. The most common hypothesis states that Ovis aries is descended from the Asiatic (O. gmelini) species of mouflon; the European mouflon (Ovis aries musimon) is a direct descendant of this population. Sheep were among the first animals to be domesticated by humankind (although the domestication of dogs probably took place 10 to 20 thousand years earlier); the domestication date is estimated to fall between 11,000 and 9000 BC in Mesopotamia and possibly around 7000 BC in Mehrgarh in the Indus Valley. The rearing of sheep for secondary products, and the resulting breed development, began in either southwest Asia or western Europe. Initially, sheep were kept solely for meat, milk and skins. Archaeological evidence from statuary found at sites in Iran suggests that selection for woolly sheep may have begun around 6000 BC, and the earliest woven wool garments have been dated to two to three thousand years later.
Sheep husbandry spread quickly in Europe. Excavations show that in about 6000 BC, during the Neolithic period of prehistory, the Castelnovien people, living around Châteauneuf-les-Martigues near present-day Marseille in the south of France, were among the first in Europe to keep domestic sheep. Practically from its inception, ancient Greek civilization relied on sheep as primary livestock, and Greeks were even said to name individual animals. The ancient Romans kept sheep on a wide scale and were important agents in the spread of sheep raising. Pliny the Elder, in his Natural History, speaks at length about sheep and wool. European colonists spread the practice to the New World from 1493 onwards.
Characteristics
Domestic sheep are relatively small ruminants, usually with a crimped hair called wool and often with horns forming a lateral spiral. They differ from their wild relatives and ancestors in several respects, having become uniquely neotenic as a result of selective breeding by humans. A few primitive breeds of sheep retain some of the characteristics of their wild cousins, such as short tails. Depending on breed, domestic sheep may have no horns at all (i.e. polled), or horns in both sexes, or in males only. Most horned breeds have a single pair, but a few breeds may have several.
Another trait unique to domestic sheep as compared to wild ovines is their wide variation in color. Wild sheep are largely variations of brown hues, and variation within species is extremely limited. Colors of domestic sheep range from pure white to dark chocolate brown, and even spotted or piebald. Sheep keepers also sometimes artificially paint "smit marks" onto their sheep in any pattern or color for identification. Selection for easily dyeable white fleeces began early in sheep domestication, and as white wool is a dominant trait it spread quickly. However, colored sheep do appear in many modern breeds, and may even appear as a recessive trait in white flocks. While white wool is desirable for large commercial markets, there is a niche market for colored fleeces, mostly for handspinning. The nature of the fleece varies widely among the breeds, from dense and highly crimped, to long and hairlike. There is variation of wool type and quality even among members of the same flock, so wool classing is a step in the commercial processing of the fibre.
Depending on breed, sheep show a range of heights and weights; rate of growth and mature weight are heritable traits that are often selected for in breeding, and rams are typically heavier than ewes. When all deciduous teeth have erupted, the sheep has 20 teeth; mature sheep have 32. As with other ruminants, the front teeth in the lower jaw bite against a hard, toothless pad in the upper jaw. These are used to pick off vegetation, which the rear teeth then grind before it is swallowed. There are eight lower front teeth in ruminants, but there is some disagreement as to whether these are eight incisors, or six incisors and two incisor-shaped canines; the dental formula for sheep is accordingly either 0.0.3.3/4.0.3.3 or 0.0.3.3/3.1.3.3. There is a large diastema between the incisors and the molars.
In the first few years of life, one can estimate the age of a sheep from its front teeth, as a pair of milk teeth is replaced by larger adult teeth each year, the full set of eight adult front teeth being complete at about four years of age. The front teeth are then gradually lost as sheep age, making it harder for them to feed and hindering the health and productivity of the animal. For this reason, domestic sheep on normal pasture begin to decline slowly from four years on; the life expectancy of a sheep is 10 to 12 years, though some may live as long as 20 years.

Sheep have good hearing, and are sensitive to noise when being handled. They have horizontal slit-shaped pupils with excellent peripheral vision; with visual fields of about 270° to 320°, sheep can see behind themselves without turning their heads. Many breeds have only short hair on the face, and some have facial wool (if any) confined to the poll and/or the area of the mandibular angle; the wide angles of peripheral vision apply to these breeds. A few breeds tend to have considerable wool on the face; for some individuals of these breeds, peripheral vision may be greatly reduced by "wool blindness" unless they are recently shorn about the face. Sheep have poor depth perception; shadows and dips in the ground may cause them to baulk. In general, sheep tend to move out of the dark into well-lit areas, and prefer to move uphill when disturbed. Sheep also have an excellent sense of smell, and, like all species of their genus, have scent glands just in front of the eyes and interdigitally on the feet. The purpose of these glands is uncertain, but those on the face may be used in breeding behaviors. The foot glands might also be related to reproduction, but alternative functions, such as secretion of a waste product or a scent marker to help lost sheep find their flock, have also been proposed.
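The tooth-based aging rule above amounts to counting pairs of adult front teeth; a minimal sketch of that rule (function name and mapping are illustrative, and real aging by dentition is only approximate):

```python
def estimate_min_age_years(adult_incisor_pairs: int) -> int:
    """Estimate a sheep's minimum age in years from adult front-tooth pairs (0-4).

    One pair of milk teeth is replaced by adult teeth each year, so a sheep
    with n adult pairs is at least about n years old; the full set of four
    pairs (eight front teeth) is complete at roughly four years of age.
    """
    if not 0 <= adult_incisor_pairs <= 4:
        raise ValueError("a sheep has at most four pairs of front teeth")
    return adult_incisor_pairs

# A sheep with two adult pairs ("two-tooth" plus one year) is at least ~2:
print(estimate_min_age_years(2))  # → 2
```

Beyond four years the rule breaks down, since teeth are gradually lost rather than replaced.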
Comparison with goats
Sheep and goats are closely related: both are in the subfamily Caprinae. However, they are separate species, so hybrids rarely occur and are always infertile. A hybrid of a ewe and a buck (a male goat) is called a sheep-goat hybrid, known as geep. Visual differences between sheep and goats include the beard of goats and divided upper lip of sheep. Sheep tails also hang down, even when short or docked, while the short tails of goats are held upwards. Also, sheep breeds are often naturally polled (either in both sexes or just in the female), while naturally polled goats are rare (though many are polled artificially). Males of the two species differ in that buck goats acquire a unique and strong odor during the rut, whereas rams do not.
Breeds
The domestic sheep is a multi-purpose animal, and the more than 200 breeds now in existence were created to serve these diverse purposes. Some sources give a count of a thousand or more breeds, but these numbers cannot be verified. However, several hundred breeds of sheep have been identified by the Food and Agriculture Organization of the UN (FAO), with the estimated number varying somewhat from time to time: e.g. 863 breeds as of 1993, 1314 breeds as of 1995 and 1229 breeds as of 2006. (These numbers exclude extinct breeds, which are also tallied by the FAO.) For the purpose of such tallies, the FAO definition of a breed is "either a subspecific group of domestic livestock with definable and identifiable external characteristics that enable it to be separated by visual appraisal from other similarly defined groups within the same species or a group for which geographical and/or cultural separation from phenotypically similar groups has led to acceptance of its separate identity." Almost all sheep are classified as being best suited to furnishing a certain product: wool, meat, milk, hides, or a combination in a dual-purpose breed. Other features used when classifying sheep include face color (generally white or black), tail length, presence or lack of horns, and the topography for which the breed has been developed. This last point is especially stressed in the UK, where breeds are described as either upland (hill or mountain) or lowland breeds. A sheep may also be of a fat-tailed type, which is a dual-purpose sheep common in Africa and Asia with larger deposits of fat within and around its tail.
Breeds are often categorized by the type of their wool. Fine wool breeds are those that have wool of great crimp and density, which are preferred for textiles. Most of these were derived from Merino sheep, and the breed continues to dominate the world sheep industry. Downs breeds have wool between the extremes, and are typically fast-growing meat and ram breeds with dark faces. Some major medium wool breeds, such as the Corriedale, are dual-purpose crosses of long and fine-wooled breeds and were created for high-production commercial flocks. Long wool breeds are the largest of sheep, with long wool and a slow rate of growth. Long wool sheep are most valued for crossbreeding to improve the attributes of other sheep types. For example: the American Columbia breed was developed by crossing Lincoln rams (a long wool breed) with fine-wooled Rambouillet ewes.
Coarse or carpet wool sheep are those with a medium to long length wool of characteristic coarseness. Breeds traditionally used for carpet wool show great variability, but the chief requirement is a wool that will not break down under heavy use (as would that of the finer breeds). As the demand for carpet-quality wool declines, some breeders of this type of sheep are attempting to use a few of these traditional breeds for alternative purposes. Others have always been primarily meat-class sheep.
A minor class of sheep are the dairy breeds. Dual-purpose breeds that may primarily be meat or wool sheep are often used secondarily as milking animals, but a few breeds are predominantly used for milking. These sheep produce a higher quantity of milk and have slightly longer lactation curves. In the quality of their milk, the fat and protein content percentages of dairy sheep differ from those of non-dairy breeds, but lactose content does not.
A last group of sheep breeds is that of fur or hair sheep, which do not grow wool at all. Hair sheep are similar to the early domesticated sheep kept before woolly breeds were developed, and are raised for meat and pelts. Some modern breeds of hair sheep, such as the Dorper, result from crosses between wool and hair breeds. For meat and hide producers, hair sheep are cheaper to keep, as they do not need shearing. Hair sheep are also more resistant to parasites and hot weather.
With the modern rise of corporate agribusiness and the decline of localized family farms, many breeds of sheep are in danger of extinction. The Rare Breeds Survival Trust of the UK lists 22 native breeds as having only 3,000 registered animals (each), and The Livestock Conservancy lists 14 as either "critical" or "threatened". Preferences for breeds with uniform characteristics and fast growth have pushed heritage (or heirloom) breeds to the margins of the sheep industry. Those that remain are maintained through the efforts of conservation organizations, breed registries, and individual farmers dedicated to their preservation.
Diet
Herbivory
Sheep are herbivorous. Most breeds prefer to graze on grass and other short roughage, avoiding the taller woody parts of plants that goats readily consume. Both sheep and goats use their lips and tongues to select parts of the plant that are easier to digest or higher in nutrition. Sheep, however, graze well in monoculture pastures where most goats fare poorly.
Like all ruminants, sheep have a complex digestive system composed of four chambers, allowing them to break down cellulose from stems, leaves, and seed hulls into simpler carbohydrates. When sheep graze, vegetation is chewed into a mass called a bolus, which is then passed into the rumen, via the reticulum. The rumen is a 19- to 38-liter (5 to 10 gallon) organ in which feed is fermented. The fermenting organisms include bacteria, fungi, and protozoa. (Other important rumen organisms include some archaea, which produce methane from carbon dioxide.) The bolus is periodically regurgitated back to the mouth as cud for additional chewing and salivation. After fermentation in the rumen, feed passes into the reticulum and the omasum; special feeds such as grains may bypass the rumen altogether. After the first three chambers, food moves into the abomasum for final digestion before processing by the intestines. The abomasum is the only one of the four chambers analogous to the human stomach, and is sometimes called the "true stomach".
Other than forage, the other staple feed for sheep is hay, often during the winter months. The ability to thrive solely on pasture (even without hay) varies with breed, but all sheep can survive on this diet. Also included in some sheep's diets are minerals, either in a trace mix or in licks. Feed provided to sheep must be specially formulated, as most cattle, poultry, pig, and even some goat feeds contain levels of copper that are lethal to sheep. The same danger applies to mineral supplements such as salt licks.
Grazing behavior
Sheep follow a diurnal pattern of activity, feeding from dawn to dusk, stopping sporadically to rest and chew their cud. Ideal pasture for sheep is not lawnlike grass, but an array of grasses, legumes and forbs. Types of land where sheep are raised vary widely, from pastures that are seeded and improved intentionally to rough, native lands. Common plants toxic to sheep are present in most of the world, and include (but are not limited to) cherry, some oaks and acorns, tomato, yew, rhubarb, potato, and rhododendron.
Sheep are largely grazing herbivores, unlike browsing animals such as goats and deer that prefer taller foliage. With a much narrower face, sheep crop plants very close to the ground and can overgraze a pasture much faster than cattle. For this reason, many shepherds use managed intensive rotational grazing, where a flock is rotated through multiple pastures, giving plants time to recover. Paradoxically, sheep can both cause and solve the spread of invasive plant species. By disturbing the natural state of pasture, sheep and other livestock can pave the way for invasive plants. However, sheep also prefer to eat invasives such as cheatgrass, leafy spurge, kudzu and spotted knapweed over native species such as sagebrush, making grazing sheep effective for conservation grazing. Research conducted in Imperial County, California compared lamb grazing with herbicides for weed control in seedling alfalfa fields. Three trials demonstrated that grazing lambs were just as effective as herbicides in controlling winter weeds. Entomologists also compared grazing lambs to insecticides for insect control in winter alfalfa. In this trial, lambs provided insect control as effectively as insecticides.
Behavior
Flock behavior
Sheep are flock animals and strongly gregarious; much sheep behavior can be understood on the basis of these tendencies. The dominance hierarchy of sheep and their natural inclination to follow a leader to new pastures were the pivotal factors in sheep being one of the first domesticated livestock species. Furthermore, in contrast to the red deer and gazelle (two other ungulates of primary importance to meat production in prehistoric times), sheep do not defend territories although they do form home ranges. All sheep have a tendency to congregate close to other members of a flock, although this behavior varies with breed, and sheep can become stressed when separated from their flock members. During flocking, sheep have a strong tendency to follow, and a leader may simply be the first individual to move. Relationships in flocks tend to be closest among related sheep: in mixed-breed flocks, subgroups of the same breed tend to form, and a ewe and her direct descendants often move as a unit within large flocks. Sheep can become hefted to one particular local pasture (heft) so they do not roam freely in unfenced landscapes. Lambs learn the heft from ewes and if whole flocks are culled it must be retaught to the replacement animals.
Flock behaviour in sheep is generally only exhibited in groups of four or more sheep; fewer sheep may not react as expected when alone or with few other sheep. Being a prey species, the primary defense mechanism of sheep is to flee from danger when their flight zone is entered. Cornered sheep may charge and butt, or threaten by hoof stamping and adopting an aggressive posture. This is particularly true for ewes with newborn lambs.
In regions where sheep have no natural predators, none of the native breeds of sheep exhibit a strong flocking behavior.
Herding
Farmers exploit flocking behavior to keep sheep together on unfenced pastures, such as in hill farming, and to move them more easily. Shepherds may use herding dogs, with a highly bred herding ability, for this purpose. Sheep are food-oriented, and association of humans with regular feeding often results in sheep soliciting people for food. Those who are moving sheep may exploit this behavior by leading sheep with buckets of feed.
Dominance hierarchy
Sheep establish a dominance hierarchy through fighting, threats and competitiveness. Dominant animals are inclined to be more aggressive with other sheep, and usually feed first at troughs. Primarily among rams, horn size is a factor in the flock hierarchy. Rams with different size horns may be less inclined to fight to establish the dominance order, while rams with similarly sized horns are more so. Merinos have an almost linear hierarchy whereas there is a less rigid structure in Border Leicesters when a competitive feeding situation arises.
In sheep, position in a moving flock is highly correlated with social dominance, but there is no definitive study to show consistent voluntary leadership by an individual sheep.
Intelligence and learning ability
Sheep are frequently thought of as unintelligent animals. Their flocking behavior and quickness to flee and panic can make shepherding a difficult endeavor for the uninitiated. Despite these perceptions, a University of Illinois monograph on sheep reported their intelligence to be just below that of pigs and on a par with that of cattle. In a study published in Nature in 2001, Kenneth M. Kendrick and others reported: "Sheep recognize and are attracted to individual sheep and humans by their faces, as they possess similar specialized neural systems in the temporal and frontal lobes ... individual sheep can remember 50 other different sheep faces for over 2 years". In addition to long-term facial recognition of individuals, sheep can also differentiate emotional states through facial characteristics. If worked with patiently, sheep may learn their names, and many sheep are trained to be led by halter for showing and other purposes. Sheep have also responded well to clicker training. Sheep have been used as pack animals; Tibetan nomads distribute baggage equally throughout a flock as it is herded between living sites.
It has been reported that some sheep have apparently shown problem-solving abilities; a flock in West Yorkshire, England, allegedly found a way to get over cattle grids by rolling on their backs, although documentation of this has relied on anecdotal accounts.
Vocalisations
Sounds made by domestic sheep include bleats, grunts, rumbles and snorts. Bleating ("baaing") is used mostly for contact communication, especially between dam and lambs, but also at times between other flock members. The bleats of individual sheep are distinctive, enabling the ewe and her lambs to recognize each other's vocalizations. Vocal communication between lambs and their dam declines to a very low level within several weeks after parturition. A variety of bleats may be heard, depending on sheep age and circumstances. Apart from contact communication, bleating may signal distress, frustration or impatience; however, sheep are usually silent when in pain. Isolation commonly prompts bleating by sheep. Pregnant ewes may grunt when in labor. Rumbling sounds are made by the ram during courting; somewhat similar rumbling sounds may be made by the ewe, especially when with her neonate lambs. A snort (explosive exhalation through the nostrils) may signal aggression or a warning, and is often elicited from startled sheep.
Senses
In sheep breeds lacking facial wool, the visual field is wide. In 10 sheep (Cambridge, Lleyn and Welsh Mountain breeds, which lack facial wool), the visual field ranged from 298° to 325°, averaging 313.1°, with binocular overlap ranging from 44.5° to 74°, averaging 61.7°. In some breeds, unshorn facial wool can limit the visual field; in some individuals, this may be enough to cause "wool blindness". In 60 Merinos, visual fields ranged from 219.1° to 303.0°, averaging 269.9°, and the binocular field ranged from 8.9° to 77.7°, averaging 47.5°; 36% of the measurements were limited by wool, although photographs of the experiments indicate that only limited facial wool regrowth had occurred since shearing. In addition to facial wool (in some breeds), visual field limitations can include ears and (in some breeds) horns, so the visual field can be extended by tilting the head. Sheep eyes exhibit very low hyperopia and little astigmatism. Such visual characteristics are likely to produce a well-focused retinal image of objects in both the middle and long distance. Because sheep eyes have no accommodation, one might expect the image of very near objects to be blurred, but a rather clear near image could be provided by the tapetum and large retinal image of the sheep's eye, and adequate close vision may occur at muzzle length. Good depth perception, inferred from the sheep's sure-footedness, was confirmed in "visual cliff" experiments; behavioral responses indicating depth perception are seen in lambs at one day old. Sheep are thought to have colour vision, and can distinguish between a variety of colours: black, red, brown, green, yellow, and white.
Sight is a vital part of sheep communication, and when grazing, they maintain visual contact with each other. Each sheep lifts its head upwards to check the position of other sheep in the flock. This constant monitoring is probably what keeps the sheep in a flock as they move along grazing. Sheep become stressed when isolated; this stress is reduced if they are provided with a mirror, indicating that the sight of other sheep reduces stress.
Taste is the most important sense in sheep, establishing forage preferences, with sweet and sour plants being preferred and bitter plants being more commonly rejected. Touch and sight are also important in relation to specific plant characteristics, such as succulence and growth form. The ram uses his vomeronasal organ (sometimes called the Jacobson's organ) to sense the pheromones of ewes and detect when they are in estrus. The ewe uses her vomeronasal organ for early recognition of her neonate lamb.
Reproduction
Sheep follow a similar reproductive strategy to other herd animals. A group of ewes is generally mated by a single ram, who has either been chosen by a breeder or (in feral populations) has established dominance through physical contest with other rams. Most sheep are seasonal breeders, although some are able to breed year-round. Ewes generally reach sexual maturity at six to eight months old, and rams generally at four to six months. However, there are exceptions. For example, Finnsheep ewe lambs may reach puberty as early as 3 to 4 months, and Merino ewes sometimes reach puberty at 18 to 20 months. Ewes have estrus cycles about every 17 days, during which they emit a scent and indicate readiness through physical displays towards rams.
In feral sheep, rams may fight during the rut to determine which individuals may mate with ewes. Rams, especially unfamiliar ones, will also fight outside the breeding period to establish dominance; rams can kill one another if allowed to mix freely. During the rut, even usually friendly rams may become aggressive towards humans due to increases in their hormone levels.
After mating, sheep have a gestation period of about five months, and normal labor takes one to three hours. Although some breeds regularly throw larger litters of lambs, most produce single or twin lambs. During or soon after labor, ewes and lambs may be confined to small lambing jugs, small pens designed to aid both careful observation of ewes and to cement the bond between them and their lambs.
Ovine obstetrics can be problematic. By selectively breeding ewes that produce multiple offspring with higher birth weights for generations, sheep producers have inadvertently caused some domestic sheep to have difficulty lambing; balancing ease of lambing with high productivity is one of the dilemmas of sheep breeding. In the case of any such problems, those present at lambing may assist the ewe by extracting or repositioning lambs. After the birth, ewes ideally break the amniotic sac (if it is not broken during labor), and begin licking clean the lamb. Most lambs will begin standing within an hour of birth. In normal situations, lambs nurse after standing, receiving vital colostrum milk. Lambs that either fail to nurse or are rejected by the ewe require help to survive, such as bottle-feeding or fostering by another ewe.
Most lambs begin life being born outdoors. After lambs are several weeks old, lamb marking (ear tagging, docking, mulesing, and castrating) is carried out. Vaccinations are usually carried out at this point as well. Ear tags with numbers are attached, or ear marks are applied, for ease of later identification of sheep. Docking and castration are commonly done after 24 hours (to avoid interference with maternal bonding and consumption of colostrum) and are often done not later than one week after birth, to minimize pain, stress, recovery time and complications. The first course of vaccinations (commonly anti-clostridial) is commonly given at an age of about 10 to 12 weeks; i.e. when the concentration of maternal antibodies passively acquired via colostrum is expected to have fallen low enough to permit development of active immunity. Ewes are often revaccinated annually about 3 weeks before lambing, to provide high antibody concentrations in colostrum during the first several hours after lambing. Ram lambs that will either be slaughtered or separated from ewes before sexual maturity are not usually castrated. Objections to all these procedures have been raised by animal rights groups, but farmers defend them by saying they save money, and inflict only temporary pain.
Sheep are the only mammal species, apart from humans, in which individuals exhibit exclusive homosexual behavior. About 10% of rams refuse to mate with ewes but readily mate with other rams, and about 30% of all rams demonstrate at least some homosexual behavior. Additionally, a small number of females that were accompanied by a male fetus in utero (i.e. as fraternal twins) are freemartins (female animals that are behaviorally masculine and lack functioning ovaries).
Health
Sheep may fall victim to poisons, infectious diseases, and physical injuries. As a prey species, a sheep's system is adapted to hide the obvious signs of illness, to prevent being targeted by predators. However, some signs of ill health are obvious, with sick sheep eating little, vocalizing excessively, and being generally listless. Throughout history, much of the money and labor of sheep husbandry has aimed to prevent sheep ailments. Historically, shepherds often created remedies by experimentation on the farm. In some developed countries, including the United States, sheep lack the economic importance for drug companies to perform expensive clinical trials required to approve more than a relatively limited number of drugs for ovine use. However, extra-label drug use in sheep production is permitted in many jurisdictions, subject to certain restrictions. In the US, for example, regulations governing extra-label drug use in animals are found in 21 CFR (Code of Federal Regulations) Part 530. In the 20th and 21st centuries, a minority of sheep owners have turned to alternative treatments such as homeopathy, herbalism and even traditional Chinese medicine to treat sheep veterinary problems. Despite some favorable anecdotal evidence, the effectiveness of alternative veterinary medicine has been met with skepticism in scientific journals. The need for traditional anti-parasite drugs and antibiotics is widespread, and is the main impediment to certified organic farming with sheep.
Many breeders take a variety of preventive measures to ward off problems. The first is to ensure all sheep are healthy when purchased. Many buyers avoid outlets known to be clearing houses for animals culled from healthy flocks as either sick or simply inferior. This can also mean maintaining a closed flock, and quarantining new sheep for a month. Two fundamental preventive programs are maintaining good nutrition and reducing stress in the sheep. Restraint, isolation, loud noises, novel situations, pain, heat, extreme cold, fatigue and other stressors can lead to secretion of cortisol, a stress hormone, in amounts that may indicate welfare problems. Excessive stress can compromise the immune system. "Shipping fever" (pneumonic mannheimiosis, formerly called pasteurellosis) is a disease of particular concern that can occur as a result of stress, notably during transport and/or handling. Pain, fear and several other stressors can cause secretion of epinephrine (adrenaline). Considerable epinephrine secretion in the final days before slaughter can adversely affect meat quality (by causing glycogenolysis, removing the substrate for normal post-slaughter acidification of meat) and result in meat becoming more susceptible to colonization by spoilage bacteria. Because of such issues, low-stress handling is essential in sheep management. Avoiding poisoning is also important; common poisons include pesticide sprays, inorganic fertilizer, motor oil, and radiator coolant containing ethylene glycol.
Common forms of preventive medication for sheep are vaccinations and treatments for parasites. External and internal parasites are the most prevalent maladies in sheep, and are either fatal or reduce the productivity of flocks. Worms are the most common internal parasites. They are ingested during grazing, incubate within the sheep, and are expelled through the digestive system (beginning the cycle again). Oral anti-parasitic medicines, known as drenches, are given to a flock to treat worms, sometimes after worm eggs in the feces have been counted to assess infestation levels. Afterwards, sheep may be moved to a new pasture to avoid ingesting the same parasites. External sheep parasites include lice (on different parts of the body), sheep keds, nose bots, sheep itch mites, and maggots. Keds are blood-sucking parasites that cause general malnutrition and decreased productivity, but are not fatal. Maggots are those of the bot fly and the blow-fly, commonly Lucilia sericata or its relative L. cuprina. Fly maggots cause the extremely destructive condition of flystrike. Flies lay their eggs in wounds or wet, manure-soiled wool; when the maggots hatch they burrow into a sheep's flesh, eventually causing death if untreated. In addition to other treatments, crutching (shearing wool from a sheep's rump) is a common preventive method. Some countries allow mulesing, a practice that involves stripping away the skin on the rump to prevent flystrike, normally performed when the sheep is a lamb. Nose bots are fly larvae that inhabit a sheep's sinuses, causing breathing difficulties and discomfort. Common signs are a discharge from the nasal passage, sneezing, and frantic movement such as head shaking. External parasites may be controlled through the use of backliners, sprays or immersive sheep dips.
A wide array of bacterial and viral diseases affect sheep. Diseases of the hoof, such as foot rot and foot scald may occur, and are treated with footbaths and other remedies. Foot rot is present in over 97% of flocks in the UK. These painful conditions cause lameness and hinder feeding. Ovine Johne's disease is a wasting disease that affects young sheep. Bluetongue disease is an insect-borne illness causing fever and inflammation of the mucous membranes. Ovine rinderpest (or peste des petits ruminants) is a highly contagious and often fatal viral disease affecting sheep and goats. Sheep may also be affected by primary or secondary photosensitization. Tetanus can also afflict sheep through wounds from shearing, docking, castration, or vaccination. The organism also can be introduced into the reproductive tract by unsanitary humans who assist ewes during lambing.
A few sheep conditions are transmissible to humans. Orf (also known as scabby mouth, contagious ecthyma or soremouth) is a skin disease that leaves lesions and is transmitted through skin-to-skin contact. Cutaneous anthrax is also called woolsorter's disease, as the spores can be transmitted in unwashed wool. More seriously, the organisms that can cause spontaneous enzootic abortion in sheep are easily transmitted to pregnant women. Also of concern are the prion disease scrapie and the virus that causes foot-and-mouth disease (FMD), as both can devastate flocks. The latter poses a slight risk to humans. During the 2001 FMD outbreak in the UK, millions of sheep were culled and some rare British breeds were at risk of extinction as a result.
Of the 600,300 sheep lost to the US economy in 2004, 37.3% were lost to predators, while 26.5% were lost to some form of disease. Poisoning accounted for 1.7% of non-productive deaths.
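These loss percentages translate into approximate head counts by simple multiplication; a minimal sketch in Python (figures taken from the text above):

```python
# Approximate 2004 US sheep losses implied by the cited shares.
total_lost = 600_300  # total sheep lost in 2004

shares = {
    "predators": 0.373,  # 37.3% of all losses
    "disease": 0.265,    # 26.5%
    "poisoning": 0.017,  # 1.7%
}

for cause, share in shares.items():
    print(f"{cause}: ~{round(total_lost * share):,} head")
```

This puts predator losses at roughly 224,000 head and disease losses at roughly 159,000.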
Predators
Other than parasites and disease, predation is a threat to sheep and the profitability of sheep raising. Sheep have little ability to defend themselves, compared with other species kept as livestock. Even if sheep survive an attack, they may die from their injuries or simply from panic. However, the impact of predation varies dramatically with region. In Africa, Australia, the Americas, and parts of Europe and Asia predators are a serious problem. In the United States, for instance, over one third of sheep deaths in 2004 were caused by predation. In contrast, other nations are virtually devoid of sheep predators, particularly islands known for extensive sheep husbandry. Worldwide, canids—including the domestic dog—are responsible for most sheep deaths. Other animals that occasionally prey on sheep include: felines, bears, birds of prey, ravens and feral hogs.
Sheep producers have used a wide variety of measures to combat predation. Pre-modern shepherds used their own presence, livestock guardian dogs, and protective structures such as barns and fencing. Fencing (both regular and electric), penning sheep at night and lambing indoors all continue to be widely used. More modern shepherds used guns, traps, and poisons to kill predators, causing significant decreases in predator populations. In the wake of the environmental and conservation movements, the use of these methods now usually falls under the purview of specially designated government agencies in most developed countries.
The 1970s saw a resurgence in the use of livestock guardian dogs and the development of new methods of predator control by sheep producers, many of them non-lethal. Donkeys and guard llamas have been used since the 1980s in sheep operations, using the same basic principle as livestock guardian dogs. Interspecific pasturing, usually with larger livestock such as cattle or horses, may help to deter predators, even if such species do not actively guard sheep. In addition to animal guardians, contemporary sheep operations may use non-lethal predator deterrents such as motion-activated lights and noisy alarms.
Economic importance
Sheep are an important part of the global agricultural economy. However, their once vital status has been largely replaced by other livestock species, especially the pig, chicken, and cow. China, Australia, India, and Iran have the largest modern flocks, and serve both local and export needs for wool and mutton. Other countries such as New Zealand have smaller flocks but retain a large international economic impact due to their export of sheep products. Sheep also play a major role in many local economies, which may be niche markets focused on organic or sustainable agriculture and local food customers. Especially in developing countries, such flocks may be a part of subsistence agriculture rather than a system of trade. Sheep themselves may be a medium of trade in barter economies.
Domestic sheep provide a wide array of raw materials. Wool was one of the first textiles, although in the late 20th century wool prices began to fall dramatically as the result of the popularity and cheap prices for synthetic fabrics. For many sheep owners, the cost of shearing is greater than the possible profit from the fleece, making subsisting on wool production alone practically impossible without farm subsidies. Fleeces are used as material in making alternative products such as wool insulation. In the 21st century, the sale of meat is the most profitable enterprise in the sheep industry, even though far less sheep meat is consumed than chicken, pork or beef.
Sheepskin is likewise used for making clothes, footwear, rugs, and other products. Byproducts from the slaughter of sheep are also of value: sheep tallow can be used in candle and soap making, and sheep bone and cartilage have been used to furnish carved items such as dice and buttons as well as rendered glue and gelatin. Sheep intestine can be formed into sausage casings, and lamb intestine has been formed into surgical sutures, as well as strings for musical instruments and tennis rackets. Sheep droppings, which are high in cellulose, have even been sterilized and mixed with traditional pulp materials to make paper. Of all sheep byproducts, perhaps the most valuable is lanolin: the waterproof, fatty substance found naturally in sheep's wool and used as a base for innumerable cosmetics and other products.
Some farmers who keep sheep also make a profit from live sheep. Providing lambs for youth programs such as 4-H and competition at agricultural shows is often a dependable avenue for the sale of sheep. Farmers may also choose to focus on a particular breed of sheep in order to sell registered purebred animals, as well as provide a ram rental service for breeding. A new option for deriving profit from live sheep is the rental of flocks for grazing; these "mowing services" are hired in order to keep unwanted vegetation down in public spaces and to lessen fire hazard.
Despite the falling demand and price for sheep products in many markets, sheep have distinct economic advantages when compared with other livestock. They do not require expensive housing, such as that used in the intensive farming of chickens or pigs. They are an efficient use of land; roughly six sheep can be kept on the amount of land that would suffice for a single cow or horse. Sheep can also consume plants, such as noxious weeds, that most other animals will not touch, and produce more young at a faster rate. Also, in contrast to most livestock species, the cost of raising sheep is not necessarily tied to the price of feed crops such as grain, soybeans and corn. Combined with the lower cost of quality sheep, these factors result in lower overhead for sheep producers, and thus a higher profitability potential for the small farmer. Sheep are especially beneficial for independent producers, including family farms with limited resources, as the sheep industry is one of the few types of animal agriculture that has not been vertically integrated by agribusiness. However, small flocks, from 10 to 50 ewes, often are not profitable because they tend to be poorly managed. The primary reason is that mechanization is not feasible, so return per hour of labor is not maximized. Small farm flocks generally are used simply to control weeds on irrigation ditches or maintained as a hobby.
As food
Sheep meat and milk were among the earliest staple proteins consumed by human civilization after the transition from hunting and gathering to agriculture. Sheep meat prepared for food is known as either mutton or lamb, and approximately 540 million sheep are slaughtered each year for meat worldwide. "Mutton" is derived from the Old French moton, which was the word for sheep used by the Anglo-Norman rulers of much of the British Isles in the Middle Ages. This became the name for sheep meat in English, while the Old English word sceap was kept for the live animal. Throughout modern history, "mutton" has been limited to the meat of mature sheep, usually at least two years of age, while "lamb" is used for that of immature sheep less than a year old.
In the 21st century, the nations with the highest consumption of sheep meat are the Arab states of the Persian Gulf, New Zealand, Australia, Greece, Uruguay, the United Kingdom and Ireland. These countries eat 14–40 lbs (6–18 kg) of sheep meat per capita, per annum. Sheep meat is also popular in France, Africa (especially the Arab world), the Caribbean, the rest of the Middle East, India, and parts of China. This often reflects a history of sheep production. In these countries in particular, dishes comprising alternative cuts and offal may be popular or traditional. Sheep testicles—called animelles or lamb fries—are considered a delicacy in many parts of the world. Perhaps the most unusual dish of sheep meat is the Scottish haggis, composed of various sheep innards cooked along with oatmeal and chopped onions inside the sheep's stomach. In comparison, countries such as the U.S. consume only a pound or less (under 0.5 kg) per capita, with Americans eating 50 pounds (22 kg) of pork and 65 pounds (29 kg) of beef. In addition, such countries rarely eat mutton, and may favor the more expensive cuts of lamb: mostly lamb chops and leg of lamb.
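The metric equivalents in parentheses follow from the standard pound-to-kilogram conversion; a quick check in Python, using the per-capita figures quoted above:

```python
LB_TO_KG = 0.45359237  # exact definition of the avoirdupois pound

# Per-capita sheep meat, pork and beef figures quoted in the text (lb).
for lb in (14, 40, 50, 65):
    print(f"{lb} lb = {lb * LB_TO_KG:.1f} kg")
```

This confirms that 14–40 lb corresponds to roughly 6–18 kg.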
Though sheep's milk may be drunk rarely in fresh form, today it is used predominantly in cheese and yogurt making. Sheep have only two teats, and produce a far smaller volume of milk than cows. However, as sheep's milk contains far more fat, solids, and minerals than cow's milk, it is ideal for the cheese-making process. It also resists contamination during cooling better because of its much higher calcium content. Well-known cheeses made from sheep milk include the feta of Bulgaria and Greece, Roquefort of France, Manchego from Spain, and the pecorino romano (from pecora, the Italian word for "sheep") and ricotta of Italy. Yogurts, especially some forms of strained yogurt, may also be made from sheep milk. Many of these products are now often made with cow's milk, especially when produced outside their country of origin. Sheep milk contains 4.8% lactose, which may affect those who are lactose intolerant.
As with other domestic animals, the meat of uncastrated males is inferior in quality, especially as they grow. A "bucky" lamb is a lamb which was not castrated early enough, or which was castrated improperly (resulting in one testicle being retained). These lambs are worth less at market.
In science
Sheep are generally too large and reproduce too slowly to make ideal research subjects, and thus are not a common model organism. They have, however, played an influential role in some fields of science. In particular, the Roslin Institute of Edinburgh, Scotland, used sheep for genetics research that produced groundbreaking results. In 1995, two ewes named Megan and Morag were the first mammals cloned from differentiated cells. A year later, a Finn-Dorset ewe named Dolly, dubbed "the world's most famous sheep" in Scientific American, was the first mammal to be cloned from an adult somatic cell. Following this, Polly and Molly were the first mammals to be simultaneously cloned and transgenic.
As of 2008, the sheep genome has not been fully sequenced, although a detailed genetic map has been published, and a draft version of the complete genome has been produced by assembling sheep DNA sequences using information from the genomes of other mammals. In 2012, a transgenic sheep named "Peng Peng" was cloned by Chinese scientists, who spliced its genes with those of a roundworm (C. elegans) in order to increase production of fats healthier for human consumption.
In the study of natural selection, the population of Soay sheep that remain on the island of Hirta have been used to explore the relation of body size and coloration to reproductive success. Soay sheep come in several colors, and researchers investigated why the larger, darker sheep were in decline; this occurrence contradicted the rule of thumb that larger members of a population tend to be more successful reproductively. The feral Soays on Hirta are especially useful subjects because they are isolated.
Domestic sheep are sometimes used in medical research, particularly for researching cardiovascular physiology, in areas such as hypertension and heart failure. Pregnant sheep are also a useful model for human pregnancy, and have been used to investigate the effects on fetal development of malnutrition and hypoxia. In behavioral sciences, sheep have been used in isolated cases for the study of facial recognition, as their mental process of recognition is qualitatively similar to that of humans.
In culture
Folklore and literature
Sheep have had a strong presence in many cultures, especially in areas where they form the most common type of livestock. In the English language, to call someone a sheep or ovine may suggest that they are timid and easily led. In contradiction to this image, male sheep are often used as symbols of virility and power; the logos of the Los Angeles Rams football team and the Dodge Ram pickup truck allude to males of the bighorn sheep, Ovis canadensis.
Counting sheep is popularly said to be an aid to sleep, and some ancient systems of counting sheep persist today. Sheep also appear frequently in colloquial sayings and idioms, such as "black sheep". To call an individual a black sheep implies that they are an odd or disreputable member of a group. This usage derives from the recessive trait that causes an occasional black lamb to be born into an entirely white flock. These black sheep were considered undesirable by shepherds, as black wool is not as commercially viable as white wool. Citizens who accept overbearing governments have been referred to by the portmanteau neologism "sheeple". Somewhat differently, the adjective "sheepish" is also used to describe embarrassment.
In British heraldry, sheep appear in the form of rams, sheep proper and lambs. These are distinguished by the ram being depicted with horns and a tail, the sheep with neither and the lamb with its tail only. A further variant of the lamb, termed the Paschal lamb, is depicted as carrying a Christian cross and with a halo over its head. Rams' heads, portrayed without a neck and facing the viewer, are also found in British armories. The fleece, depicted as an entire sheepskin carried by a ring around its midsection, originally became known through its use in the arms of the Order of the Golden Fleece and was later adopted by towns and individuals with connections to the wool industry. In Australian English slang, "on the sheep's back" is a phrase used to allude to wool as the source of Australia’s national prosperity.
Sheep are key symbols in fables and nursery rhymes like The Wolf in Sheep's Clothing, Little Bo Peep, Baa, Baa, Black Sheep, and Mary Had a Little Lamb; novels such as George Orwell's Animal Farm and Haruki Murakami's A Wild Sheep Chase; songs such as Bach's Sheep may safely graze (Schafe können sicher weiden) and Pink Floyd's "Sheep", and poems like William Blake's "The Lamb".
Religion
In antiquity, symbolism involving sheep cropped up in religions in the ancient Near East, the Mideast, and the Mediterranean area: Çatalhöyük, ancient Egyptian religion, the Canaanite and Phoenician tradition, Judaism, Greek religion, and others. Religious symbolism and ritual involving sheep began with some of the first known faiths: skulls of rams (along with bulls) occupied central placement in shrines at the Çatalhöyük settlement in 8,000 BCE. In Ancient Egyptian religion, the ram was the symbol of several gods: Khnum, Heryshaf and Amun (in his incarnation as a god of fertility). Other deities occasionally shown with ram features include the goddess Ishtar, the Phoenician god Baal-Hamon, and the Babylonian god Ea-Oannes. In Madagascar, sheep were not eaten, as they were believed to be incarnations of the souls of ancestors.
There are many ancient Greek references to sheep: the myth of Chrysomallos, the golden-fleeced ram, continues to be told into the modern era. Astrologically, Aries, the ram, is the first sign of the classical Greek zodiac, and the sheep is the eighth of the twelve animals associated with the 12-year cycle of the Chinese zodiac, related to the Chinese calendar. In Chinese traditions, it is said that Hou Ji sacrificed sheep. In Mongolia, shagai are an ancient form of dice made from the cuboid bones of sheep that are often used for fortunetelling purposes.
Sheep play an important role in all the Abrahamic faiths; Abraham, Isaac, Jacob, Moses, and King David were all shepherds. According to the Biblical story of the Binding of Isaac, a ram is sacrificed as a substitute for Isaac after an angel stays Abraham's hand (in the Islamic tradition, Abraham was about to sacrifice Ishmael). Eid al-Adha is a major annual festival in Islam in which sheep (or other animals) are sacrificed in remembrance of this act. Sheep are occasionally sacrificed to commemorate important secular events in Islamic cultures. Greeks and Romans sacrificed sheep regularly in religious practice, and Judaism once sacrificed sheep as a Korban (sacrifice), such as the Passover lamb. Ovine symbols—such as the ceremonial blowing of a shofar—still find a presence in modern Judaic traditions.
Collectively, followers of Christianity are often referred to as a flock, with Christ as the Good Shepherd, and sheep are an element in the Christian iconography of the birth of Jesus. Some Christian saints are considered patrons of shepherds, and even of sheep themselves. Christ is also portrayed as the Sacrificial lamb of God (Agnus Dei), and Easter celebrations in Greece and Romania traditionally feature a meal of Paschal lamb. A church leader is often called the pastor, a word derived from the Latin for shepherd. In many western Christian traditions, bishops carry a staff known as a crosier, modeled on the shepherd's crook, which also serves as a symbol of the episcopal office.
Accommodation (vertebrate eye)
Accommodation is the process by which the vertebrate eye changes optical power to maintain a clear image of, or focus on, an object as its distance varies. The range over which the eye can focus varies among individuals, extending from the far point—the maximum distance from the eye at which a clear image of an object can be seen—to the near point—the minimum distance at which a clear image can be formed.
Accommodation usually acts like a reflex, including part of the accommodation-convergence reflex, but it can also be consciously controlled.
The main ways animals may change focus are:
Changing the shape of the lens.
Changing the position of the lens relative to the retina.
Changing the axial length of the eyeball.
Changing the shape of the cornea.
Focusing mechanisms
Focusing the light scattered by objects in a three-dimensional environment into a two-dimensional collection of individual bright points of light requires the light to be bent. Getting a good image of these points of light on a defined area requires a precise, systematic bending of light called refraction. The real image formed from millions of these points of light is what animals see using their retinas. Very even, systematic curvature of parts of the cornea and lens produces this systematic bending of light onto the retina. Due to the nature of optics, the focused image on the retina is always inverted relative to the object.
Different animals live in different environments having different refractive indexes, involving water, air and often both. Their eyes are therefore required to bend light by different amounts, leading to different mechanisms of focus being used in different environments. The air/cornea interface involves a larger difference in refractive index than hydrated structures within the eye. As a result, animals living in air have most of the bending of light achieved at the air/cornea interface, with the lens being involved in finer focus of the image. Generally, mammals, birds and reptiles living in air vary their eyes' optical power by subtly and precisely changing the shape of the elastic lens using the ciliary body.
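Why the air/cornea interface dominates can be illustrated with the power of a single spherical refracting surface (the numbers below are typical textbook values, used here only for illustration):

```latex
P \;=\; \frac{n_2 - n_1}{r}
```

With a corneal refractive index of about $n_2 \approx 1.376$ and an anterior radius of curvature of about $r \approx 7.8\,\mathrm{mm}$, the interface contributes $P \approx (1.376 - 1.000)/0.0078 \approx 48\,\mathrm{D}$ in air, but only $P \approx (1.376 - 1.333)/0.0078 \approx 5.5\,\mathrm{D}$ in water, which is why eyes that evolved underwater must rely on a highly refractive, movable lens instead.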
The small difference in refractive index between water and the hydrated cornea means fish and amphibians need to bend the light more using the internal structures of the eye. Therefore, eyes that evolved in water have a mechanism involving changing the distance between a rigid, rounder, more refractive lens and the retina, using less uniform muscles, rather than subtly changing the shape of the lens itself using circularly arranged muscles.
Land-based animals and the shape-changing lens
Varying forms of direct experimental proof outlined in this article show that most non-aquatic vertebrates achieve focus, at least in part, by changing the shapes of their lenses.
What is less well understood is how the subtle, precise and very quick changes in lens shape are made. Direct experimental proof of any lens model is necessarily difficult, as the vertebrate lens is transparent and only functions well in the living animal. When considering vertebrates, aspects of all models may play varying roles in lens focus. The models can be broadly divided into two camps: those that stress the importance of external forces acting on a more passively elastic lens, and those that include forces that may be generated by the lens internally.
External forces
The model of a shape-changing lens in humans was proposed by Thomas Young in a lecture on 27 November 1800. Others, such as Hermann von Helmholtz and Thomas Henry Huxley, refined the model in the mid-1800s, explaining how the ciliary muscle contracts to round the lens for near focus. The model may be summarized like this. Normally the lens is held under tension by its suspending ligaments and capsule, which are pulled tight by the pressure of the eyeball. At short focal distance the ciliary muscle contracts, stretching the ciliary body and relieving some of the tension on the suspensory ligaments, allowing the elastic lens to become more spherical and increasing its refractive power. Changing focus to an object at a greater distance requires a thinner, less curved lens. This is achieved by relaxing some of the sphincter-like ciliary muscles, allowing the ciliary body to spring back, pulling harder on the lens and making it less curved and thinner, so increasing the focal distance. A problem with the Helmholtz model is that, despite attempts at mathematical modeling, no model has come close to working using only the Helmholtz mechanisms.
In 1992, Ronald Schachar proposed a model for land-based vertebrates that was not well received. The theory allows mathematical modeling to more accurately reflect the way the lens focuses, while also taking into account the complexities of the suspensory ligaments and the presence of radial as well as circular muscles in the ciliary body. In this model, the ligaments may pull to varying degrees on the lens at the equator via the radial muscles, while the ligaments offset from the equator to the front and back are relaxed to varying degrees by contracting the circular muscles. These multiple actions operating on the elastic lens allow it to change the shape of the lens front more subtly, not only changing focus but also correcting for lens aberrations that might otherwise result from the changing shape, while better fitting mathematical modeling.
The "catenary" model of lens focus proposed by Coleman demands less tension on the ligaments suspending the lens. Rather than the lens as a whole being stretched thinner for distance vision and allowed to relax for near focus, contraction of the circular ciliary muscles results in the lens having less hydrostatic pressure against its front. The lens front can then reform its shape between the suspensory ligaments, in a similar way to how a slack chain hanging between two poles changes its curve when the poles are moved closer together. This model requires precise fluid movement of the lens front only, rather than a change in the shape of the lens as a whole. While this concept may be involved in focusing, Scheimpflug photography has shown that the rear of the lens also changes shape in the living eye.
Internal forces
When Thomas Young proposed the changing of the human lens's shape as the mechanism for focal accommodation in 1801, he thought the lens may be a muscle capable of contraction. This type of model is termed intracapsular accommodation, as it relies on activity within the lens. In a 1911 Nobel lecture Allvar Gullstrand spoke on "How I found the intracapsular mechanism of accommodation", and this aspect of lens focusing continues to be investigated. Young spent time searching for the nerves that could stimulate the lens to contract, without success. Since that time it has become clear that the lens is not a simple muscle stimulated by a nerve, so the 1909 Helmholtz model took precedence. Pre-twentieth century investigators did not have the benefit of many later discoveries and techniques. Membrane proteins such as aquaporins, which allow water to flow into and out of cells, are the most abundant membrane proteins in the lens. Connexins, which allow electrical coupling of cells, are also prevalent. Electron microscopy and immunofluorescent microscopy show fiber cells to be highly variable in structure and composition. Magnetic resonance imaging confirms a layering in the lens that may allow for different refractive planes within it. The refractive index of the human lens varies from approximately 1.406 in the central layers down to 1.386 in less dense layers of the lens. This index gradient enhances the optical power of the lens. As more is learned about mammalian lens structure from in situ Scheimpflug photography, MRI and physiological investigations, it is becoming apparent that the lens itself is not responding entirely passively to the surrounding ciliary muscle, but may be able to change its overall refractive index through mechanisms involving water dynamics in the lens still to be clarified. The accompanying micrograph shows wrinkled fibers from a relaxed sheep lens after it is removed from the animal, indicating shortening of the lens fibers during near focus accommodation.
The age related changes in the human lens may also be related to changes in the water dynamics in the lens.
Human eyes
The young human eye can change focus from distance (infinity) to as near as 6.5 cm from the eye. This dramatic change in focal power of the eye of approximately 15 dioptres (the reciprocal of focal length in metres) occurs as a consequence of a reduction in zonular tension induced by ciliary muscle contraction. This process can occur in as little as 224 ± 30 milliseconds in bright light. The amplitude of accommodation declines with age. By the fifth decade of life the accommodative amplitude can decline so that the near point of the eye is more remote than the reading distance. When this occurs the patient is presbyopic. Once presbyopia occurs, those who are emmetropic (i.e., do not require optical correction for distance vision) will need an optical aid for near vision; those who are myopic (nearsighted and require an optical correction for distance or far vision), will find that they see better at near without their distance correction; and those who are hyperopic (farsighted) will find that they may need a correction for both distance and near vision. Note that these effects are most noticeable when the pupil is large; i.e. in dim light. The age-related decline in accommodation occurs almost universally to less than 2 dioptres by the time a person reaches 45 to 50 years, by which time most of the population will have noticed a decrease in their ability to focus on close objects and hence require glasses for reading or bifocal lenses. Accommodation decreases to about 1 dioptre at the age of 70 years. The dependency of accommodation amplitude on age is graphically summarized by Duane's classical curves.
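The ≈15 dioptre figure follows directly from the definition of the dioptre as the reciprocal of distance in metres; a minimal sketch (the helper function is illustrative, not a standard API):

```python
def accommodation_amplitude(near_m: float, far_m: float = float("inf")) -> float:
    """Accommodative amplitude in dioptres: 1/near - 1/far, distances in metres."""
    far_power = 0.0 if far_m == float("inf") else 1.0 / far_m
    return 1.0 / near_m - far_power

# Young eye: far point at infinity, near point at 6.5 cm.
print(f"{accommodation_amplitude(0.065):.1f} D")  # ~15.4 D
```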
Amplitude of accommodation
The amplitude of accommodation is a clinical measurement that describes the maximum potential increase in optical power that an eye can achieve in adjusting its focus. It refers to a certain range of object distances for which the retinal image is as sharply focused as possible. Amplitude of accommodation is measured during routine eye-examination. The closest that a normal eye can focus is typically about 10 cm for a child or young adult. Accommodation then decreases gradually with age, effectively finishing just after age fifty. The average amplitude of accommodation, in diopters, for a patient of a given age was estimated by Hofstetter in 1950 to be 18.5 − (0.30 × patient age in years) with the minimum amplitude of accommodation as 15 − (0.25 × age in years), and the maximum as 25 − (0.40 × age in years). However, Hofstetter's work was based on data from two early surveys which, although widely cited, used methodology with considerable inherent error. (Donders, Sheard, Duane, Turner for reference)
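Hofstetter's linear estimates can be written out directly. A sketch (the function name is illustrative; the coefficients are those quoted above):

```python
def hofstetter_amplitudes(age_years):
    """Hofstetter's (1950) estimates of accommodative amplitude, in dioptres."""
    return {
        "minimum": 15.0 - 0.25 * age_years,
        "average": 18.5 - 0.30 * age_years,
        "maximum": 25.0 - 0.40 * age_years,
    }

# At age 40, the expected average amplitude is 18.5 - 0.30 * 40 = 6.5 dioptres:
print(hofstetter_amplitudes(40)["average"])  # 6.5
```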
Theories on how humans focus
Helmholtz—The most widely held theory of accommodation is that proposed by Hermann von Helmholtz in 1855. When viewing a far object, the circularly arranged ciliary muscle relaxes allowing the lens zonules and suspensory ligaments to pull on the lens, flattening it. The source of the tension is the pressure that the vitreous and aqueous humours exert outwards onto the sclera. When viewing a near object, the ciliary muscles contract (resisting the outward pressure on the sclera) causing the lens zonules to slacken which allows the lens to spring back into a thicker, more convex, form.
Schachar—Ronald A. Schachar proposed in 1992 what has been called a "rather bizarre geometric theory" which claims that focus by the human lens is associated with increased tension on the lens via the equatorial zonules; that when the ciliary muscle contracts, equatorial zonular tension is increased, causing the central surfaces of the crystalline lens to steepen, the central thickness of the lens to increase (anterior-posterior diameter), and the peripheral surfaces of the lens to flatten. While the tension on equatorial zonules is increased during accommodation, the anterior and posterior zonules are simultaneously relaxing. The increased equatorial zonular tension keeps the lens stable and flattens the peripheral lens surface during accommodation. As a consequence, gravity does not affect the amplitude of accommodation and primary spherical aberration shifts in the negative direction during accommodation. The theory has not found much independent support.
Catenary—D. Jackson Coleman proposes that the lens, zonule and anterior vitreous comprise a diaphragm between the anterior and vitreous chambers of the eye. Ciliary muscle contraction initiates a pressure gradient between the vitreous and aqueous compartments that supports the anterior lens shape, producing a mechanically reproducible state with a steep radius of curvature in the center of the lens and slight flattening of the peripheral anterior lens, i.e. the shape, in cross section, of a catenary. The anterior capsule and the zonule form a trampoline-shaped or hammock-shaped surface that is totally reproducible depending on the circular dimensions, i.e. the diameter of the ciliary body (Müller's muscle). The ciliary body thus directs the shape like the pylons of a suspension bridge, but does not need to support an equatorial traction force to flatten the lens.
Induced effects of accommodation
When humans accommodate to a near object, they also converge their eyes and constrict their pupils. The combination of these three movements (accommodation, convergence and miosis) is under the control of the Edinger-Westphal nucleus and is referred to as the near triad, or accommodation reflex. While it is well understood that proper convergence is necessary to prevent diplopia, the functional role of the pupillary constriction remains less clear. Arguably, it may increase the depth of field by reducing the aperture of the eye, and thus reduce the amount of accommodation needed to bring the image in focus on the retina.
There is a measurable ratio between how much convergence takes place because of accommodation (the AC/A ratio) and how much accommodation takes place because of convergence (the CA/C ratio). Abnormalities in these ratios can lead to binocular vision problems.
Anomalies of accommodation described in humans
There are many types of accommodation anomalies. They can be broadly classified into two groups: decreased accommodation and increased accommodation. Decreased accommodation may occur due to physiological (presbyopia), pharmacological (cycloplegia) or pathological causes. Excessive accommodation and spasm of accommodation are types of increased accommodation.
Presbyopia
Presbyopia, the physiological insufficiency of accommodation due to age-related changes in the lens (decreased elasticity and increased hardness) and in ciliary muscle power, is the commonest form of accommodative dysfunction. It causes a gradual decrease in near vision.
Accommodative insufficiency
Accommodative insufficiency is the condition in which a person's amplitude of accommodation is lower than the physiological limits for their age. Premature sclerosis of the lens or ciliary muscle weakness due to systemic or local causes may produce accommodative insufficiency.
Accommodative insufficiency is further divided into several subtypes.
Ill-sustained accommodation
Ill-sustained accommodation is a condition similar to accommodative insufficiency. In this condition the range of accommodation is normal, but accommodative power decreases after excessive near work.
Paralysis of accommodation
In paralysis of accommodation, the amplitude of accommodation is either markedly reduced or completely absent (cycloplegia). It may occur due to ciliary muscle paralysis or oculomotor nerve paralysis. Parasympatholytic drugs like atropine also cause paralysis of accommodation.
Unequal accommodation
If the amplitude of accommodation of the two eyes differs by 0.5 dioptre or more, it is considered unequal. Organic disease, head trauma or functional amblyopia may be responsible for unequal accommodation.
Accommodative infacility
Accommodative infacility is also known as accommodative inertia. In this condition there is difficulty in changing accommodation from one point to another, such as in adjusting focus from distance to near. It is a comparatively rare condition.
Spasm of accommodation
Spasm of accommodation, also known as ciliary spasm, is a condition of abnormally excessive accommodation beyond the voluntary control of the person. Vision may be blurred due to induced pseudomyopia.
Accommodative excess
Accommodative excess occurs when an individual uses more than normal accommodation for performing certain near work. Modern definitions simply regard it as an inability to relax accommodation readily.
Aquatic animals
Aquatic animals include some that also thrive in the air, so focusing mechanisms vary more than in those that are only land-based. Some whales and seals are able to focus above and below water, having two areas of retina with high numbers of rods and cones rather than one as in humans. Having two high-resolution areas of retina presumably allows two axes of vision, one for above and one for below water. In reptiles and birds, the ciliary body, which supports the lens via suspensory ligaments, also touches the lens with a number of pads on its inner surface. These pads compress and release the lens to modify its shape while focusing on objects at different distances; the suspensory ligaments usually perform this function in mammals. In fish and amphibians, the lens is fixed in shape, and focusing is instead achieved by moving the lens forwards or backwards within the eye using a muscle called the retractor lentis.
In cartilaginous fish, the suspensory ligaments are replaced by a membrane, including a small muscle at the underside of the lens. This muscle pulls the lens forward from its relaxed position when focusing on nearby objects. In teleosts, by contrast, a muscle projects from a vascular structure in the floor of the eye, called the falciform process, and serves to pull the lens backwards from the relaxed position to focus on distant objects. While amphibians move the lens forward, as do cartilaginous fish, the muscles involved are not similar in either type of animal. In frogs, there are two muscles, one above and one below the lens, while other amphibians have only the lower muscle.
In the simplest vertebrates, the lampreys and hagfish, the lens is not attached to the outer surface of the eyeball at all. There is no aqueous humor in these fish, and the vitreous body simply presses the lens against the surface of the cornea. To focus its eyes, a lamprey flattens the cornea using muscles outside of the eye and pushes the lens backwards.
Though molluscs are not vertebrates, the convergent evolution of vertebrate and molluscan eyes merits brief mention here. The most complex molluscan eye is the cephalopod eye, which is superficially similar in structure and function to a vertebrate eye, including accommodation, while differing in basic ways such as having a two-part lens and no cornea. The fundamental requirements of optics must be met by all eyes with lenses using the tissues at their disposal, so superficially all such eyes tend to look similar. It is the way the optical requirements are met, using different cell types and structural mechanisms, that varies among animals.
| Biology and health sciences | Visual system | Biology |
1193370 | https://en.wikipedia.org/wiki/Gas%20laser | Gas laser | A gas laser is a laser in which an electric current is discharged through a gas to produce coherent light. The gas laser was the first continuous-light laser and the first laser to operate on the principle of converting electrical energy to a laser light output. The first gas laser, the Helium–neon laser (HeNe), was co-invented by Iranian engineer and scientist Ali Javan and American physicist William R. Bennett, Jr., in 1960. It produced a coherent light beam in the infrared region of the spectrum at 1.15 micrometres.
Types of gas laser
Gas lasers using many gases have been built and used for many purposes.
Carbon dioxide lasers, or CO2 lasers, can emit hundreds of kilowatts at 9.6 μm and 10.6 μm, and are often used in industry for cutting and welding. The efficiency of a CO2 laser is over 10%.
Carbon monoxide or "CO" lasers have the potential for very large outputs, but the use of this type of laser is limited by the toxicity of carbon monoxide gas. Human operators must be protected from this deadly gas. Furthermore, it is extremely corrosive to many materials including seals, gaskets, etc.
Helium–neon (HeNe) lasers can be made to oscillate at over 160 different wavelengths by adjusting the cavity Q to peak at the desired wavelength. This can be done by adjusting the spectral response of the mirrors or by using a dispersive element (Littrow prism) in the cavity. Units operating at 633 nm are very common in schools and laboratories because of their low cost and near-perfect beam qualities.
Nitrogen lasers operate in the ultraviolet range, typically at 337.1 nm, using molecular nitrogen as their gain medium, pumped by an electrical discharge.
TEA lasers are energized by a high voltage electrical discharge in a gas mixture generally at or above atmospheric pressure. The acronym "TEA" stands for Transversely Excited Atmospheric.
Chemical lasers
Chemical lasers are powered by a chemical reaction and can achieve high powers in continuous operation. For example, in the hydrogen fluoride laser (2.7–2.9 μm) and the deuterium fluoride laser (3.8 μm) the reaction is the combination of hydrogen or deuterium gas with combustion products of ethylene in nitrogen trifluoride. They were invented by George C. Pimentel.
Chemical lasers are powered by a chemical reaction permitting a large amount of energy to be released quickly. Such very high power lasers are especially of interest to the military. Further, continuous-wave chemical lasers at very high power levels, fed by streams of gases, have been developed and have some industrial applications.
Excimer lasers
Excimer lasers are powered by a chemical reaction involving an excited dimer, or excimer, which is a short-lived dimeric or heterodimeric molecule formed from two species (atoms), at least one of which is in an excited electronic state. They typically produce ultraviolet light, and are used in semiconductor photolithography and in LASIK eye surgery. Commonly used excimer molecules include F2 (fluorine, emitting at 157 nm), and noble gas compounds (ArF [193 nm], KrCl [222 nm], KrF [248 nm], XeCl [308 nm], and XeF [351 nm]).
Ion lasers
Argon-ion lasers emit light in the range 351–528.7 nm. Depending on the optics and the laser tube a different number of lines is usable but the most commonly used lines are 458 nm, 488 nm and 514.5 nm.
Metal-vapor lasers
Metal-vapor lasers are gas lasers that typically generate ultraviolet wavelengths. Helium-silver (HeAg) 224 nm, neon-copper (NeCu) 248 nm and helium-cadmium (HeCd) 325 nm are three examples. These lasers have particularly narrow oscillation linewidths of less than 3 GHz (500 femtometers), making them candidates for use in fluorescence-suppressed Raman spectroscopy.
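The quoted equivalence between a 3 GHz frequency linewidth and a wavelength linewidth of hundreds of femtometres follows from the standard conversion Δλ = λ²·Δν/c. A quick sketch (the helper name is illustrative; 224 nm is the HeAg line listed above):

```python
C = 299_792_458.0  # speed of light in m/s

def linewidth_wavelength_m(wavelength_m, linewidth_hz):
    """Convert a frequency linewidth to a wavelength linewidth: dlambda = lambda**2 * dnu / c."""
    return wavelength_m ** 2 * linewidth_hz / C

dl = linewidth_wavelength_m(224e-9, 3e9)
print(round(dl * 1e15))  # ~502 femtometres, consistent with the ~500 fm figure above
```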
The copper vapor laser, with two spectral lines of green (510.6 nm) and yellow (578.2 nm), is the most powerful laser with the highest efficiency in the visible spectrum.
Advantages
High volume of active material
Active material is relatively inexpensive
Almost impossible to damage the active material
Heat can be removed quickly from the cavity
Applications
He-Ne laser is mainly used in making holograms.
In laser printing He-Ne laser is used as a source for writing on the photosensitive material.
He-Ne lasers were used in reading Bar Codes, which are imprinted on products in stores. They have been largely replaced by laser diodes.
Nitrogen lasers and excimer laser are used in pulsed dye laser pumping.
Ion lasers, mostly argon, are used in CW dye laser pumping.
| Technology | Lasers | null |
1194227 | https://en.wikipedia.org/wiki/Black-backed%20jackal | Black-backed jackal | The black-backed jackal (Lupulella mesomelas), also called the silver-backed jackal, is a medium-sized canine native to eastern and southern Africa. These regions are separated by roughly 900 kilometers.
One region includes the southernmost tip of the continent, including South Africa, Namibia, Botswana and Zimbabwe. The other area is along the eastern coastline, including Kenya, Somalia, Djibouti, Eritrea, and Ethiopia. It is listed on the IUCN Red List as least concern due to its widespread range and adaptability, although it is still persecuted as a livestock predator and rabies vector.
Compared to members of the genus Canis, the black-backed jackal is a very ancient species, and has changed little since the Pleistocene, being the most basal wolf-like canine, alongside the closely related side-striped jackal. It is a fox-like animal with a reddish brown to tan coat and a black saddle that extends from the shoulders to the base of the tail. It is a monogamous animal, whose young may remain with the family to help raise new generations of pups. The black-backed jackal has a wide array of food sources, feeding on small to medium-sized animals, as well as plant matter and human refuse.
Taxonomy and evolution
Johann Christian Daniel von Schreber named Canis mesomelas in 1775. The species was later placed in the genus Lupulella, proposed by Hilzheimer in 1906.
The black-backed jackal has occupied eastern and southern Africa for at least 2–3 million years, as shown by fossil deposits in Kenya, Tanzania, and South Africa. Specimens from fossil sites in Transvaal are almost identical to their modern counterparts, but have slightly different nasal bones. As no fossils have been found north of Ethiopia, the species likely has always been sub-Saharan in distribution. The black-backed jackal is relatively unspecialised, and can thrive in a wide variety of habitats, including deserts, as its kidneys are well adapted for water deprivation. It is, however, more adapted to a carnivorous diet than the other jackals, as shown by its well-developed carnassial shear and the longer cutting blade of the premolars.
Juliet Clutton-Brock classed the black-backed jackal as being closely related to the side-striped jackal, based on cranial and dental characters. Studies on allozyme divergence within the Canidae indicate that the black-backed jackal and other members of the genus Canis are separated by a considerable degree of genetic distance. Further studies show a large difference in mitochondrial DNA sequences between black-backed jackals and other sympatric "jackal" species, consistent with divergence 2.3–4.5 million years ago.
A mitochondrial DNA (mtDNA) sequence alignment for the wolf-like canids gave a phylogenetic tree with the side-striped jackal and the black-backed jackal as the most basal members of the clade, indicating an African origin for the clade.
Because of this deep divergence between the black-backed jackal and the rest of the "wolf-like" canids, one author has proposed to change the species' generic name from Canis to Lupulella.
In 2017, jackal relationships were further explored, with an mtDNA study finding that the two black-backed jackal subspecies had diverged from each other 2.5 million years ago to form the southern African and eastern African populations. The study proposes that, due to this long separation, which is longer than the separation of the African golden wolf from the wolf lineage, the two subspecies might warrant separate species status.
In 2019, members of the IUCN SSC Canid Specialist Group recommended that the side-striped jackal (Canis adustus) and black-backed jackal (Canis mesomelas) should be placed in a distinct genus, Lupulella Hilzheimer, 1906, with the names Lupulella adusta and Lupulella mesomelas, because DNA evidence shows that they form a monophyletic lineage that sits outside of the Canis/Cuon/Lycaon clade.
The phylogenetic tree for the wolf-like canids may give conflicting positions for the black-backed jackal and the side-striped jackal relative to the genus Canis members depending on whether the genetic markers were based on mitochondrial DNA or nuclear DNA (from the cell's nucleus). The explanation proposed is that mitochondrial DNA introgression occurred from an ancient ancestor of Canis into the lineage that led to the black-backed jackal around 6.2–5.2 million years ago.
Subspecies
Two subspecies are recognised by MSW3. These subspecies are geographically separated by a gap which extends northwards from Zambia to Tanzania:
Description
The black-backed jackal is a fox-like canid with a slender body, long legs, and large ears. It is similar to the closely related side-striped jackal and more distantly related to the golden jackal, though its skull and dentition are more robust and the incisors much sharper. It weighs , stands at the shoulder, and measures in body length.
The base colour is reddish brown to tan, which is particularly pronounced on the flanks and legs. A black saddle intermixed with silvery hair extends from the shoulders to the base of the tail. A long, black stripe extending along the flanks separates the saddle from the rest of the body, and can be used to differentiate individuals. The tail is bushy and tipped with black. The lips, throat, chest, and inner surface of the limbs are white. The winter coat is a much deeper reddish brown. Albino specimens occasionally occur. The hair of the face measures 10–15 mm in length, and lengthens to 30–40 mm on the rump. The guard hairs of the back are 60 mm on the shoulder, decreasing to 40 mm at the base of the tail. The hairs of the tail are the longest, measuring 70 mm in length.
Behaviour
Social and territorial behaviours
The black-backed jackal is a monogamous and territorial animal, whose social organisation greatly resembles that of the golden jackal. However, the assistance of elder offspring in helping raise the pups of their parents has a greater bearing on pup survival rates than in the latter species. The basic social unit is a monogamous mated pair which defends its territory through laying faeces and urine on range boundaries. Scent marking is usually done in tandem, and the pair aggressively expels intruders. Such encounters are normally prevented, as the pair vocalises to advertise its presence in a given area. It is a highly vocal species, particularly in Southern Africa. Sounds made by the species include yelling, yelping, woofing, whining, growling, and cackling. It communicates with group members and advertises its presence by a high-pitched, whining howl, and expresses alarm through an explosive cry followed by shorter, high-pitched yelps. This sound is particularly frantic when mobbing a leopard. In areas where the black-backed jackal is sympatric with the African golden wolf, the species does not howl, instead relying more on yelps. In contrast, black-backed jackals in Southern Africa howl much like golden jackals. When trapped, it cackles like a fox.
Reproduction and development
The mating season takes place from late May to August, with a gestation period of 60 days. Pups are born from July to October. Summer births are thought to be timed to coincide with population peaks of vlei rats and four-striped grass mice, while winter births are timed for ungulate calving seasons. Litters consist of one to nine pups, which are born blind. For the first three weeks of their lives, the pups are kept under constant surveillance by their dam, while the sire and elder offspring provide food. The pups open their eyes after 8–10 days and emerge from the den at the age of 3 weeks. They are weaned at 8–9 weeks, and can hunt by themselves at the age of 6 months. Sexual maturity is attained at 11 months, though few black-backed jackals reproduce in their first year. Unlike golden jackals, which have comparatively amicable intrapack relationships, black-backed jackal pups become increasingly quarrelsome as they age, and establish more rigid dominance hierarchies. Dominant pups appropriate food, and become independent at an earlier age. The grown pups may disperse at one year of age, though some remain in their natal territories to assist their parents in raising the next generation of pups. The average lifespan in the wild is 7 years, though captive specimens can live twice as long.
Ecology
Habitat
The species generally shows a preference for open areas with little dense vegetation, though it occupies a wide range of habitats, from arid coastal deserts to areas with more than 2000 mm of rainfall. It also occurs in farmlands, savannas, open savanna mosaics, and alpine areas.
Diet
Black-backed jackals are omnivores. Their diet includes invertebrates, such as beetles, grasshoppers, crickets, termites, millipedes, spiders, and scorpions. Mammals eaten include rodents, hares, and young antelopes up to the size of topi calves. They also feed on carrion, birds, bird eggs, lizards and snakes. In coastal areas, they feed on beached marine mammals, seals, fish, and mussels. They also occasionally consume fruits and berries.
In South Africa, black-backed jackals frequently prey on antelopes (primarily impala and springbok and occasionally duiker, reedbuck, and steenbok), carrion, hares, hoofed livestock, insects, and rodents. They also prey on small carnivores, such as mongooses, polecats, and wildcats. On the coastline of the Namib Desert, jackals feed primarily on marine birds (mainly Cape and white-breasted cormorants and jackass penguins), marine mammals (including Cape fur seals), fish, and insects. In East Africa, during the dry season, they hunt the young of gazelles, impalas, topi, tsessebe, and warthogs. In Serengeti woodlands, they feed heavily on African grass rats.
A single jackal is capable of killing a healthy adult impala. Adult dik-diks and Thomson's gazelles seem to be the upper limit of their killing capacity, though they target larger species if those are sick, with one pair having been observed to harass a crippled bull rhinoceros. A pair of black-backed jackals in the Kalahari desert was observed to kill a kori bustard, and on a separate occasion, a black mamba by prolonged harassment of the snake and crushing of the snake's head. They typically kill tall prey by biting at the legs and loins, and frequently go for the throat. Like most canids, the black-backed jackal caches surplus food.
The jackals sniff out the ripe melon fruits of the ǃnaras, a leafless, spined, drought-resilient plant, using their jaws to bite through the fruits' tough skins. "The chewing molars of canids make them ideal agents for endozoochorous dispersal of large seeds." Such dispersal is long-distance, on the scale of their home ranges (7–15.9 km). The jackals urinate on buried fruits and later return to them; this is suggested either to mark ownership or to mask the fruits' smell from rival jackals. Seeds from their droppings germinate better than those extracted directly from ripe fruit. While other carnivores eat other fruits, this seems to be the first case where they might be a plant's primary dispersers.
Enemies and competitors
In areas where the black-backed jackal is sympatric with the larger side-striped jackal, the former species aggressively drives the latter out of grassland habitats into woodlands. This is unique among carnivores, as larger species commonly displace smaller ones. Black-backed jackal pups are vulnerable to African wolves, honey badgers, spotted hyenas and brown hyenas. Adults have few natural predators, save for leopards and African wild dogs, though there are some reports that martial eagles prey on both juveniles and adults.
Diseases and parasites
Black-backed jackals can carry diseases such as rabies, canine parvovirus, canine distemper, canine adenovirus, Ehrlichia canis, and African horse sickness. Jackals in Etosha National Park may carry anthrax. Black-backed jackals are major rabies vectors, and have been associated with epidemics, which appear to cycle every 4–8 years. Jackals in Zimbabwe are able to maintain rabies independently of other species. Although oral vaccinations are effective in jackals, the long-term control of rabies continues to be a problem in areas where stray dogs are not given the same immunisation.
Jackals may also carry trematodes such as Athesmia, cestodes such as Dipylidium caninum, Echinococcus granulosus, Joyeuxialla echinorhyncoides, J. pasqualei, Mesocestoides lineatus, Taenia erythraea, T. hydatigena, T. jackhalsi, T. multiceps, T. pungutchui, and T. serialis. Nematodes carried by black-backed jackals include Ancylostoma braziliense, A. caninum, A. martinaglia, A. somaliense, A. tubaeforme, and Physaloptera praeputialis, and protozoans such as Babesia canis, Ehrlichia canis, Hepatozoon canis, Rickettsia canis, Sarcocytis spp., Toxoplasma gondii, and Trypanosoma congolense. Mites may cause sarcoptic mange. Tick species include Amblyomma hebraeum, A. marmoreum, A. nymphs, A. variegatum, Boophilus decoloratus, Haemaphysalis leachii, H. silacea, H. spinulosa, Hyelomma spp., Ixodes pilosus, I. rubicundus, Rhipicephalus appendiculatus, R. evertsi, R. sanguineus, and R. simus. Flea species include Ctenocephalides cornatus, Echidnophaga gallinacea, and Synosternus caffer.
Relationships with humans
In folklore
Black-backed jackals feature prominently in the folklore of the Khoikhoi, where it is often paired with the lion, whom it frequently outsmarts or betrays with its superior intelligence. One story explains that the black-backed jackal gained its dark saddle when it offered to carry the Sun on its back. An alternative account comes from the ǃKung people, whose folklore tells that the jackal received the burn on its back as a punishment for its scavenging habits. According to an ancient Ethiopian folktale, jackals and man first became enemies shortly before the Great Flood, when Noah initially refused to allow jackals into Noah's Ark, thinking they were unworthy of being saved, until being commanded by God to do so.
Livestock predation
Black-backed jackals occasionally hunt domestic animals, including dogs, cats, pigs, goats, sheep, and poultry, with sheep tending to predominate. They rarely target cattle, though cows giving birth may be attacked. Jackals can be a serious problem for sheep farmers, particularly during the lambing season. Sheep losses to black-backed jackals in a 440 km2 study area in KwaZulu-Natal consisted of 0.05% of the sheep population. Of 395 sheep killed in a sheep farming area in KwaZulu-Natal, 13% were killed by jackals. Jackals usually kill sheep with a throat bite, and begin feeding by opening the flank and consuming the flesh and skin of the flank, heart, liver, some ribs, haunch of hind leg, and sometimes the stomach and its contents. In older lambs, the main portions eaten are usually heart and liver. Usually, only one lamb per night is killed in any one place, but sometimes two and occasionally three may be killed. The oral history of the Khoikhoi indicates they have been a nuisance to pastoralists long before European settlement. South Africa has been using fencing systems to protect sheep from jackals since the 1890s, though such measures have mixed success, as the best fencing is expensive, and jackals can easily infiltrate cheap wire fences.
Hunting
Due to livestock losses to jackals, many hunting clubs were opened in South Africa in the 1850s. Black-backed jackals have never been successfully eradicated in hunting areas, despite strenuous attempts to do so with dogs, poison, and gas. Black-backed jackal coursing was first introduced to the Cape Colony in the 1820s by Lord Charles Somerset, who as an avid fox hunter, sought a more effective method of managing jackal populations, as shooting proved ineffective. Coursing jackals also became a popular pastime in the Boer Republics. In the western Cape in the early 20th century, dogs bred by crossing foxhounds, lurchers, and borzoi were used.
Spring traps with metal jaws were also effective, though poisoning by strychnine became more common by the late 19th century. Strychnine poisoning was initially problematic, as the solution had a bitter taste, and could only work if swallowed. Consequently, many jackals learned to regurgitate poisoned baits, thus inciting wildlife managers to use the less detectable crystal strychnine rather than liquid. The poison was usually placed within sheep carcasses or in balls of fat, with great care being taken to avoid leaving any human scent on them. Black-backed jackals were not a popular quarry in the 19th century, and are rarely mentioned in hunter's literature. By the turn of the century, jackals became increasingly popular quarry as they encroached upon human habitations after sheep farming and veld burning diminished their natural food sources. Although poisoning had been effective in the late 19th century, its success rate in eliminating jackals waned in the 20th century, as jackals seemed to be learning to distinguish poisoned foods.
The Tswana people often made hats and cloaks out of black-backed jackal skins. Between 1914 and 1917, 282,134 jackal pelts (nearly 50,000 a year) were produced in South Africa. Demand for pelts grew during the First World War, and were primarily sold in Cape Town and Port Elizabeth. Jackals in their winter fur were in great demand, though animals killed by poison were less valued, as their fur would shed.
| Biology and health sciences | Canines | Animals |
1194257 | https://en.wikipedia.org/wiki/Gomphothere | Gomphothere | Gomphotheres are an extinct group of proboscideans related to modern elephants. First appearing in Africa during the Oligocene, they dispersed into Eurasia and North America during the Miocene and arrived in South America during the Pleistocene as part of the Great American Interchange. Gomphotheres are a paraphyletic group ancestral to Elephantidae, which contains modern elephants, as well as Stegodontidae.
While the most famous forms, such as Gomphotherium, had long lower jaws with tusks, the ancestral condition for the group, some later members developed shortened (brevirostrine) lower jaws with either vestigial or no lower tusks and outlasted the long-jawed gomphotheres. This change made them look very similar to modern elephants, an example of parallel evolution. During the Pliocene and Early Pleistocene, the diversity of gomphotheres declined, and they ultimately became extinct outside of the Americas. The last two genera, Cuvieronius, ranging from southern North America to western South America, and Notiomastodon, ranging over most of South America, continued to exist until the end of the Pleistocene around 12,000 years ago, when they became extinct along with many other megafauna species following the arrival of humans.
The name "gomphothere" comes from Ancient Greek roots meaning "peg, pin; wedge; joint" and "beast".
Description
Gomphotheres differed from elephants in their tooth structure, particularly the chewing surfaces on the molar teeth. The teeth are considered to be bunodont, that is, having rounded rather than sharp cusps. They are thought to have chewed differently from modern elephants, using an oblique movement (combining back-to-front and side-to-side motion) over the teeth rather than the proal movement (a forwards stroke from the back to the front of the lower jaws) used by modern elephants and stegodontids, with this oblique movement being combined with vertical (orthal) motion that served to crush food. Like modern elephants and other members of Elephantimorpha, gomphotheres had horizontal tooth replacement, where teeth would progressively migrate towards the front of the jaws before being replaced by more posterior teeth. Unlike modern elephants, many gomphotheres retained permanent premolar teeth, though these were absent in some gomphothere genera.
Early gomphotheres had lower jaws with an elongate (longirostrine) mandibular symphysis (the front-most part of the lower jaw) and lower tusks, the primitive condition for members of Elephantimorpha. Later members developed shortened (brevirostrine) lower jaws and/or vestigial or no lower tusks, a convergent process that occurred multiple times among gomphotheres, as well as other members of Elephantimorpha. In Gomphotheriidae, these elongate mandibular symphyses tend to be narrow, while the lower tusks tend to be club-shaped. While the musculature of the trunk of longirostrine gomphotheres was likely very similar to that of living elephants, the trunk was likely shorter (probably no longer than the tips of the lower tusks), and rested upon the elongate lower jaw, though the trunks of later brevirostrine gomphotheres were likely free hanging and comparable to those of living elephants in length. The lower tusks and long lower jaws of primitive gomphotheres were likely used for cutting vegetation, with a secondary contribution in acquiring food using the trunk, while brevirostrine gomphotheres relied primarily on their trunks to acquire food, similar to modern elephants. The upper tusks of primitive longirostrine gomphotheres typically curve gently downwards, and generally do not exceed in length and in weight, though some later brevirostrine gomphotheres developed considerably larger upper tusks. Upper tusks of brevirostrine gomphotheres include both straight and upwardly curved forms.
Most gomphotheres reached sizes equivalent to those of the modern Asian elephant (Elephas maximus), though some reached sizes comparable to or somewhat exceeding those of African bush elephants (Loxodonta africana). The limb bones of gomphotheres, like those of mammutids, are generally more robust than those of elephantids, with the legs also tending to be proportionally shorter. Their bodies also tend to be proportionally more elongate than those of living elephants, resulting in gomphotheres being heavier than an elephant of the same shoulder height.
Taxonomy
"Gomphotheres" are assigned to their own family, Gomphotheriidae, but are widely agreed to be a paraphyletic group. The families Choerolophodontidae and Amebelodontidae (the latter of which includes "shovel tuskers" with flattened lower tusks like Platybelodon) are sometimes considered gomphotheres sensu lato, though some authors argue that Amebelodontidae should be sunk into Gomphotheriidae. Gomphotheres are divided into two informal groups, "trilophodont gomphotheres", and "tetralophodont gomphotheres". "Tetralophodont gomphotheres" are distinguished from "trilophodont gomphotheres" by the presence of four ridges on the fourth premolar and on the first and second molars, rather than the three present in trilophodont gomphotheres. Some authors choose to exclude "tetralophodont gomphotheres" from Gomphotheriidae, and instead assign them to the group Elephantoidea. "Tetralophodont gomphotheres" are thought to have evolved from "trilophodont gomphotheres", and are suggested to be ancestral to Elephantidae, the group which contains modern elephants, as well as Stegodontidae.
While the North American long-jawed proboscideans Gnathabelodon, Eubelodon and Megabelodon have been assigned to Gomphotheriidae in some studies, other studies suggest that they should be assigned to Amebelodontidae (Eubelodon, Megabelodon) or Choerolophodontidae (Gnathabelodon).
Cladogram of Elephantimorpha after Li et al. 2023, showing a paraphyletic Gomphotheriidae.
Ecology
Gomphotheres are generally supposed to have been flexible feeders, with the various species having differing browsing, mixed feeding and grazing diets, the dietary preference of individual species and populations being shaped by local factors such as climatic conditions and competition. Analysis of the tusks of a male Notiomastodon individual suggests that it underwent musth, similar to modern elephants. Notiomastodon is also suggested to have lived in social family groups, like modern elephants.
Evolutionary history
Gomphotheres originated in Afro-Arabia during the mid-Oligocene, with remains from the Shumaysi Formation in Saudi Arabia dating to around 29–28 million years ago. Gomphotheres were uncommon in Afro-Arabia during the Oligocene. They arrived in Eurasia after the connection of Afro-Arabia and Eurasia during the Early Miocene, around 19 million years ago, in what is termed the "Proboscidean Datum Event". Gomphotherium arrived in North America around 16 million years ago, and is suggested to be the ancestor of later New World gomphothere genera. "Trilophodont gomphotheres" dramatically declined during the Late Miocene, likely due to the expansion of C4 grass-dominated habitats, while during the same period "tetralophodont gomphotheres" were abundant and widespread in Eurasia, where they represented the dominant group of proboscideans. All trilophodont gomphotheres, with the exception of the Asian Sinomastodon, became extinct in Eurasia by the beginning of the Pliocene, along with the global extinction of the "shovel tusker" amebelodontids. The last gomphotheres in Africa, represented by the "tetralophodont gomphothere" genus Anancus, became extinct around the end of the Pliocene and beginning of the Pleistocene. The New World gomphothere genera Notiomastodon and Cuvieronius dispersed into South America during the Pleistocene, around or after 2.5 million years ago, as part of the Great American Biotic Interchange following the formation of the Isthmus of Panama, becoming widespread across the continent. The last gomphothere native to Europe, Anancus arvernensis, became extinct during the Early Pleistocene, around 1.6–2 million years ago. Sinomastodon became extinct at the end of the Early Pleistocene, around 800,000 years ago. From the latter half of the Early Pleistocene onwards, gomphotheres were extirpated from most of North America, likely due to competition with mammoths and mastodons.
The extinction of gomphotheres in Afro-Eurasia has generally been supposed to be the result of the expansion of Elephantidae and Stegodon, the morphology of elephantid molars being more efficient than that of gomphotheres at consuming grass, which became more abundant during the Pliocene and Pleistocene epochs. In southern North America, Central America and South America, gomphotheres did not become extinct until shortly after the arrival of humans in the Americas, approximately 12,000 years ago, as part of the Late Pleistocene extinctions of most large mammals across the Americas. Bones of the last gomphothere genera, Cuvieronius and Notiomastodon, dating to shortly before their extinction have been found associated with human artifacts, suggesting that hunting may have played a role in their extinction.
| Biology and health sciences | Proboscidea | Animals |
1194393 | https://en.wikipedia.org/wiki/Roadrunner | Roadrunner | The roadrunners (genus Geococcyx), also known as chaparral birds or chaparral cocks, are two species of fast-running ground cuckoos with long tails and crests. They are found in the southwestern and south-central United States, Mexico and Central America, usually in the desert. Although capable of flight, roadrunners generally run away from predators. On the ground, some have been measured at .
Species
The subfamily Neomorphinae, the New World ground cuckoos, includes 11 species of birds, while the genus Geococcyx has just two.
Morphology
The roadrunner generally ranges in size from tail to beak. The average weight is about . The roadrunner is a large, slender, black-brown and white-streaked ground bird with a distinctive head crest. It has long legs, strong feet, and an oversized dark bill. The tail is broad with white tips on the three outer tail feathers. The bird has a bare patch of skin behind each eye; this patch shades from blue anteriorly to red posteriorly. The lesser roadrunner is slightly smaller, not as streaky, and has a smaller bill. Both the lesser roadrunner and the greater roadrunner leave behind very distinct "X" track marks, appearing as if they are travelling in both directions.
Roadrunners and other members of the cuckoo family have zygodactyl feet. The roadrunner can run at speeds of up to and generally prefers sprinting to flying, though it will fly to escape predators. During flight, the short, rounded wings reveal a white crescent in the primary feathers.
Vocalization
The roadrunner has a slow and descending dove-like "coo". It also makes a rapid, vocalized clattering sound with its beak.
Geographic range
Roadrunners inhabit the southwestern United States, extending to parts of Missouri, Arkansas, and Louisiana, as well as Mexico and Central America. They live in arid lowland or mountainous shrubland or woodland. They are non-migratory, staying in their breeding area year-round. The greater roadrunner is not currently considered threatened in the US, but is habitat-limited.
Food and foraging habits
The roadrunner is an opportunistic omnivore. Its diet normally consists of insects (such as grasshoppers, crickets, caterpillars, and beetles), small reptiles (such as lizards and snakes, including rattlesnakes), rodents and other small mammals, spiders (including tarantulas), scorpions, centipedes, snails, small birds (and nestlings), eggs, and fruits and seeds like those from prickly pear cactuses and sumacs. The lesser roadrunner eats mainly insects. The roadrunner forages on the ground and, when hunting, usually runs after prey from under cover. It may leap to catch insects, and commonly batters certain prey against the ground. The roadrunner is one of the few animals that preys upon rattlesnakes; it is also the only real predator of tarantula hawk wasps.
Behavior and breeding
The roadrunner usually lives alone or in pairs. Breeding pairs are monogamous and mate for life, and pairs may hold a territory all year. During the courtship display, the male bows, alternately lifting and dropping his wings and spreading his tail. He parades in front of the female with his head high and his tail and wings drooped, and may bring an offering of food. The reproductive season is spring to mid-summer (depending on geographic location and species).
The roadrunner's nest is often composed of sticks, and may sometimes contain leaves, feathers, snakeskins, or dung. It is commonly placed above ground level in a low tree, bush, or cactus. Roadrunner eggs are generally white. The greater roadrunner generally lays 2–6 eggs per clutch, but the lesser roadrunner's clutches are typically smaller. Hatching is asynchronous. Both sexes incubate the nest (with males incubating the nest at night) and feed the hatchlings. For the first one to two weeks after the young hatch, one parent remains at the nest. The young leave the nest at two to three weeks old, foraging with parents for a few days after.
Thermoregulation
During the cold desert night, the roadrunner lowers its body temperature slightly, going into a slight torpor to conserve energy. To warm itself during the day, the roadrunner exposes dark patches of skin on its back to the sun.
Indigenous lore
The Hopi and other Pueblo tribes believed roadrunners were medicine birds, capable of warding off evil spirits. The X-shaped footprints of roadrunners were seen as sacred symbols, believed to confuse evil spirits by concealing the bird's direction of travel. Stylized roadrunner tracks have been found in the rock art of ancestral Southwestern tribes like the Mogollon cultures. Roadrunner feathers were used to decorate Pueblo cradleboards for spiritual protection. Among Mexican Indian and American Indian tribes, such as the Pima, seeing a roadrunner is considered good luck. While some Mexican tribes revered the roadrunner and never killed it, most used its meat as a folk remedy for illness or to boost stamina and strength.
Central American Indigenous peoples have various beliefs about the roadrunner. The Ch’orti’, known to call it t’unk’u’x or mu’, have taboos against harming the bird. The Ch'ol Maya believe roadrunners possess special powers, calling it ajkumtz’u’ due to its call, which is believed to induce tiredness in listeners.
The word for roadrunner in the O'odham language is , which is the name of a transit center in Tucson, Arizona. In the O'odham tradition, the roadrunner is also credited with bringing fire to the people.
In media
The roadrunner is the state bird of New Mexico. The roadrunner was made popular by the Warner Bros. cartoon characters Wile E. Coyote and the Road Runner, created in 1949, and the subject of a long-running series of theatrical cartoon shorts. In each episode, the cunning, insidious, and constantly hungry Wile E. Coyote repeatedly attempts to catch and subsequently eat the Road Runner, but is never successful. The cartoons led to a misconception that the call of the roadrunner is "meep, meep" because the roadrunner in this cartoon series made that sound instead of the aforementioned sound of a real roadrunner. In some shorts, the Road Runner makes a noise while sticking his tongue out at Wile E. Coyote, which resembles its actual call. The cartoons rely on a misconception that a roadrunner is much faster than a coyote. In fact, a coyote's fastest sprinting speed is , which is twice that of a roadrunner's at .
| Biology and health sciences | Cuculiformes and relatives | Animals |
1195100 | https://en.wikipedia.org/wiki/Giant%20Gippsland%20earthworm | Giant Gippsland earthworm | The giant Gippsland earthworm (Megascolides australis) is one of Australia's 1,000 native earthworm species.
Description
These giant earthworms average long and in diameter and can reach in length; however, their body is able to expand and contract making them appear much larger. On average they weigh about . They have a dark purple head and a blue-grey body, and about 300 to 400 body segments.
Ecology
They live in the subsoil of blue, grey or red clay soils along stream banks and some south- or west-facing hills of their remaining habitat, which is in Gippsland in Victoria, Australia. These worms live in deep burrow systems and require water in their environment to respire. They have relatively long life spans for invertebrates and can take 5 years to reach maturity. The reproductive period of the giant Gippsland earthworm mainly spans from September to December. They breed in the warmer months and produce egg capsules that are to in length, which are laid in their burrows. The eggs hatch after about 12 months, and the young are around long at birth.
Unlike most earthworms which deposit castings on the surface, they spend almost all their time in burrows about in depth and deposit their castings there, and can generally only be flushed out by heavy rain. They eat organic matter as well as bacteria and fungi, which may have allowed them to better adapt to the change from a forest to pasture living area. They are usually very sluggish, but when they move rapidly through their burrows, it can cause an audible gurgling or sucking sound which allows them to be detected.
Threatened status
Gippsland earthworm colonies are small and isolated, and the species' low reproductive rates and slow maturation make those small populations vulnerable. Their natural habitats are grasslands, and while they can survive beneath pastures, cultivation, heavy cattle grazing and effluent run-off are harmful to the species. The Gippsland earthworm requires moist loamy soil to thrive; dense tree planting negatively affects soil humidity, which in turn negatively affects the species' habitat. No successful breeding has yet been achieved in captivity.
Education
Until it closed in 2012 amid animal welfare concerns, Wildlife Wonderland Park near Bass, Victoria, was home to the Giant Earthworm Museum. Inside the worm-shaped museum, visitors could crawl through a magnified replica of a worm burrow and a simulated worm's stomach. Displays and educational material on the giant Gippsland earthworm and other natural history of Gippsland were also featured.
Tourism
Interest in the giant Gippsland earthworm has been exploited by the local tourist industry with an annual Karmai Festival in Korumburra. In the Boonwurrung language it is said to have been called karmai.
| Biology and health sciences | Lophotrochozoa | Animals |
1195294 | https://en.wikipedia.org/wiki/Degree%20%28angle%29 | Degree (angle) | A degree (in full, a degree of arc, arc degree, or arcdegree), usually denoted by ° (the degree symbol), is a measurement of a plane angle in which one full rotation is 360 degrees.
It is not an SI unit—the SI unit of angular measure is the radian—but it is mentioned in the SI brochure as an accepted unit. Because a full rotation equals 2π radians, one degree is equivalent to π/180 radians.
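The degree–radian relationship is a fixed ratio (a full turn is 360° and 2π rad), which can be sketched numerically; the helper names below are illustrative, not from the article:

```python
import math

def deg_to_rad(degrees: float) -> float:
    """Convert degrees to radians: multiply by pi/180."""
    return degrees * math.pi / 180.0

def rad_to_deg(radians: float) -> float:
    """Convert radians to degrees: multiply by 180/pi."""
    return radians * 180.0 / math.pi

# A full rotation of 360 degrees equals 2*pi radians,
# and one degree equals pi/180 (about 0.01745) radians.
assert math.isclose(deg_to_rad(360.0), 2 * math.pi)
assert math.isclose(deg_to_rad(1.0), math.pi / 180.0)
assert math.isclose(rad_to_deg(math.pi), 180.0)
```

Note the use of `math.isclose` rather than exact equality: the conversions are exact as ratios, but floating-point rounding can introduce tiny errors.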
History
The original motivation for choosing the degree as a unit of rotations and angles is unknown. One theory states that it is related to the fact that 360 is approximately the number of days in a year. Ancient astronomers noticed that the sun, which follows the ecliptic path over the course of the year, seems to advance in its path by approximately one degree each day. Some ancient calendars, such as the Persian calendar and the Babylonian calendar, used 360 days for a year. The use of a calendar with 360 days may be related to the use of sexagesimal numbers.
Another theory is that the Babylonians subdivided the circle using the angle of an equilateral triangle as the basic unit, and further subdivided the latter into 60 parts following their sexagesimal numeric system. The earliest trigonometry, used by the Babylonian astronomers and their Greek successors, was based on chords of a circle. A chord of length equal to the radius made a natural base quantity. One sixtieth of this, using their standard sexagesimal divisions, was a degree.
Aristarchus of Samos and Hipparchus seem to have been among the first Greek scientists to exploit Babylonian astronomical knowledge and techniques systematically. Timocharis, Aristarchus, Aristillus, Archimedes, and Hipparchus were the first Greeks known to divide the circle into 360 degrees of 60 arc minutes. Eratosthenes used a simpler sexagesimal system dividing a circle into 60 parts.
Another motivation for choosing the number 360 may have been that it is readily divisible: 360 has 24 divisors, making it one of only 7 numbers such that no number less than twice as much has more divisors. Furthermore, it is divisible by every number from 1 to 10 except 7. This property has many useful applications, such as dividing the world into 24 time zones, each of which is nominally 15° of longitude, to correlate with the established 24-hour day convention.
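The divisibility claims above are easy to verify by brute force; this small sketch (with an illustrative helper name) counts divisors and checks both the 24-divisor figure and the "every number from 1 to 10 except 7" property:

```python
def divisors(n: int) -> list[int]:
    """Return all positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

# 360 = 2^3 * 3^2 * 5 has (3+1)(2+1)(1+1) = 24 divisors.
assert len(divisors(360)) == 24

# Divisible by 1..10 except 7.
assert all(360 % k == 0 for k in (1, 2, 3, 4, 5, 6, 8, 9, 10))
assert 360 % 7 != 0

# No number below 2 * 360 = 720 has more than 24 divisors.
assert max(len(divisors(m)) for m in range(1, 720)) == 24
```

The last assertion checks the stronger claim in the text: among all numbers less than twice 360, none has more divisors than 360 itself.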
Finally, it may be the case that more than one of these factors has come into play. According to that theory, the number is approximately 365 because of the apparent movement of the sun against the celestial sphere, and that it was rounded to 360 for some of the mathematical reasons cited above.
Subdivisions
For many practical purposes, a degree is a small enough angle that whole degrees provide sufficient precision. When this is not the case, as in astronomy or for geographic coordinates (latitude and longitude), degree measurements may be written using decimal degrees (DD notation); for example, 40.1875°.
Alternatively, the traditional sexagesimal unit subdivisions can be used: one degree is divided into 60 minutes (of arc), and one minute into 60 seconds (of arc). Use of degrees-minutes-seconds is also called DMS notation. These subdivisions, also called the arcminute and arcsecond, are represented by a single prime (′) and double prime (″) respectively. For example, 40.1875° = 40° 11′ 15″. Additional precision can be provided using decimal fractions of an arcsecond.
Maritime charts are marked in degrees and decimal minutes to facilitate measurement; 1 minute of latitude is 1 nautical mile. The example above would be given as 40° 11.25′ (commonly written as 11′25 or 11′.25).
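Both subdivisions are simple base-60 splits of the fractional part; the sketch below (illustrative function names) converts decimal degrees to DMS and to the degrees-plus-decimal-minutes form used on charts:

```python
def dd_to_dms(dd: float) -> tuple[int, int, float]:
    """Split decimal degrees into (degrees, arcminutes, arcseconds)."""
    degrees = int(dd)
    minutes_full = abs(dd - degrees) * 60  # fractional degree -> minutes
    minutes = int(minutes_full)
    seconds = (minutes_full - minutes) * 60  # fractional minute -> seconds
    return degrees, minutes, seconds

def dd_to_deg_decimal_min(dd: float) -> tuple[int, float]:
    """Split decimal degrees into (degrees, decimal minutes), chart style."""
    degrees = int(dd)
    return degrees, abs(dd - degrees) * 60

# The article's example: 40.1875 deg = 40 deg 11' 15" = 40 deg 11.25'
assert dd_to_dms(40.1875) == (40, 11, 15.0)
assert dd_to_deg_decimal_min(40.1875) == (40, 11.25)
```

The example value 40.1875 is exactly representable in binary floating point (0.1875 = 3/16), so the equality comparisons here are exact; for arbitrary coordinates a tolerance would be needed.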
The older system of thirds, fourths, etc., which continues the sexagesimal unit subdivision, was used by al-Kashi and other ancient astronomers, but is rarely used today. These subdivisions were denoted by writing the Roman numeral for the number of sixtieths in superscript: 1I for a "prime" (minute of arc), 1II for a second, 1III for a third, 1IV for a fourth, etc. Hence, the modern symbols for the minute and second of arc, and the word "second" also refer to this system.
SI prefixes can also be applied as in, e.g., millidegree, microdegree, etc.
Alternative units
In most mathematical work beyond practical geometry, angles are typically measured in radians rather than degrees. This is for a variety of reasons; for example, the trigonometric functions have simpler and more "natural" properties when their arguments are expressed in radians. These considerations outweigh the convenient divisibility of the number 360. One complete turn (360°) is equal to 2π radians, so 180° is equal to π radians, or equivalently, the degree is a mathematical constant: 1° = π/180.
One turn (corresponding to a cycle or revolution) is equal to 360°.
With the invention of the metric system, based on powers of ten, there was an attempt to replace degrees by decimal "degrees" in France and nearby countries, in which a right angle equals 100 gon, with 400 gon in a full circle (1° = 10⁄9 gon). This was called the grade or grad. Due to confusion with the existing term grad(e) in some northern European countries (meaning a standard degree, 1⁄360 of a turn), a distinct name for the new unit was used in German (with the "old" degree referred to by a separate term), likewise in Danish, Swedish and Norwegian (also gradian), and in Icelandic. To end the confusion, the name gon was later adopted for the new unit. Although this idea of metrification was abandoned by Napoleon, grades continued to be used in several fields and many scientific calculators support them. Decigrades (1⁄4,000 of a turn) were used with French artillery sights in World War I.
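Since a full circle is 400 gon and 360°, degree–gon conversion is a fixed 10:9 ratio; a minimal sketch with illustrative helper names:

```python
def deg_to_gon(degrees: float) -> float:
    """Convert degrees to gon (gradians): 400 gon per full circle."""
    return degrees * 400.0 / 360.0

def gon_to_deg(gon: float) -> float:
    """Convert gon (gradians) to degrees: 360 degrees per 400 gon."""
    return gon * 360.0 / 400.0

# A right angle is 90 deg = 100 gon; a full circle is 400 gon = 360 deg.
assert deg_to_gon(90.0) == 100.0
assert gon_to_deg(400.0) == 360.0
```

The decimal appeal of the gon is visible here: a right angle is a round 100 gon, so quadrant arithmetic needs no factor of 90.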
An angular mil, which is most used in military applications, has at least three specific variants, ranging from to . It is approximately equal to one milliradian. A mil measuring of a revolution originated in the imperial Russian army, where an equilateral chord was divided into tenths to give a circle of 600 units. This may be seen on a lining plane (an early device for aiming indirect-fire artillery) dating from about 1900 in the St. Petersburg Museum of Artillery.
| Physical sciences | Angle | null |
1195462 | https://en.wikipedia.org/wiki/Environmental%20studies | Environmental studies | Environmental studies (EVS or EVST) is a multidisciplinary academic field which systematically studies human interaction with the environment. Environmental studies connects principles from the physical sciences, commerce/economics, the humanities, and social sciences to address complex contemporary environmental issues. It is a broad field of study that includes the natural environment, the built environment, and the relationship between them. The field encompasses study in basic principles of ecology and environmental science, as well as associated subjects such as ethics, geography, anthropology, public policy (environmental policy), education, political science (environmental politics), urban planning, law, economics, philosophy, sociology and social justice, planning, pollution control, and natural resource management. There are many Environmental Studies degree programs, including a Master's degree and a Bachelor's degree. Environmental Studies degree programs provide a wide range of skills and analytical tools needed to face the environmental issues of our world head on. Students in Environmental Studies gain the intellectual and methodological tools to understand and address the crucial environmental issues of our time and the impact of individuals, society, and the planet. Environmental education's main goal is to instill in all members of society a pro-environmental thinking and attitude. This will help to create environmental ethics and raise people's awareness of the importance of environmental protection and biodiversity.
History
The New York State College of Forestry at Syracuse University established a BS in environmental studies degree in the 1950s, awarding its first degree in 1956. Middlebury College established the major there in 1965.
The Environmental Studies Association of Canada (ESAC) was established in 1993 "to further research and teaching activities in areas related to environmental studies in Canada". ESAC was officially integrated in 1994, and the first convention for ESAC was held at the Learned Societies Conference in Calgary the same year. ESAC's magazine, A\J: Alternatives Journal was first published by Robert A. Paehlke on 4 July 1971.
In 2008, the Association for Environmental Studies and Sciences (AESS) was founded as the first professional association in the interdisciplinary field of environmental studies in the United States. AESS publishes the Journal of Environmental Studies and Sciences (JESS), which aims to give researchers in the various disciplines related to environmental science a venue for publishing new findings. In 2010, the National Council for Science and the Environment (NCSE) agreed to advise and support the association. In March 2011, JESS commenced publication.
Environmental Studies in U.S. Universities
In the United States, many high school students are able to take environmental science as a college-level course. Over 500 colleges and universities in the United States offer environmental studies as a degree. The University of California, Berkeley has awarded the most environmental studies degrees among U.S. universities, with 409 degrees awarded in 2019. The U.S. university with the highest percentage of degrees awarded in the field is Antioch University-New England, where nearly 35% of degrees awarded in 2019 were in environmental studies.
Education
Worldwide, programs in environmental studies may be offered through colleges of liberal arts, life science, social science, or agriculture. Students of environmental studies use what they learn from the sciences, social sciences, and humanities to better understand environmental problems and potentially offer solutions to them. Students look at how we interact with the natural world and come up with ideas to prevent its destruction.
In the 1960s, the word "environment" became one of the most commonly used terms in educational discourse in the United Kingdom. Educationists were becoming increasingly worried about the influence of the environment on children, as well as the school's use of the environment. The attempt to define the field of environmental studies has resulted in a discussion over its role in the curriculum. The use of the environment is one of the teaching approaches used in today's schools to carry on the legacy of the educational philosophy known as 'Progressive education' or 'New education' in the first part of the twentieth century. The primary goal of environmental studies is to assist children in understanding the processes that influence their surroundings, so that they do not remain passive, and often befuddled, observers of the environment, but rather become knowledgeable, active mediators of it. The study of the environment can be considered to offer unique chances for the development and exercise of the general cognitive skills that Piaget's work has made educators aware of. Environmental studies are increasingly being viewed as long-term preparation for higher study in fields such as sociology, archaeology, or historical geography.
| Physical sciences | Earth science basics: General | Earth science |
1195577 | https://en.wikipedia.org/wiki/Lacrimal%20gland | Lacrimal gland | The lacrimal glands are paired exocrine glands, one for each eye, found in most terrestrial vertebrates and some marine mammals, that secrete the aqueous layer of the tear film. In humans, they are situated in the upper lateral region of each orbit, in the lacrimal fossa of the orbit formed by the frontal bone. Inflammation of the lacrimal glands is called dacryoadenitis. The lacrimal gland produces tears which are secreted by the lacrimal ducts, and flow over the ocular surface, and then into canals that connect to the lacrimal sac. From that sac, the tears drain through the lacrimal duct into the nose.
Anatomists divide the gland into two sections, a palpebral lobe, or portion, and an orbital lobe or portion. The smaller palpebral lobe lies close to the eye, along the inner surface of the eyelid; if the upper eyelid is everted, the palpebral portion can be seen.
The orbital lobe of the gland contains fine interlobular ducts that connect the orbital lobe and the palpebral lobe. These unite to form three to five main secretory ducts, joining five to seven ducts in the palpebral portion, before the secreted fluid enters the surface of the eye. Secreted tears collect in the fornix conjunctiva of the upper lid and pass over the eye surface to the lacrimal puncta, small holes found at the inner corner of the eyelids. These pass the tears through the lacrimal canaliculi into the lacrimal sac, and in turn to the nasolacrimal duct, which drains them into the nose.
Lacrimal glands are also present in other mammals, including horses.
Structure
Histology
The lacrimal gland is a compound tubuloacinar gland made up of many lobules separated by connective tissue; each lobule contains many acini. The acini are composed of large serous cells that produce a watery serous secretion. The serous cells are filled with lightly stained secretory granules and are surrounded by well-developed myoepithelial cells and a sparse, vascular stroma.
Each acinus consists of a grape-like mass of lacrimal gland cells with their apices pointed to a central lumen.
The central lumina of many of the units converge to form intralobular ducts, which then unite to form interlobular ducts. The gland lacks striated ducts.
Blood supply
The lacrimal gland receives blood from the lacrimal artery, which is a branch of the ophthalmic artery. Blood from the gland drains to the superior ophthalmic vein.
Lymphatic drainage
No lymphatic vessels have been observed draining the lacrimal gland.
Nerve supply
The lacrimal gland is innervated by the lacrimal nerve, which is the smallest branch of the ophthalmic nerve, itself a branch of the trigeminal nerve (CN V). After the lacrimal nerve branches from the ophthalmic nerve it receives a communicating branch from the zygomatic nerve. This communicating branch carries postganglionic parasympathetic axons from the pterygopalatine ganglion. The lacrimal nerve passes anteriorly in the orbit and through the lacrimal gland providing parasympathetic and sympathetic innervation to it.
Parasympathetic innervation
The parasympathetic innervation to the lacrimal gland is a complex pathway which traverses numerous structures in the head. Ultimately, this two-neuron pathway, involving a preganglionic and a postganglionic parasympathetic neuron, increases the secretion of lacrimal fluid from the lacrimal gland. The preganglionic parasympathetic neurons are located in the superior salivatory nucleus. They project axons which exit the brainstem as part of the facial nerve (CN VII). Within the facial canal, at the geniculate ganglion, the axons branch from the facial nerve to form the greater petrosal nerve. This nerve exits the facial canal through the hiatus for the greater petrosal nerve in the petrous part of the temporal bone. It emerges into the middle cranial fossa and travels anteromedially to enter the foramen lacerum. Within the foramen lacerum it joins the deep petrosal nerve to form the nerve of the pterygoid canal, and then passes through this canal. It emerges in the pterygopalatine fossa and enters the pterygopalatine ganglion, where the preganglionic parasympathetic axons synapse with the postganglionic parasympathetic neurons. The postganglionic neurons then send axons which travel with the zygomatic nerve through the inferior orbital fissure. As the zygomatic nerve travels anteriorly in the orbit, it sends a communicating branch carrying the postganglionic parasympathetic axons to the lacrimal nerve. The lacrimal nerve completes this long pathway by travelling through the lacrimal gland and sending branches that provide parasympathetic innervation to increase the secretion of lacrimal fluid.
Sympathetic innervation
Sympathetic innervation of the lacrimal gland is of less physiological importance than the parasympathetic innervation; however, noradrenergic axons are found within the lacrimal gland. Their cell bodies are located in the superior cervical ganglion.
Clinical significance
In contrast to the normal moisture of the eyes or even crying, there can be persistent dryness, scratching, itchiness and burning in the eyes, which are signs of dry eye syndrome (DES) or keratoconjunctivitis sicca (KCS). With this syndrome, the lacrimal glands produce less lacrimal fluid, which mainly occurs with aging or certain medications. The Schirmer test, conducted by placing a thin strip of filter paper at the edge of the eye, can be used to determine the level of dryness of the eye. Many medications or diseases that cause dry eye syndrome can also cause hyposalivation with xerostomia. Treatment varies according to aetiology and includes avoidance of exacerbating factors, tear stimulation and supplementation, increasing tear retention, eyelid cleansing, and treatment of eye inflammation.
In addition, the following can be associated with lacrimal gland pathology:
Dacryoadenitis
Sjögren's syndrome
Homo antecessor
Homo antecessor (Latin "pioneer man") is an extinct species of archaic human recorded in the Spanish Sierra de Atapuerca, a productive archaeological site, from 1.2 to 0.8 million years ago during the Early Pleistocene. Populations of this species may have been present elsewhere in Western Europe, and were among the first to settle that region of the world, hence the name. The first fossils were found in the Gran Dolina cave in 1994, and the species was formally described in 1997 as the last common ancestor of modern humans and Neanderthals, supplanting the more conventional H. heidelbergensis in this position. H. antecessor has since been reinterpreted as an offshoot from the modern human line, although probably one branching off just before the modern human/Neanderthal split.
Despite being so ancient, the face is unexpectedly similar to that of modern humans rather than other archaic humans—namely in its overall flatness as well as the curving of the cheekbone as it merges into the upper jaw—although these elements are known only from a juvenile specimen. Brain volume could have been or more, but no intact braincase has been discovered. This is within the range of variation for modern humans. Stature estimates range from . H. antecessor may have been broad-chested and rather heavy, much like Neanderthals, although the limbs were proportionally long, a trait more frequent in tropical populations. The kneecaps are thin and have poorly developed tendon attachments. The feet indicate H. antecessor walked differently than modern humans.
H. antecessor was predominantly manufacturing simple pebble and flake stone tools out of quartz and chert, although they used a variety of materials. This industry has some similarities with the more complex Acheulean, an industry which is characteristic of contemporary African and later European sites. Groups may have been dispatching hunting parties, which mainly targeted deer in their savannah and mixed woodland environment. Many of the H. antecessor specimens were cannibalised, perhaps as a cultural practice. There is no evidence they were using fire, and they similarly only inhabited inland Iberia during warm periods, presumably retreating to the coast otherwise.
Taxonomy
Research history
The Sierra de Atapuerca in northern Spain had long been known to be abundant in fossil remains. The Gran Dolina ("great sinkhole") was first explored for fossils by archaeologist in a short field trip to the region in 1966, where he recovered a few animal fossils and stone tools. He lacked the resources and manpower to continue any further. In 1976, Spanish palaeontologist Trinidad Torres investigated the Gran Dolina for bear fossils (he recovered Ursus remains), but was advised by the Edelweiss Speleological Club to continue at the nearby Sima de los Huesos ("bone pit"). Here, in addition to a wealth of bear fossils, he also recovered archaic human fossils, which prompted a massive exploration of the Sierra de Atapuerca, at first headed by Spanish palaeontologist Emiliano Aguirre but quickly taken over by José María Bermúdez de Castro, Eudald Carbonell, and Juan Luis Arsuaga. They restarted excavation of the Gran Dolina in 1992, and found archaic human remains two years later; in 1997, they formally described these as a new species, Homo antecessor. The holotype is specimen ATD6-5, a right mandibular fragment retaining the molars and recovered with some isolated teeth. In their original description Castro and colleagues posited that the species was the first human to colonise Europe, hence the name antecessor (Latin for "explorer", "pioneer", or "early settler").
The Pleistocene sediments at the Gran Dolina are divided into eleven units, TD1 to TD11 ("trinchera dolina" or "sinkhole trench"). H. antecessor was recovered from TD6, which has consequently become the most well-researched unit of the site. In the first field seasons, in 1994–1995, the dig team excavated a small test pit (to see if the unit warranted further investigation) in the southeast section. Human fossils were discovered first by Aurora Martín Nájera; the layer they were found in is nicknamed the "Aurora Stratum" after her. A triangular section was excavated in the central section starting in the early 2000s. Human fossils were also found in the northern section. In sum, about 170 H. antecessor specimens were recovered; the best preserved are ATD6-15 and ATD6-69 (possibly belonging to the same individual), which most clearly elucidate facial anatomy. Subsequent field seasons have yielded about sixty more specimens. The discovered parts of the H. antecessor skeleton are: elements of the face, clavicle, forearm, digits, knees, and a few vertebrae and ribs.
In 2007, a mandibular fragment with some teeth, ATE9-1, provisionally assigned to H. antecessor by Carbonell, was recovered from the nearby Sima del Elefante ("elephant pit") in unit TE9 ("trinchera elefante"), belonging to a 20- to 25-year-old individual. The site additionally yielded stone flakes and evidence of butchery. In 2011, after providing a much more in-depth analysis of the Sima del Elefante material, Castro and colleagues were unsure of the species classification, opting to leave it at Homo sp. (taking no position on species designation) pending further discoveries.
The stone tool assemblage at the Gran Dolina is broadly similar to several other contemporary ones across Western Europe, which may represent the work of the same species, although this is unconfirmable because many of these sites have not produced human fossils. In 2014 fifty footprints dating to between 1.2 million and 800,000 years ago were discovered in Happisburgh, England, which could potentially be attributed to an H. antecessor group given it is the only human species identified during that time in Western Europe.
Classification
The face of H. antecessor is unexpectedly similar to that of modern humans compared to other archaic groups, so in their original description, Castro and colleagues classified it as the last common ancestor of modern humans and Neanderthals, supplanting H. heidelbergensis in this capacity. The facial anatomy came under close scrutiny in subsequent years.
In 2001 French palaeoanthropologist Jean-Jacques Hublin postulated that the Gran Dolina remains and the contemporaneous Tighennif remains from Algeria (usually classified as Homo ergaster [=? Homo erectus], originally "Atlantanthropus mauritanicus") represent the same population, because fourteen of the fifteen dental features Castro and colleagues listed for H. antecessor have also been identified in the Middle Pleistocene of North Africa; this would mean H. antecessor is a junior synonym of "Homo mauritanicus", i. e., the Gran Dolina and Tighennif humans should be classified into the latter. In 2007 Castro and colleagues studied the fossils, and found the Tighennif remains to be much larger than H. antecessor and dentally similar to other African populations. Nonetheless, they still recommended reviving mauritanicus to house all Early Pleistocene North African specimens as "H. ergaster mauritanicus".
In 2007 primatologist Esteban Sarmiento and colleagues questioned the legitimacy of H. antecessor as a separate species because much of the skull anatomy is unknown; H. heidelbergensis is known from roughly the same time and region; and because the type specimen was a child (the supposedly characteristic features could have disappeared with maturity.) Such restructuring of the face, they argued, can also be caused by regional climatic adaptation rather than speciation. In 2009 American palaeoanthropologist Richard Klein stated he was skeptical that H. antecessor was ancestral to H. heidelbergensis, interpreting H. antecessor as "an offshoot of H. ergaster [from Africa] that disappeared after a failed attempt to colonize southern Europe". Similarly, in 2012, British physical anthropologist Chris Stringer considered H. antecessor and H. heidelbergensis to be two different lineages rather than them having an ancestor/descendant relationship. In 2013, anthropologist Sarah Freidline and colleagues suggested the modern humanlike face evolved independently several times among Homo. In 2017 Castro and colleagues conceded that H. antecessor may or may not be a modern human ancestor, although if it was not then it probably split quite shortly before the modern human/Neanderthal split. In 2020 Dutch molecular palaeoanthropologist Frido Welker and colleagues concluded H. antecessor is not a modern human ancestor by analysing ancient proteins collected from the tooth ATD6-92.
Age and taphonomy
The 2003 to 2007 excavations revealed a much more intricate stratigraphy than previously thought, and TD6 was divided into three subunits spanning thirteen layers and nine sedimentary facies (bodies of rock distinctive from adjacent bodies). Human presence is recorded in subunits 1 and 2, and in facies A, D1, and F. Randomly orientated scattered bones were deposited in Facies D1 of layer TD6.2.2 (TD6 subunit 2, layer 2) and Facies F of layers TD6.2.2 and TD6.2.3, but in Facies D1 they seem to have been conspicuously clumped into the northwest area. This might indicate they were dragged into the cave via a debris flow. As for Facies F, which contains the most human remains, they may have been deposited by a low energy debris flow (consistent with floodplain behaviour) from the main entrance to the northwest, as well as a stronger debris flow from another entrance to the south. Fluvially deposited fossils (dragged in by a stream of water) were also recovered from Facies A in layers TD6.2.2, TD6.2.1 and TD6.1.2, indicated by limestone gravel within the size range of the remains. Thus, H. antecessor may not have inhabited the cave, although it was at least active nearby. Only 5.6% of the fossils bear any evidence of weathering from open air, roots, and soil, which could mean they were deposited deep into the cave relatively soon after death.
Human occupation seems to have occurred in waves corresponding to timespans featuring a warm, humid savannah habitat (although riversides likely supported woodlands). These conditions were only present during transitions from cool glacial to warm interglacial periods, after the climate warmed and before the forests could expand to dominate the landscape. The dating attempts of H. antecessor remains are:
In 1999 two ungulate teeth from TD6 were dated using uranium–thorium dating to 794 to 668 thousand years ago, and further constrained palaeomagnetically to over 780,000 years ago.
In 2008 TE9 of the Sima del Elefante was constrained to 1.2–1.1 million years ago using palaeomagnetism and cosmogenic dating.
In 2013 TD6 was dated to about 930 to 780 thousand years ago using palaeomagnetism, in addition to uranium–thorium and electron spin resonance dating (ESR) on more teeth.
In 2018 ESR dating of the H. antecessor specimen ATD6-92 resulted in an age of 949 to 624 thousand years ago, further constrained palaeomagnetically to before 772,000 years ago.
In 2022 ESR and single grain thermally transferred optically stimulated luminescence (SG TT-OSL) dated the opening of the Gran Dolina to roughly 900,000 years ago, and the sediments from TD4 to TD6 to between 890,000 and 770,000 years ago. These three units were probably deposited within a period of less than 100,000 years.
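The dating attempts above are combined by intersecting their age intervals: for instance, the 2018 ESR range for ATD6-92 (949 to 624 thousand years ago) narrowed by the palaeomagnetic requirement that the deposit predate 772,000 years ago yields the 949–772 thousand-year window. A minimal sketch of that bookkeeping (the helper function and interval representation are illustrative, not from any cited study):

```python
# Illustrative sketch: combining independent dating constraints by
# intersecting their age intervals, expressed in thousands of years ago (ka)
# as (youngest, oldest) pairs. The helper is hypothetical, not a published method.

def intersect(a, b):
    """Return the overlap of two (young, old) age intervals, or None if disjoint."""
    young, old = max(a[0], b[0]), min(a[1], b[1])
    return (young, old) if young <= old else None

esr_atd6_92 = (624, 949)              # 2018 ESR result: 949 to 624 ka
palaeomagnetic = (772, float("inf"))  # deposit predates 772 ka, i.e. at least 772 ka old

print(intersect(esr_atd6_92, palaeomagnetic))  # -> (772, 949)
```

The same interval logic underlies the other entries in the list, where uranium–thorium or ESR ranges are trimmed by palaeomagnetic reversal boundaries.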
Until the 2013 discovery of the 1.4-million-year-old infant tooth from Barranco León, Orce, Spain, these were the oldest human fossils known from Europe, although stone tools indicate human activity on the continent as early as 1.6 million years ago in Eastern Europe and Spain.
Anatomy
Skull
The facial anatomy of H. antecessor is predominantly known from the 10–11.5-year-old H. antecessor child ATD6-69, as the few other facial specimens are fragmentary. ATD6-69 is strikingly similar to modern humans (as well as East Asian Middle Pleistocene archaic humans) as opposed to West Eurasian or African Middle Pleistocene archaic humans including Neanderthals. The most notable traits are a completely flat face and a curved zygomaticoalveolar crest (the bar of bone connecting the cheek to the part of the maxilla that holds the teeth). In 2013 anthropologist Sarah Freidline and colleagues statistically determined that these features would not disappear with maturity. H. antecessor suggests the modern human face evolved and disappeared multiple times in the past, which is not unlikely as facial anatomy is strongly influenced by diet and thus the environment. The nasal bones are like those of modern humans. The mandible (lower jaw) is quite gracile unlike most other archaic humans. It exhibits several archaic features, but the shape of the mandibular notch is modern humanlike, and the alveolar part (adjacent to the teeth) is completely vertical as in modern humans. Like many Neanderthals, the medial pterygoid tubercle is large. Unlike most Neanderthals, there is no retromolar space (a large gap between the last molar and the end of the body of the mandible).
The upper incisors are shovel-shaped (the lingual, or tongue, side is distinctly concave), a feature characteristic of other Eurasian human populations, including modern. The canines bear the cingulum (a protuberance toward the base) and the essential ridge (toward the midline) like more derived species, but retain the cuspules (small bumps) near the tip and bordering incisor like more archaic species. Compared to later hominins, the lower canines of H. antecessor bear fewer perikymata. The upper premolar crowns are rather derived, being nearly symmetrical and bearing a lingual cusp (on the tongue side), and a cingulum and longitudinal grooves on the cheekward side. The upper molars feature several traits typically seen in Neanderthals. The mandibular teeth, on the other hand, are quite archaic. The P3 (the first lower premolar) has a strongly asymmetrical crown and complex tooth root system. P3 is smaller than P4 like in more derived species, but like other early Homo, M1 (the first lower molar) is smaller than M2 and the cusps of the molar crowns make a Y shape. The distribution of enamel is Neanderthal-like, with thicker layers at the periphery than at the cusps. Based on two canine teeth (ATD6-69 and ATD6-13), the thickness of the enamel and the proportion of the tooth covered by the gums vary to the same degree as for males and females of modern humans and many other apes, so this may be due to sexual dimorphism, with females having smaller teeth, relatively thicker enamel, and a smaller proportion of gum coverage.
The parietal bones (each being one side of the back part of the top of the skull) are flattened, and conjoin at a peak at the midline. This "tent-like" profile is also exhibited in more archaic African H. ergaster and Asian H. erectus. Like H. ergaster, the temporal styloid process just below the ear is fused to the base of the skull. The brow ridges are prominent. The upper margin of the squamous part of temporal bones (on the side of the skull) is convex, like in more derived species. The brain volume of ATD6-15, perhaps belonging to an 11-year-old, may have been or more based on frontal bone measurements. This is within the range of variation for modern humans.
Torso
The notably large adult clavicle specimen ATD6-50, assumed male based on absolute size, was estimated to have stood , mean of , based on the correlation among modern Indian people between clavicle length and stature. An adult radius (a forearm bone), ATD6-43, which could be male based on absolute size or female based on gracility, was estimated to have belonged to a tall individual based on the average of equations among several modern populations relating radial length to stature. Based on metatarsal (foot bone) length, a male is estimated to have stood and a female . These are all rather similar values. For comparison, Western European Neanderthal estimates average , and early European modern humans . The ankle joint is adapted for handling high stress, which may indicate a heavy, robust body plan, much like Neanderthals. Based on the relationship between human footprint length and body size, twelve Happisburgh prints that are preserved well enough to measure are consistent with individuals ranging from in stature, which may mean some of the trackmakers were children. By this logic, the three biggest footprints—equating to statures of , , and —ranged from in weight. Stature estimates for H. antecessor, H. heidelbergensis, and Neanderthals are roughly consistent with each other.
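The footprint-based estimates above rest on empirical scaling between foot length and stature. As an illustration only: a widely used ichnological heuristic takes foot length to be roughly 15% of standing height. The ratio and the sample print lengths below are assumptions for demonstration, not measurements from Happisburgh:

```python
# Hypothetical sketch: estimating stature from footprint length using the
# common "foot length is about 15% of stature" heuristic. Both the ratio and
# the input lengths are illustrative assumptions, not data from the article.
FOOT_TO_STATURE_RATIO = 0.15

def estimate_stature_cm(print_length_cm: float) -> float:
    """Estimate standing height (cm) from a footprint length (cm)."""
    return print_length_cm / FOOT_TO_STATURE_RATIO

for length_cm in (16.0, 22.0, 26.0):  # made-up print lengths spanning child to adult
    print(f"{length_cm:.0f} cm print -> ~{estimate_stature_cm(length_cm):.0f} cm stature")
```

Back-of-the-envelope scaling of this kind is why the wide spread of Happisburgh print sizes is read as evidence of a mixed-age group.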
Two atlases (the first neck vertebra) are known, which is exceptional as this bone is rarely discovered for archaic humans. They are indistinguishable from those of modern humans. For the axis (the second neck vertebra), the angle of the spinous process (jutting out from the vertebra) is about 19°, comparable with Neanderthals and modern humans and diverging from H. ergaster, which has a low angle of about 8°. The vertebral foramen (which houses the spinal cord) is on the narrow side compared to modern humans. The spine as a whole otherwise aligns with modern humans.
There is one known (and incomplete) clavicle, ATD6-50, which is thick compared to those of modern humans. This may indicate H. antecessor had long and flattish (platycleidic) clavicles like other archaic humans. This would point to a broad chest. The proximal curvature (twisting of the bone on the side nearest the neck) in front view is on par with that of Neanderthals, but the distal curvature (on the shoulder side) is much more pronounced. The sternum is narrow. The acromion (that extends over the shoulder joint) is small compared to those of modern humans. The shoulder blade is similar to all Homo with a typical human body plan, indicating H. antecessor was not as skilled a climber as non-human apes or pre-erectus species, but was capable of efficiently launching projectiles such as stones or spears.
Limbs
The incomplete radius, ATD6-43, was estimated to have measured . It is oddly long and straight for someone from so far north, reminiscent of the proportions seen in early modern humans and many people from tropical populations. This could be explained as retention of the ancestral long limbed tropical form, as opposed to Neanderthals who evolved shorter limbs. This could also indicate a high brachial index (radial to humeral length ratio). Compared to more recent human species, the cross section of the radial shaft is rather round and gracile throughout its length. Like archaic humans, the radial neck (near the elbow) is long, giving more leverage to the biceps brachii. Like modern humans and H. heidelbergensis, but unlike Neanderthals and more archaic hominins, the radial tuberosity (a bony knob jutting out just below the radial neck) is anteriorly placed (toward the front side when the arm is facing out).
Like those of other archaic humans, the femur features a developed trochanteric fossa and posterior crest. These traits are highly variable among modern human populations. The two known kneecaps, ATD6-22 and ATD6-56, are subrectangular in shape as opposed to the more common subtriangular, although rather narrow like those of modern humans. They are quite small and thin, falling at the lower end for modern human females. The apex of the kneecap (the area that does not join to another bone) is not well developed, leaving little attachment for the patellar tendon. The medial (toward the midline) facet and lateral (toward the sides) facet for the knee joint are roughly the same size as each other in ATD6-56 and the medial is larger in ATD6-22, whereas the lateral is commonly larger in modern humans. The lateral facet encroaches onto a straight flat area as opposed to being limited to a defined vastus notch, an infrequent condition among any human species.
The phalanges and metatarsals of the foot are comparable to those of later humans, but the big toe bone is rather robust, which could be related to how H. antecessor pushed off the ground. The ankle bone (talus) is exceptionally long and high as well as the facet where it connects with the leg (the trochlea), which may be related to how H. antecessor walked. The long trochlea caused a short neck of the talus, which bridges the head of the talus connecting to the toes, and the body of the talus connecting to the leg. This somewhat converges with the condition exhibited in Neanderthals, which is generally explained as a response to a heavy and robust body, to alleviate the consequently higher stress to the articular cartilage in the ankle joint. This would also have permitted greater flexion.
Growth rate
In 2010 Castro and colleagues estimated that ATD6-112, represented by a permanent upper and lower first molar, died between 5.3 and 6.6 years of age based on the tooth formation rates in chimpanzees (lower estimate) and modern humans (upper). The molars are hardly worn at all, which means the individual died soon after the tooth erupted, and that first molar eruption occurred at roughly this age. The age is within the range of variation of modern humans, and this developmental landmark can debatably be correlated with life history. If the relation is true, H. antecessor had a prolonged childhood, a characteristic of modern humans in which significant cognitive development takes place.
Pathology
The partial face ATD6-69 has an ectopic M3 (upper left third molar), where it erupted improperly, and this caused the impaction of M2, where it was blocked from erupting at all. Although impaction of M3 is rather common in modern humans, as high as 50% in some populations, impaction of M2 is rare, at only 0.08 to 2.3%. Impaction can lead to secondary lesions, such as dental cavities, root resorption, keratocysts and dentigerous cysts.
The mandible ATE9-1 exhibits severe dental attrition and abrasion of the tooth crowns and bone resorption at the root, so much so that the root canals (the sensitive interior) of the canines are exposed. The trauma is consistent with gum disease due to overloading the teeth, such as by using the mouth as a third hand to carry around items. A similar condition was also reported for the later Sima de los Huesos remains also at the Sierra de Atapuerca site.
The left knee bone ATD6-56 has a height x breadth osteophyte (bone spur) on the inferior (lower) margin. Osteophytes normally form as a response to stress due to osteoarthritis, which can result from old age or improper loading of the joint as a consequence of bone misalignment or ligament laxity. In the case of ATD6-56, improper loading was likely the causal factor. Frequent squatting and kneeling can lead to this condition, but if the right knee bone ATD6-22 (that has no such trauma) belongs to the same individual, then this is unlikely to be the reason. If so, the lesion was caused by a local trauma, such as strain on the soft tissue around the joint due to high intensity activity, or a fracture of the left femur and/or tibia (that is unconfirmable since neither bone is associated with this individual).
The right fourth metatarsal ATD6-124 has a length x width lesion on the medial (toward the midline of the bone) side consistent with a march fracture. This condition is most often encountered by soldiers, long distance runners, and potentially flatfooted people whose foot bones failed under repeated, high intensity activity. Later Neanderthals would evolve a much more robust lower skeleton possibly to withstand such taxing movement across uneven terrain. Although only one other example of the condition has been identified (at Sima de los Huesos) among archaic humans, march fractures were probably a common injury for them given that the healed fracture leaves no visible mark, as well as their presumed high intensity lifestyle.
Culture
Technology
H. antecessor was producing simple stone tools at Gran Dolina. This industry is found elsewhere in Early Pleistocene Spain—notably in Barranc de la Boella and the nearby Galería—distinguished by the preparation and sharpening of cores before flaking, the presence of (crude) bifaces, and some degree of standardisation of tool types. This bears some resemblance to the much more complex Acheulean industry, characteristic of African and later European sites. The earliest evidence of typical Acheulean toolsets comes from Africa 1.75 million years ago, but the typical Acheulean toolset appears in Western Europe nearly a million years later. It is debated whether these early European sites evolved into the European Acheulean industry independently from African counterparts, or whether the Acheulean was brought up from Africa and diffused across Europe. In 2020 French anthropologist Marie-Hélène Moncel argued the appearance of typical Acheulean bifaces 700,000 years ago in Europe was too sudden to be the result of completely independent evolution from local technologies, so there must have been influence from Africa. Wearing on the TD6 stone tools is consistent with repeated abrasion against flesh, so they were probably used as butchering implements.
TD6.3
In the lower part of TD6.3 (TD6 subunit 3), 84 stone tools were recovered, predominantly small, unmodified quartzite pebbles with percussive damage—probably inflicted from pounding items such as bone—as opposed to manufacturing more specialised implements.
Although 41% of the section's assemblage consists of flakes, they are rather crude and large—averaging —either resulting from rudimentary knapping (stoneworking) skills or difficulty working such poor quality materials. They made use of the unipolar longitudinal method, flaking off only one side of a core, probably to compensate for the lack of preplanning, opting to knap irregularly shaped and thus poorer quality pebbles.
TD6.2
Most of the stone tools resided in the lower (older) half of TD6.2, which yielded 831 stone tools. The knappers made use of a much more diverse array of materials (although most commonly chert), which indicates they were moving farther out in search of better raw materials. The Sierra de Atapuerca features an abundance and diversity of mineral outcroppings suitable for stone tool manufacturing; in addition to chert and quartz, these include quartzite, sandstone, and limestone, all of which could be collected within only of the Gran Dolina.
They produced far fewer pebbles and spent more time knapping off flakes, but they were not particularly economic with their materials, and about half of the cores could have produced more flakes. They additionally modified irregular blanks into more workable shapes before flaking off pieces. This preplanning allowed them to use other techniques: the centripetal method (flaking off only the edges of the core) and the bipolar method (laying the core on an anvil and slamming it with a hammerstone). There are 62 flakes measuring below in height, and 28 above . There are three conspicuously higher quality flakes, thinner and longer than the others, which may have been produced by the same person. There are also retouched tools: notches, spines, denticulates, points, scrapers, and a single chopper. These small retouched tools are rare in the European Early Pleistocene.
TD6.1
TD6.1 yielded 124 stone tools, but they are badly preserved as the area was also used by hyenas as a latrine, the urine corroding the area. The layer lacks pebbles and cores, and 44 of the stone tools are indeterminate. Flakes are much smaller with an average of , with ten measuring below , and only three exceeding .
They seem to have been using the same methods as the people who manufactured the TD6.2 tools. They were only retouching larger flakes, the fourteen such tools averaging : one marginally retouched flake, one notch, three spines, seven denticulate sidescrapers, and one denticulate point.
Fire and palaeoclimate
Only a few charcoal particles have been collected from TD6, which probably originated from a fire well outside the cave. There is no evidence of any fire use or burnt bones (cooking) in the occupation sequences of the Gran Dolina. In other parts of the world, reliable evidence of fire usage does not surface in the archaeological record until roughly 400,000 years ago. In 2016, small mammal bones burned in fires exceeding were identified from 780- to 980-thousand-year-old deposits at in southern Spain, which potentially could have come from a human source as such a high temperature is usually (though not always) recorded in campfires as opposed to natural bushfires.
Instead of using fire, these early Europeans probably physiologically withstood the cold, such as by eating a high-protein diet to support a heightened metabolism. Despite glacial cycles, the climate was probably similar to or a few degrees warmer than today's, with the coldest average temperatures in December and January and the hottest in July and August. Freezing temperatures could have been reached from November to March, but the presence of olive and oak suggests subfreezing weather was infrequent. TE9 similarly indicates a generally warm climate. The Happisburgh footprints were laid down in estuarine mudflats with open forests dominated by pine, spruce, birch, and, in wetter areas, alder, with patches of heath and grassland; the vegetation is consistent with the cooler beginning or end of an interglacial.
H. antecessor probably migrated from the Mediterranean shore into inland Iberia when colder glacial periods were transitioning to warmer interglacials, and warm grasslands dominated, vacating the region at any other time. They may have followed water bodies while migrating, in the case of Sierra de Atapuerca, most likely the Ebro River.
Food
The fossils of sixteen animal species were recovered randomly mixed with the H. antecessor material at the Gran Dolina, including the extinct bush-antlered deer, the extinct species of fallow deer Dama vallonnetensis, the extinct subspecies of red deer Cervus elaphus acoronatus, the extinct bison Bison voigtstedtensis, the extinct rhino Stephanorhinus etruscus, the extinct horse Equus stenonis, the extinct fox Vulpes praeglacialis, the extinct bear Ursus dolinensis, the extinct wolf Canis mosbachensis, the spotted hyena, the wild boar, and undetermined species of mammoth, monkey, and lynx. Some specimens of the first eight species and the monkey exhibit cut marks consistent with butchery, with about 13% of all Gran Dolina remains bearing some evidence of human modification. Deer are the most commonly butchered animal, with 106 specimens. The inhabitants seem to have carried carcasses back whole when feasible, and only the limbs and skulls of larger quarries. This indicates the Gran Dolina H. antecessor were dispatching hunting parties who killed prey and hauled it back to share with the entire group, rather than each individual foraging entirely for themselves, which evinces social cooperation and a division of labour. Fewer than 5% of the remains retain carnivore damage; in two instances tooth marks from an unidentified animal overlap cut marks, which could indicate that animals sometimes scavenged H. antecessor leftovers.
The Sima del Elefante site records the fallow deer, the bush-antlered deer, rhinos, E. stenonis, C. mosbachensis, U. dolinensis, the extinct big cat Panthera gombaszoegensis, the extinct lynx Lynx issiodorensis, the extinct fox Vulpes alopecoides, several rats, shrews, and rabbits, and undetermined species of macaques, boar, bison, and beaver. The large mammals are most commonly represented by long bones, a few of which are cracked open, presumably to access the bone marrow. Some others bear evidence of percussion and defleshing. They were also butchering Hermann's tortoise, an easily obtainable source of meat considering how slowly tortoises move.
The cool and humid montane environment encouraged the growth of olive, mastic, beech, hazelnut, and chestnut trees, which H. antecessor may have used as food sources, although these become more common in TD7 and TD8 as the interglacial progresses and the environment becomes wetter. In the H. antecessor unit TD6, pollen predominantly derives from juniper and oak. Trees probably grew along rivers and streams, while the rest of the hills and ridges were dominated by grasses. The TD6 individuals also seem to have been consuming hackberries, which in historical times have been used more for their medicinal properties than for satiating hunger, because the berries provide very little flesh.
There is no evidence that H. antecessor could wield fire and cook; similarly, the wear on the molars indicates more frequent consumption of grittier and more mechanically challenging foods than in later European species, such as raw rather than cooked meat and underground storage organs.
Cannibalism
Eighty young adult and child H. antecessor specimens from the Gran Dolina exhibit cut marks and fracturing indicative of cannibalism, and H. antecessor is the second-most common species at the site bearing evidence of butchery. Human bodies were efficiently utilised, which may be why most bones are smashed or otherwise badly damaged. There are no complete skulls; elements from the face and back of the skull are usually percussed, and the muscle attachments on the face and the base of the skull were cut off. The intense modification of the face was probably to access the brain. The crown of the head was probably struck, resulting in the impact scars on the teeth at the gum line. Several skull fragments exhibit peeling.
The ribs also bear cut marks along the muscle attachments consistent with defleshing, and ATD6-39 has cuts along the length of the rib, which may be related to disembowelment. The nape muscles were sliced off, and the head and neck were probably detached from the body. The vertebrae were often cut, peeled, and percussed. The muscles on all of the clavicles were sawed off to disconnect the shoulder. One radius, ATD6-43, was cut up and peeled. The femur was shattered, probably to extract the bone marrow. The hands and feet variably exhibit percussion, cutting, or peeling, likely a result of dismemberment.
In sum, mainly the meatier areas were prepared, and the rest discarded. This suggests they were butchering humans for nutritional purposes, but the face generally exhibits significantly more cutmarks than the faces of animals. When this is seen in prehistoric modern human specimens, it is typically interpreted as evidence of exocannibalism, a form of ritual cannibalism where one eats someone from beyond their social group, such as an enemy from a neighbouring tribe. But, when overviewing the evidence of H. antecessor cannibalism in 1999, Spanish palaeontologist Yolanda Fernandez-Jalvo and colleagues instead ascribed the relative abundance of facial cut marks in the H. antecessor sample to the strongly contrasting structure of the muscle attachments between humans and typical animal prey items (that is, defleshing the human face simply required more cuts, or the butcherers were less familiar with defleshing humans).
The assemblage nonetheless lacked older individuals, being composed entirely of young adults and juveniles. In 2010 Carbonell hypothesised that they were practising exocannibalism and hunting down neighbouring tribesmen. While not rejecting this hypothesis, Spanish palaeoanthropologist Jesús Rodríguez and colleagues suggested as an alternative explanation that the eaten may have been fellow tribesmen who had died for unrelated reasons (such as natural causes, war, or accidents) and were eaten in funerary rites, or possibly simply to avoid wasting food. They consider this explanation a better fit for the demographic distribution of the eaten, given the high youth mortality rates in hunter-gatherer groups, while also granting that the high number of young individuals among the eaten may reflect a "low-risk hunting strategy" (juveniles of foreign groups were easier to catch and kill) or a "deliberate cultural strategy aimed to defend the territory and eliminate competitors" by targeting their offspring.
Glutamate flavoring
Glutamate flavoring is the generic name for flavor-enhancing compounds based on glutamic acid and its salts (glutamates). These compounds provide an umami (savory) taste to food.
Glutamic acid and glutamates are natural constituents of many fermented or aged foods, including soy sauce, fermented bean paste, and cheese. They can also be found in hydrolyzed proteins such as yeast extract. The sodium salt of glutamic acid, monosodium glutamate (MSG), is manufactured on a large scale and widely used in the food industry.
Glutamic acid versus glutamates
When glutamic acid or any of its salts is dissolved in water, it forms a solution of separate negative ions, called glutamates, and the corresponding positive counterions. The result is actually a chemical equilibrium among several ionized forms, including zwitterions, that depends on the pH (acidity) of the solution. Within the common pH range of foods, the prevailing ion can be described as −OOC-CH(NH3+)-(CH2)2-COO−, which has a net electric charge of −1.
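The pH dependence described above can be sketched numerically. The short script below is illustrative only: it assumes approximate literature pKa values for glutamic acid (about 2.19, 4.25, and 9.67, which vary slightly by source), and the `speciation` helper is a hypothetical name, not an established API. It computes the fraction of each protonation state from the successive dissociation equilibria.

```python
# Approximate pKa values for glutamic acid (literature figures; exact
# values vary by source -- treated here as assumptions).
PKA = [2.19, 4.25, 9.67]  # alpha-COOH, side-chain COOH, alpha-NH3+

def speciation(ph):
    """Return the fractions of the four protonation states at a given pH.

    Index 0: fully protonated (net charge +1)
    Index 1: zwitterion (net charge 0)
    Index 2: glutamate ion (net charge -1, the umami-active form)
    Index 3: fully deprotonated (net charge -2)
    """
    h = 10.0 ** (-ph)
    ka = [10.0 ** (-p) for p in PKA]
    # Unnormalised weights from the successive dissociation equilibria
    w = [1.0,
         ka[0] / h,
         ka[0] * ka[1] / h ** 2,
         ka[0] * ka[1] * ka[2] / h ** 3]
    total = sum(w)
    return [x / total for x in w]

# Within the common pH range of foods (around pH 6), the -1 ion dominates:
fractions = speciation(6.0)
print(f"glutamate (-1) fraction at pH 6: {fractions[2]:.1%}")  # ~98%
```

With these pKa assumptions, the singly negative glutamate ion accounts for nearly all of the dissolved species at typical food pH, consistent with the description above.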
Only the glutamate ion is responsible for the umami flavor, so the effect does not depend significantly on the starting compound. However, some crystalline salts such as monosodium glutamate dissolve much better and faster than crystalline glutamic acid, which has proven an important factor in the adoption of these substances as flavor enhancers.
Discovery
Although they occur naturally in many foods, glutamic acid and other amino acid flavor contributions were not scientifically identified until early in the twentieth century. In 1866, the German chemist Karl Heinrich Ritthausen discovered and identified the compound. In 1907, Japanese researcher Kikunae Ikeda of the Tokyo Imperial University identified brown crystals left behind after the evaporation of a large amount of kombu broth as glutamic acid. These crystals, when tasted, reproduced the ineffable but undeniable flavor detected in many foods, especially seaweed. Professor Ikeda coined the term umami for this flavor. He then patented a method of mass-producing the crystalline salt of glutamic acid known as monosodium glutamate.
Isomers
Further research into the compound has found that only the L-glutamate enantiomer has flavor-enhancing properties. Manufactured monosodium glutamate consists of over 99.6% of the naturally predominant L-glutamate form, a higher proportion of L-glutamate than is found in the free glutamate ions of fermented naturally occurring foods. Fermented products such as soy sauce, steak sauce, and Worcestershire sauce have levels of glutamate similar to those in foods with added monosodium glutamate; however, 5% or more of their glutamate may be the D-enantiomer. Nonfermented naturally occurring foods have lower relative levels of D-glutamate than fermented products do.
Taste perception
Glutamic acid stimulates specific receptors located in taste buds such as the amino acid receptor T1R1/T1R3 or other glutamate receptors like the metabotropic receptors (mGluR4 and mGluR1), which induce the flavor known as umami. This is classified as one of the five basic tastes (the word "umami" is a loanword from Japanese; it is also referred to as "savory" or "meaty").
The flavoring effect of glutamate comes from its free form, in which it is not bound to other amino acids in protein. Nonetheless, glutamate by itself does not elicit an intense umami taste. The mixing of glutamate with nucleotides inosine-5'-monophosphate (IMP) or guanosine-5'-monophosphate (GMP) enhances the taste of umami; T1R1 and T1R3 respond primarily to mixtures of glutamate and nucleotides. While research has shown that this synergism occurs in some animal species with other amino acids, studies of human taste receptors show that the same reaction only occurs between glutamate and the selected nucleotides. Moreover, sodium in monosodium glutamate may activate glutamate to produce a stronger umami taste.
Two hypotheses for the explanation of umami taste transduction have been introduced: the first posits that the umami taste is transduced by an N-methyl-D-aspartate (NMDA) type glutamate ion channel receptor; the second posits that the taste is transduced by a metabotropic type glutamate receptor (taste-mGluR4). The metabotropic glutamate receptors such as mGluR4 and mGluR1 can be easily activated at glutamate concentration levels found in food.
Perceptual independence from salty and sweet taste
Because many umami taste compounds are sodium salts, the perceptual differentiation of salty and umami tastes has been difficult in taste tests; studies have found that as much as 27% of certain populations may be umami "hypotasters".
Furthermore, glutamate (glutamic acid) alone, without table salt ions (Na+), elicits a sour taste; in psychophysical tests, sodium or potassium salt cations seem to be required to produce a perceptible umami taste.
Sweet and umami tastes both utilize the taste receptor subunit T1R3, with salt taste blockers reducing discrimination between monosodium glutamate and sucrose in rodents.
If umami does not have perceptual independence, it could be classified with other tastes such as fat, carbohydrate, metallic, and calcium, which can be perceived at high concentrations but may not offer a prominent taste experience.
Sources
Natural occurrence
Glutamate is ubiquitous in biological life. It is found naturally in all living cells, primarily in the bound form as a constituent of proteins. Only a fraction of the glutamate in foods is in its "free" form, and only free glutamate produces an umami flavor in foods. The savory flavor of tomatoes, fermented soy products, yeast extracts, certain sharp cheeses, and fermented or hydrolyzed protein products (such as soy sauce and fermented bean paste) is partially due to the presence of free glutamate ions.
Asia
Japanese cuisine originally used broth made from kombu (kelp) to produce the umami taste in soups.
Rome
In the Roman Empire, glutamic acid was found in garum, a sauce made by fermenting fish in saltwater. The flavor-enhancing properties of glutamic acid allowed Romans to reduce their use of expensive salt.
Concentration in foods
The following table illustrates the glutamate content of some selected common foods. Free glutamate is the form directly tasted and absorbed whereas glutamate bound in protein is not available until further breakdown by digestion or cooking. In general, vegetables contain more free glutamate but less protein-bound glutamate.
Hydrolyzed protein
Hydrolyzed proteins, or protein hydrolysates, are acid- or enzymatically treated proteins from certain foods. One example is yeast extract. Hydrolyzed protein contains free amino acids, such as glutamate, at levels of 5% to 20%. Hydrolyzed protein is used in the same manner as monosodium glutamate in many foods, such as canned vegetables, soups, and processed meats.
Pure salts
Manufacturers, such as Ajinomoto, use selected strains of Corynebacterium glutamicum bacteria in a nutrient-rich medium. The bacteria are selected for their ability to excrete glutamic acid, which is then separated from the nutrient medium and processed into its sodium salt, monosodium glutamate.
Safety as a flavor enhancer
Medical studies
Monosodium glutamate (MSG) is regarded as safe for consumption. An association between MSG consumption and a constellation of symptoms has not been demonstrated under rigorously controlled conditions. Techniques used to adequately control for experimental bias include a placebo-controlled, double-blinded experimental design and delivery of the compound in capsules to mask the strong and unique aftertaste of glutamates. Although there are reports of MSG sensitivity among a subset of the population, this has not been demonstrated in placebo-controlled trials.
Social perceptions
Origin
The controversy surrounding the safety of MSG started with the publication of Robert Ho Man Kwok's correspondence letter, titled "Chinese-Restaurant Syndrome", in the New England Journal of Medicine on 4 April 1968. In his letter, Kwok suggested several possible causes for symptoms that he experienced before nominating MSG. The letter was initially met with satirical responses from within the medical community, often using race as a prop for humorous effect. As the media took up the conversation, it was recontextualized as legitimate, while the race-based motivations of the humor went unexamined, replicating historical racial prejudices.
Despite the resulting public backlash, the Food and Drug Administration (FDA) did not remove MSG from its Generally Recognized as Safe list. In 1970, the National Research Council, under the National Academy of Sciences, investigated MSG on behalf of the FDA and concluded that it was safe for consumption.
Reactions
The controversy about MSG is tied to racial stereotypes of East Asian societies. East Asian cuisine specifically was targeted, whereas the widespread use of MSG in Western processed food does not generate the same stigma. Such perceptions, including the rhetoric of the so-called Chinese restaurant syndrome, have been attributed to xenophobic or racist biases.
Food historian Ian Mosby wrote that fear of MSG in Chinese food is part of the United States' long history of viewing the "exotic" cuisine of Asia as dangerous and dirty. In 2016, Anthony Bourdain stated in Parts Unknown that "I think MSG is good stuff ... You know what causes Chinese restaurant syndrome? Racism."
In 2020, Ajinomoto, the leading manufacturer of MSG, and others launched the #RedefineCRS campaign, in reference to the term "Chinese restaurant syndrome", to combat the misconceptions about MSG, saying they intended to highlight the xenophobic prejudice against East Asian cuisine and the scientific evidence. Following the campaign, Merriam-Webster announced it would review the term.
Regulations
Regulation timeline
In 1959, the U.S. Food and Drug Administration (FDA) classified monosodium glutamate as generally recognized as safe (GRAS). This action stemmed from the 1958 Food Additives Amendment to the Federal Food, Drug, and Cosmetic Act that required premarket approval for new food additives and led the FDA to promulgate regulations listing substances, such as monosodium glutamate, which have a history of safe use or are otherwise GRAS.
Since 1970, the FDA has sponsored extensive reviews on the safety of monosodium glutamate, other glutamates, and hydrolyzed proteins, as part of an ongoing review of safety data on GRAS substances used in processed foods. One such review was by the Federation of American Societies for Experimental Biology (FASEB) Select Committee on GRAS Substances. In 1980, the committee concluded that monosodium glutamate was safe at current levels of use but recommended additional evaluation of its safety at significantly higher levels of consumption. Subsequent reports examined this question.
In 1986, FDA's Advisory Committee on Hypersensitivity to Food Constituents concluded that monosodium glutamate poses no threat to the general public but that reactions of brief duration might occur in some people. Other reports have given the following findings:
The 1987 Joint Expert Committee on Food Additives of the United Nations Food and Agriculture Organization and the World Health Organization placed monosodium glutamate in the safest category of food ingredients.
A 1991 report by the European Community's (EC) Scientific Committee for Foods reaffirmed monosodium glutamate's safety and classified its "acceptable daily intake" as "not specified", the most favorable designation for a food ingredient. In addition, the EC Committee said, "Infants, including prematures, have been shown to metabolize glutamate as efficiently as adults and therefore do not display any special susceptibility to elevated oral intakes of glutamate." Legislation in effect since June 2013 classifies glutamic acid and glutamates as salt substitutes, seasonings, and condiments with a maximum level of consumption of 10g/kg expressed as glutamic acid.
European Union
Following the compulsory EU food-labeling law, the use of glutamic acid and its salts has to be declared, and the name or E number of the salt has to be listed. Glutamic acid and its salts as food additives have the following E numbers: glutamic acid: E620, monosodium glutamate: E621, monopotassium glutamate: E622, calcium diglutamate: E623, monoammonium glutamate: E624, and magnesium diglutamate: E625. In the European Union, these substances are regarded as "flavor enhancers" and are not allowed to be added to milk, emulsified fat and oil, pasta, cocoa/chocolate products, and fruit juice. The EU has not yet published an official NOAEL (no observable adverse effect level) for glutamate, but a 2006 consensus statement of a group of German experts, drawing on animal studies, was that a daily intake of 6 grams of glutamic acid per kilogram of body weight (6 g/kg/day) is safe. From human studies, the experts noted that doses as high as 147 g/day produced no adverse effects in males when given for 30 days; for a 70 kg male, this amount corresponds to 2.1 g per kg of body weight.
United States
In 1959, the Food and Drug Administration classified MSG as a "generally recognized as safe" (GRAS) food ingredient under the Federal Food, Drug, and Cosmetic Act. In 1986, FDA's Advisory Committee on Hypersensitivity to Food Constituents also found that MSG was generally safe, but that short-term reactions may occur in some people. To further investigate this matter, in 1992 the FDA contracted the Federation of American Societies for Experimental Biology (FASEB) to produce a detailed report, which was published in 1995. The FASEB report reaffirmed the safety of MSG when it is consumed at usual levels by the general population, and found no evidence of any connection between MSG and any serious long-term reactions.
Under 2003 U.S. Food and Drug Administration regulations, when monosodium glutamate is added to a food, it must be identified as "monosodium glutamate" in the label's ingredient list. Because glutamate is commonly found in food, primarily from protein sources, the FDA does not require foods and ingredients that contain glutamate as an inherent component to list it on the label. Examples include tomatoes, cheeses, meats, hydrolyzed protein products such as soy sauce, and autolyzed yeast extracts. These ingredients are to be declared on the label by their common or usual names. The term 'natural flavor' is now used by the food industry when using glutamic acid. Because of the lack of regulation, it is impossible to determine what percentage of 'natural flavor' is actually glutamic acid.
The food additives disodium inosinate and disodium guanylate are usually used in synergy with monosodium glutamate-containing ingredients, and provide a likely indicator of the addition of glutamate to a product.
The National Academy of Sciences Committee on Dietary Reference Intakes has not set a NOAEL or LOAEL for glutamate.
Australia and New Zealand
Standard 1.2.4 of the Australia New Zealand Food Standards Code requires the presence of monosodium glutamate as a food additive to be labeled. The label must bear the food additive class name (such as "flavor enhancer"), followed by either the name of the food additive (such as "MSG") or its International Numbering System (INS) number (e.g., "621").
Canada
The Canadian Food Inspection Agency considers claims of "no MSG" or "MSG free" to be misleading and deceptive when other sources of free glutamates are present.
Ingredients
Forms of glutamic acid that can be added to food include:
Glutamic acid (E620)
Monosodium glutamate (E621)
Monopotassium glutamate (E622)
Calcium glutamate (E623)
Monoammonium glutamate (E624)
Magnesium glutamate (E625)
The following are also rich sources of glutamic acid, and may be added for umami flavor:
Hydrolyzed vegetable protein
Autolyzed yeast, yeast extract, yeast food, and nutritional yeast
Cheese products, e.g. parmesan (1200 mg / 100 g)
Various savory fermented seasonings, including soy sauce and Worcestershire sauce
Sea snail
Sea snails are slow-moving marine gastropod molluscs, usually with visible external shells, such as whelks or abalone. They share the taxonomic class Gastropoda with slugs, which are distinguished from snails primarily by the absence of a visible shell.
Definition
Determining whether some gastropods should be called sea snails is not always easy. Some species that live in brackish water (such as certain neritids) can be listed as either freshwater snails or marine snails, and some species that live at or just above the high tide level (for example, species in the genus Truncatella) are sometimes considered to be sea snails and sometimes listed as land snails.
Anatomy
Sea snails are a very large and diverse group of animals. Most snails that live in salt water respire using a gill or gills; a few species, though, have a lung, are intertidal, and are active only at low tide when they can move around in the air. These air-breathing species include false limpets in the family Siphonariidae and another group of false limpets in the family Trimusculidae.
Many, but not all, sea snails have an operculum.
Shell
The shells of most species of sea snails are spirally coiled. Some, though, have conical shells, and these are often referred to by the common name of limpets. In one unusual family (Juliidae), the shell of the snail has become two hinged plates closely resembling those of a bivalve; this family is sometimes called the "bivalved gastropods".
Their shells are found in a variety of shapes and sizes, but most are very small. Living species of sea snails range in size from Syrinx aruanus, the largest living shelled gastropod species, down to minute species whose shells are less than 1 mm long at adult size. Because the shells of sea snails are in many cases strong and durable, the group is well represented in the fossil record.
The shells of snails are complex and grow at different speeds. Growth rate is affected by variables such as water temperature, water depth, available food, and oxygen isotope levels. The composition of aragonite in a mollusk's growth layers can be used to predict the size its shell will reach.
Taxonomy
2005 taxonomy
The following cladogram is an overview of the main clades of living gastropods based on the taxonomy of Bouchet & Rocroi (2005), with taxa that contain saltwater or brackish water species marked in boldface (some of the highlighted taxa consist entirely of marine species, but some of them also contain freshwater or land species.)
Clade Patellogastropoda
Clade Vetigastropoda
Clade Cocculiniformia
Clade Neritimorpha
Clade Cycloneritimorpha
Clade Caenogastropoda
Informal group Architaenioglossa
Clade Sorbeoconcha
Clade Hypsogastropoda
Clade Littorinimorpha
Informal group Ptenoglossa
Clade Neogastropoda
Clade Heterobranchia
Informal group Lower Heterobranchia
Informal group Opisthobranchia
Clade Cephalaspidea
Clade Thecosomata
Clade Gymnosomata
Clade Aplysiomorpha
Group Acochlidiacea
Clade Sacoglossa
Group Cylindrobullida
Clade Umbraculida
Clade Nudipleura
Clade Pleurobranchomorpha
Clade Nudibranchia
Clade Euctenidiacea
Clade Dexiarchia
Clade Pseudoeuctenidiacea
Clade Cladobranchia
Clade Euarminida
Clade Dendronotida
Clade Aeolidida
Informal group Pulmonata
Informal group Basommatophora
Clade Eupulmonata
Clade Systellommatophora
Clade Stylommatophora
Clade Elasmognatha
Clade Orthurethra
Informal group Sigmurethra
Uses
By humans
A number of species of sea snails are harvested in aquaculture and used by humans for food, including abalone, conch, limpets, whelks (such as the North American Busycon species and the North Atlantic Buccinum undatum) and periwinkles including Littorina littorea.
The shells of sea snails are often found washed up on beaches. Because many are attractive and durable, they have been used to make necklaces and other jewelry since prehistoric times.
The shells of a few species of large sea snails within the Vetigastropoda have a thick layer of nacre and have been used as a source of mother of pearl. The button industry historically relied on these species for many years.
By non-human animals
The shells of sea snails are used for protection by many kinds of hermit crabs. A hermit crab carries the shell by grasping the central columella of the shell using claspers on the tip of its abdomen.
Daytime
Daytime or day as observed on Earth is the period of the day during which a given location experiences natural illumination from direct sunlight. Daytime occurs when the Sun appears above the local horizon, that is, anywhere on the globe's hemisphere facing the Sun. In direct sunlight the movement of the Sun can be recorded and observed using a sundial that casts a shadow that slowly moves during the day. Other planets and natural satellites that rotate relative to a luminous primary body, such as a local star, also experience daytime, but this article primarily discusses daytime on Earth.
Very broadly, most humans tend to be awake during some of the daytime period at their location, and asleep during some of the night period.
Characteristics
Approximately half of Earth is illuminated at any time by the Sun. The area subjected to direct illumination is almost exactly half the planet; but because of atmospheric and other effects that extend the reach of indirect illumination, the area of the planet covered by either direct or indirect illumination amounts to slightly more than half the surface.
The hemisphere of Earth experiencing daytime at any given instant changes continuously as the planet rotates on its own axis. The axis of the Earth's rotation is not perpendicular to the plane of its orbit around the Sun (which is parallel with the direction of sunlight), and so the length of the daytime period varies from one point on the planet to another. Additionally, since the axis of rotation is relatively fixed in comparison to the stars, it moves with respect to the Sun as the planet orbits the star. This creates seasonal variations in the length of the daytime period at most points on the planet's surface.
The period of daytime from the standpoint of a surface observer is roughly defined as the period between sunrise, when the Earth's rotation towards the east first causes the Sun's disc to appear above the horizon, and sunset, when the continuing rotation of the Earth causes the Sun's disc to disappear below the horizon to the west. Because the Sun is a luminous disc as seen from the Earth, rather than a point source of light, sunrise and sunset are not instantaneous, and the exact definition of both can vary with context. Additionally, the Earth's atmosphere further bends and diffuses light from the Sun and lengthens the period of sunrise and sunset. For a certain period after sunset and before sunrise, indirect light from the Sun lightens the sky on Earth; this period is often referred to as twilight. Certain groups, such as astronomers, do not consider daytime to be truly ended until the Sun's disc is actually well below the Earth's horizon, because of this indirect illumination.
Daytime length variations with latitude and seasons
Daytime length or daytime duration is the time elapsed between beginning and end of the daytime period.
Given that Earth's own axis of rotation is tilted 23.44° from the line perpendicular to its orbital plane (the plane of the ecliptic), the length of daytime varies with the seasons on the planet's surface, depending on the observer's latitude. Areas tilted toward the Sun are experiencing summer. Their tilt toward the Sun leads to more than half of the day seeing daylight and warmer temperatures, due to the higher directness of solar rays, the longer period of daytime itself, and less absorption of sunlight in the atmosphere. While increased daylight can have some effect on the higher temperatures in the summer, most of the temperature rise results from the directness of the Sun, not the increased daylight. The high angles (around the zenith) of the Sun cause the tropics to be warm, while low angles (barely above the horizon) cause the polar regions to be cold. The slight effect of daylight hours on average seasonal temperature can be seen with the poles and tropical regions. The poles are still cold during their respective summers, despite seeing 24 hours of daylight for six months, while the Equator remains warm throughout the year, with only 12 hours of daylight per day.
Although the daytime length at the Equator remains 12 hours in all seasons, the duration at all other latitudes varies with the seasons. During the winter, daytime lasts less than 12 hours; during the summer, it lasts longer than 12 hours. Northern winter coincides with southern summer, and northern summer with southern winter.
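The dependence of daytime length on latitude and season can be sketched with the standard sunrise equation. The Python code below is an illustrative sketch, not part of the original article: it uses a simple sine approximation for the solar declination and ignores atmospheric refraction and the finite solar disc, so values near the polar circles are rough.

```python
import math

def daylight_hours(latitude_deg, day_of_year):
    """Approximate daytime length in hours for a given latitude and day.

    Uses the sunrise equation with a crude sine fit for the solar
    declination; refraction and the Sun's disc size are ignored.
    """
    # Solar declination oscillates between +/-23.44 deg over the year
    decl = math.radians(-23.44) * math.cos(2 * math.pi * (day_of_year + 10) / 365.0)
    lat = math.radians(latitude_deg)
    # Cosine of the hour angle at sunrise/sunset; clamp for polar day/night
    cos_h = -math.tan(lat) * math.tan(decl)
    if cos_h <= -1.0:
        return 24.0   # midnight sun: the Sun never sets
    if cos_h >= 1.0:
        return 0.0    # polar night: the Sun never rises
    h = math.acos(cos_h)          # half the daylight arc, in radians
    return 24.0 * h / math.pi     # full arc converted to hours

# The Equator stays at 12 h in every season; 50 deg N varies strongly
print(round(daylight_hours(0, 172), 1))   # → 12.0 (near the June solstice)
print(round(daylight_hours(50, 172), 1))  # long summer day
print(round(daylight_hours(50, 355), 1))  # short winter day
```

The clamping of `cos_h` reproduces the polar extremes described below: inside the polar circles the expression leaves the range [−1, 1], signalling continuous daylight or continuous night.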
At the Equator
At the Equator, the daytime period always lasts about 12 hours, regardless of season. As viewed from the Equator, the Sun always rises and sets roughly vertically, following an apparent path close to perpendicular to the horizon.
From the March equinox to the September equinox, the Sun rises within 23.44° north of due east, and sets within 23.44° north of due west. From the September equinox to the March equinox, the Sun rises within 23.44° south of due east and sets within 23.44° south of due west. The Sun's path lies entirely in the northern half of the celestial sphere from the March equinox to the September equinox, but lies entirely in the southern half of the celestial sphere from the September equinox to the March equinox. On the equinoxes, the equatorial Sun culminates at the zenith, passing directly overhead at solar noon.
The fact that the equatorial Sun is always so close to the zenith at solar noon explains why the tropical zone contains the warmest regions on the planet overall. Additionally, the Equator sees the shortest sunrise or sunset because the Sun's path across the sky is so nearly perpendicular to the horizon. On the equinoxes, the solar disk takes only two minutes to traverse the horizon (from top to bottom at sunrise and from bottom to top at sunset).
In the tropics
The tropics occupy a zone of Earth's surface between 23.44° north and 23.44° south of the Equator. Within this zone, the Sun will pass almost directly overhead (or culminate) on at least one day per year. The line of 23.44° north latitude is called the Tropic of Cancer, because when it was named, the Sun passed overhead at this location at the time of year when it was near the constellation of Cancer. The equivalent line of south latitude is called the Tropic of Capricorn, for similar reasons. The sun enters and leaves each zodiacal constellation slightly later each year at the rate of about 1 day every 72 years. For more information, see precession of the equinoxes.
On the Tropical Circles, the Sun is directly overhead only once per year, on the corresponding solstice. At latitudes closer to the Equator and on the Equator itself, it will be overhead twice per year (on the equinoxes in the case of the Equator), leading to the Lahaina Noon or zero shadow day phenomenon. Outside the tropics, the Sun never passes directly overhead.
Around the poles
Around the poles, which coincide with the rotational axis of Earth where it passes through the surface, the seasonal variations in the length of daytime are extreme. Within 23.44° of latitude of the poles, there will be at least some days each year during which the Sun never goes below the horizon, and also some days when the Sun never rises above the horizon. The latter are slightly fewer than, but close in number to, the days in summer when the Sun does not set (for example, continuous daylight typically begins a few days before the spring equinox and extends a few days past the fall equinox). This excess of daylight over night is not unique to the poles: at any given time, slightly more than half of the Earth is in daylight. The 24 hours of summer daylight is known as the midnight sun, famous in some northern countries. To the north, the Arctic Circle marks this 23.44° boundary; to the south, the Antarctic Circle marks it. These boundaries correspond to 66.56° north or south latitude, respectively. Because the sky is still bright and stars cannot be seen while the Sun is less than 6 degrees below the horizon, 24-hour nights with stars visible all the time only happen poleward of 72°34' north or south latitude.
At and near the poles, the Sun never rises very high above the horizon, even in summer, which is one of the reasons these regions of the world are consistently cold in all seasons (others include the effect of albedo, the relatively increased reflection of solar radiation by snow and ice). Even at the summer solstice, when the Sun reaches its highest point above the horizon at noon, it is still only 23.44° above the horizon at the poles. Additionally, as one approaches the poles, the apparent path of the Sun through the sky each day diverges increasingly from the vertical. As summer approaches, sunrises and sunsets move more northerly in the north and more southerly in the south. At the poles, the path of the Sun is indeed a circle, roughly equidistant above the horizon for the entire duration of the daytime period on any given day. The circle gradually sinks below the horizon as winter approaches, and gradually rises above it as summer approaches. At the poles, apparent sunrise and sunset may last for several days.
At middle latitudes
At middle latitudes, far from both the Equator and the poles, variations in the length of daytime are moderate. In the higher middle latitudes where Montreal, Paris and Ushuaia are located, the difference in the length of the day from summer to winter can be very noticeable: the sky may still be lit at 10 pm in summer, but may be dark at 5 pm in winter. In the lower middle latitudes where southern California, Egypt and South Africa are located, the seasonal difference is smaller, but still results in approximately 4 hours difference in daylight between the winter and summer solstices. The difference becomes less pronounced the closer one gets to the equator. An approximation to the monthly change can be obtained from the rule of twelfths.
Variations in solar noon
The exact instant of solar noon, when the Sun reaches its highest point in the sky, varies with the seasons. This variation is called the equation of time; the magnitude of variation is about 30 minutes over the course of a year.
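A rough feel for the equation of time can be had from a common three-term empirical fit, accurate to about a minute. The Python sketch below is illustrative and not from the article; the coefficients are a standard engineering approximation.

```python
import math

def equation_of_time_minutes(day_of_year):
    """Approximate equation of time (offset of solar noon) in minutes,
    using a common empirical fit accurate to roughly one minute."""
    b = 2 * math.pi * (day_of_year - 81) / 365.0
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# The offset swings from roughly -14 minutes (mid-February) to about
# +16 minutes (early November), i.e. about 30 minutes of total
# variation over the year, matching the figure quoted above.
values = [equation_of_time_minutes(d) for d in range(1, 366)]
print(round(min(values)), round(max(values)))
```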
Differentiation rules
This is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus.
Elementary rules of differentiation
Unless otherwise stated, all functions are functions of real numbers (R) that return real values; although more generally, the formulae below apply wherever they are well defined — including the case of complex numbers (C).
Constant term rule
For any value of x, where c is a constant, if f is the constant function given by f(x) = c, then f'(x) = 0.
Proof
Let c be a real number and let f(x) = c. By the definition of the derivative,
f'(x) = lim[h → 0] (f(x + h) − f(x)) / h = lim[h → 0] (c − c) / h = lim[h → 0] 0 / h = 0.
This shows that the derivative of any constant function is 0.
Intuitive (geometric) explanation
The derivative of a function at a point is the slope of the line tangent to the curve at that point. The slope of a constant function is zero, because the tangent line to a constant function is horizontal and its angle is zero.
In other words, the value of the constant function, y, will not change as the value of x increases or decreases.
Differentiation is linear
For any functions f and g and any real numbers a and b, the derivative of the function h(x) = a·f(x) + b·g(x) with respect to x is:
h'(x) = a·f'(x) + b·g'(x)
In Leibniz's notation this is written as:
d(a·f + b·g)/dx = a·(df/dx) + b·(dg/dx)
Special cases include:
The constant factor rule: (a·f)' = a·f'
The sum rule: (f + g)' = f' + g'
The difference rule: (f − g)' = f' − g'
The product rule
For the functions f and g, the derivative of the function h(x) = f(x)·g(x) with respect to x is
h'(x) = f'(x)·g(x) + f(x)·g'(x)
In Leibniz's notation this is written
d(f·g)/dx = (df/dx)·g + f·(dg/dx)
The chain rule
The derivative of the function h(x) = f(g(x)) is
h'(x) = f'(g(x)) · g'(x)
In Leibniz's notation, with z = f(y) and y = g(x), this is written as:
(dz/dx) at x = (dz/dy) evaluated at y = g(x), multiplied by (dy/dx) at x,
often abridged to
dz/dx = (dz/dy) · (dy/dx)
Focusing on the notion of maps, with the differential being a map D, this is written in a more concise way as:
D(f ∘ g)(x) = Df(g(x)) ∘ Dg(x)
The inverse function rule
If the function f has an inverse function g, meaning that g(f(x)) = x and f(g(y)) = y, then
g'(y) = 1 / f'(g(y))
In Leibniz notation, this is written as
dx/dy = 1 / (dy/dx)
Power laws, polynomials, quotients, and reciprocals
The polynomial or elementary power rule
If f(x) = x^r, for any real number r ≠ 0, then
f'(x) = r·x^(r−1)
When r = 1 this becomes the special case that if f(x) = x, then f'(x) = 1.
Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial.
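The combination just described amounts to a simple recipe on coefficient lists. The Python sketch below is illustrative (the helper name is hypothetical): each coefficient is multiplied by its power, and the powers shift down by one.

```python
def differentiate_poly(coeffs):
    """Differentiate a polynomial given as [a0, a1, a2, ...], meaning
    a0 + a1*x + a2*x**2 + ...  The power rule handles each term, and
    the sum and constant-multiple rules justify doing it term by term.
    """
    # enumerate() pairs each coefficient with its power k, so k*a is
    # the new coefficient of x**(k-1); dropping index 0 shifts down.
    return [k * a for k, a in enumerate(coeffs)][1:] or [0]

# d/dx (5 + 3x + 2x^3) = 3 + 6x^2
print(differentiate_poly([5, 3, 0, 2]))  # → [3, 0, 6]
print(differentiate_poly([7]))           # → [0] (constant rule)
```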
The reciprocal rule
The derivative of h(x) = 1/f(x) for any (nonvanishing) function f is:
h'(x) = −f'(x) / f(x)²
wherever f is non-zero.
In Leibniz's notation, this is written
d(1/f)/dx = −(1/f²)·(df/dx)
The reciprocal rule can be derived either from the quotient rule, or from the combination of power rule and chain rule.
The quotient rule
If f and g are functions, then:
(f/g)' = (f'·g − g'·f) / g²
wherever g is nonzero.
This can be derived from the product rule and the reciprocal rule.
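The product and quotient rules can be sanity-checked numerically with a central finite difference. The following Python sketch is illustrative only; it compares both rules against numerical derivatives for f = sin and g = exp.

```python
import math

def numderiv(fn, x, h=1e-6):
    """Central finite difference: a numerical stand-in for the derivative."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

f, f_prime = math.sin, math.cos
g, g_prime = math.exp, math.exp   # exp is its own derivative
x = 0.7

# Product rule: (f*g)' = f'*g + f*g'
lhs = numderiv(lambda t: f(t) * g(t), x)
rhs = f_prime(x) * g(x) + f(x) * g_prime(x)
print(abs(lhs - rhs) < 1e-6)  # → True

# Quotient rule: (f/g)' = (f'*g - g'*f) / g**2
lhs = numderiv(lambda t: f(t) / g(t), x)
rhs = (f_prime(x) * g(x) - g_prime(x) * f(x)) / g(x) ** 2
print(abs(lhs - rhs) < 1e-6)  # → True
```

A central difference has error of order h², so the agreement to 1e-6 here is comfortably within the numerical noise floor.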
Generalized power rule
The elementary power rule generalizes considerably. The most general power rule is the functional power rule: for any functions f and g,
(f^g)' = f^g · (f'·(g/f) + g'·ln f),
wherever both sides are well defined.
Special cases
If f(x) = x^a, then f'(x) = a·x^(a−1) when a is any non-zero real number and x is positive.
The reciprocal rule may be derived as the special case where g(x) = −1.
Derivatives of exponential and logarithmic functions
d/dx (c^x) = c^x · ln c; this equation is true for all c, but the derivative for c < 0 yields a complex number.
d/dx (log_c x) = 1/(x · ln c); this equation is also true for all c, but yields a complex number if c < 0.
d/dx W(x) = W(x) / (x·(1 + W(x))), where W is the Lambert W function
Logarithmic derivatives
The logarithmic derivative is another way of stating the rule for differentiating the logarithm of a function (using the chain rule):
(ln f)' = f'/f
wherever f is positive.
Logarithmic differentiation is a technique which uses logarithms and their differentiation rules to simplify certain expressions before actually applying the derivative.
Logarithms can be used to remove exponents, convert products into sums, and convert division into subtraction — each of which may lead to a simplified expression for taking derivatives.
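As an example of logarithmic differentiation, for y = x^x taking logarithms gives ln y = x·ln x, so y'/y = ln x + 1 and hence y' = x^x·(ln x + 1). A quick numerical check in Python (an illustrative sketch, not from the article):

```python
import math

def xx(x):
    return x ** x

def xx_prime(x):
    """d/dx x**x via logarithmic differentiation:
    ln y = x ln x  =>  y'/y = ln x + 1  =>  y' = x**x * (ln x + 1)."""
    return x ** x * (math.log(x) + 1)

# Compare with a central finite difference at x = 2
x, h = 2.0, 1e-6
numeric = (xx(x + h) - xx(x - h)) / (2 * h)
print(abs(numeric - xx_prime(x)) < 1e-4)  # → True
```

Logarithmic differentiation sidesteps the fact that x^x fits neither the power rule (variable exponent) nor the exponential rule (variable base).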
Derivatives of trigonometric functions
The derivatives in the table above are for when the range of the inverse secant is [0, π] and when the range of the inverse cosecant is [−π/2, π/2].
It is common to additionally define an inverse tangent function with two arguments, arctan(y, x). Its value lies in the range (−π, π] and reflects the quadrant of the point (x, y). For the first and fourth quadrant (i.e. x > 0) one has arctan(y, x) = arctan(y/x). Its partial derivatives are
∂ arctan(y, x)/∂y = x/(x² + y²) and ∂ arctan(y, x)/∂x = −y/(x² + y²).
Derivatives of hyperbolic functions
See Hyperbolic functions for restrictions on these derivatives.
Derivatives of special functions
Gamma function
Γ'(x) = Γ(x)·ψ(x), with ψ(x) being the digamma function.
Riemann zeta function
Derivatives of integrals
Suppose that it is required to differentiate with respect to x the function
F(x) = ∫ from a(x) to b(x) of f(x, t) dt,
where the functions f(x, t) and ∂f(x, t)/∂x are both continuous in both t and x in some region of the (t, x) plane, including a(x) ≤ t ≤ b(x), x₀ ≤ x ≤ x₁, and the functions a(x) and b(x) are both continuous and both have continuous derivatives for x₀ ≤ x ≤ x₁. Then for x₀ ≤ x ≤ x₁:
F'(x) = f(x, b(x))·b'(x) − f(x, a(x))·a'(x) + ∫ from a(x) to b(x) of ∂f(x, t)/∂x dt.
This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus.
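The Leibniz integral rule can be checked on a small example. Taking f(x, t) = x·t, a(x) = 0 and b(x) = x gives F(x) = ∫₀ˣ x·t dt = x³/2, so F'(x) = 3x²/2; the rule reproduces this as the boundary term x² plus the interior term x²/2. Below is an illustrative Python sketch (not from the article) using exact rational arithmetic.

```python
from fractions import Fraction

def F_prime_via_leibniz(x):
    """Leibniz integral rule applied to f(x, t) = x*t, a(x) = 0, b(x) = x.

    F(x) = integral_0^x x*t dt = x**3 / 2, so F'(x) should be 3*x**2 / 2.
    """
    x = Fraction(x)
    boundary_upper = x * x * 1        # f(x, b(x)) * b'(x) = (x*x) * 1
    boundary_lower = Fraction(0)      # f(x, a(x)) * a'(x) = (x*0) * 0
    # interior: integral_0^x d/dx (x*t) dt = integral_0^x t dt = x**2 / 2
    interior = x * x / 2
    return boundary_upper - boundary_lower + interior

for x in (1, 2, 5):
    assert F_prime_via_leibniz(x) == Fraction(3 * x * x, 2)
print("Leibniz integral rule check passed")
```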
Derivatives to nth order
Some rules exist for computing the n-th derivative of functions, where n is a positive integer. These include:
Faà di Bruno's formula
If f and g are n-times differentiable, then
dⁿ/dxⁿ f(g(x)) = n! · Σ f^(r)(g(x)) · Π from m = 1 to n of (1/k_m!) · (g^(m)(x)/m!)^(k_m),
where r = k₁ + k₂ + ... + k_n and the sum is taken over the set of all non-negative integer solutions (k₁, ..., k_n) of the Diophantine equation k₁ + 2·k₂ + ... + n·k_n = n.
General Leibniz rule
If f and g are n-times differentiable, then
(f·g)^(n)(x) = Σ from k = 0 to n of (n choose k) · f^(n−k)(x) · g^(k)(x)
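The general Leibniz rule can be verified exactly on polynomials, where repeated differentiation is mechanical. The Python sketch below is illustrative (helper names are hypothetical); it compares the n-th derivative of a product computed directly with the Leibniz sum.

```python
from math import comb

def dpoly(c):
    """One derivative of a coefficient list [a0, a1, ...]."""
    return [k * a for k, a in enumerate(c)][1:] or [0]

def dn(c, n):
    """n-th derivative by repeated application of dpoly."""
    for _ in range(n):
        c = dpoly(c)
    return c

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polyadd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

f = [1, 2, 0, 4]   # 1 + 2x + 4x^3
g = [3, 0, 5]      # 3 + 5x^2
n = 3

# Left side: differentiate the product n times directly
direct = dn(polymul(f, g), n)

# Right side: sum over k of C(n, k) * f^(n-k) * g^(k)
leibniz = [0]
for k in range(n + 1):
    term = [comb(n, k) * c for c in polymul(dn(f, n - k), dn(g, k))]
    leibniz = polyadd(term, leibniz)

# Pad to equal length before comparing the coefficient lists
width = max(len(direct), len(leibniz))
print(direct + [0] * (width - len(direct)) ==
      leibniz + [0] * (width - len(leibniz)))  # → True
```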
Kleptomania
Kleptomania is the inability to resist the urge to steal items, usually for reasons other than personal use or financial gain. First described in 1816, kleptomania is classified in psychiatry as an impulse control disorder. Some of the main characteristics of the disorder suggest that kleptomania could be an obsessive-compulsive spectrum disorder, but it also shares similarities with addictive and mood disorders.
The disorder is frequently under-diagnosed and is regularly associated with other psychiatric disorders, particularly anxiety, eating disorders, alcohol and substance use. Patients with kleptomania are typically treated with therapies in other areas due to the comorbid grievances rather than issues directly related to kleptomania.
Over the last 100 years, a shift from psychotherapeutic to psychopharmacological interventions for kleptomania has occurred. Pharmacological treatments using selective serotonin reuptake inhibitors (SSRIs), mood stabilizers and opioid receptor antagonists, and other antidepressants along with cognitive behavioral therapy, have yielded positive results. However, there have also been reports of kleptomania induced by selective serotonin reuptake inhibitors (SSRIs).
Signs and symptoms
Some of the fundamental components of kleptomania include recurring intrusive thoughts, impotence to resist the compulsion to engage in stealing, and the release of internal pressure following the act. These symptoms suggest that kleptomania could be regarded as an obsessive-compulsive type of disorder.
People diagnosed with kleptomania often have other types of disorders involving mood, anxiety, eating, impulse control, and drug use. They also experience high levels of stress, guilt, and remorse, and secrecy accompanying the act of stealing. These signs are considered to either cause or intensify general comorbid disorders. The characteristics of the behaviors associated with stealing could result in other problems as well, including social isolation and substance use. The many types of other disorders frequently occurring along with kleptomania usually make clinical diagnosis uncertain.
There is a difference between ordinary theft and kleptomania: "ordinary theft (whether planned or impulsive) is deliberate and motivated by the usefulness of the object or its monetary worth," whereas with kleptomania, there "is the recurrent failure to resist impulses to steal items even though the items are not needed for personal use or for their monetary value."
Cause
Initial models of the development of kleptomania came from the field of psychoanalysis. These have been replaced by cognitive-behavioral models, which supplement biological ones based mostly on pharmacotherapy treatment studies.
Psychoanalytic and psychodynamic approach
Several explanations of the mechanics of kleptomania have been presented. A contemporary social approach proposes that kleptomania is an outcome of consumerism and the large quantity of commodities in society. Psychodynamic theories depend on a variety of points of view in defining the disorder. Psychoanalysts define the condition as an indication of a defense mechanism arising in the unconscious ego against anxiety, forbidden instincts or desires, unsettled conflict or forbidden sexual drives, dread of castration, sexual excitement, and sexual fulfillment and orgasm throughout the act of stealing. The psychoanalytic and psychodynamic approach to kleptomania provided the basis for prolonged psychoanalytic or psychodynamic psychotherapy as the core treatment method for a number of years. Like most psychiatric conditions, kleptomania was observed through the psychodynamic lens instead of being viewed as a biomedical disorder. However, the prevalence of the psychoanalytic approach also contributed to the growth of other approaches, particularly in the biological domain.
Many psychoanalytic theorists suggested that kleptomania is a person's attempt "to obtain symbolic compensation for an actual or anticipated loss", and feel that the key to understanding its etiology lies in the symbolic meaning of the stolen items. Drive theory was used to propose that the act of stealing is a defense mechanism which serves to modulate or keep undesirable feelings or emotions from being expressed. Some French psychiatrists suggest that kleptomaniacs may just want the item that they steal and the feeling they get from theft itself.
Cognitive-behavioral models
Cognitive-behavioral models have been replacing psychoanalytic models in describing the development of kleptomania. Cognitive-behavioral practitioners often conceptualize the disorders as being the result of operant conditioning, behavioral chaining, distorted cognitions, and poor coping mechanisms. Cognitive-behavioral models suggest that the behavior is positively reinforced after the person steals some items. If this individual experiences minimal or no negative consequences (punishment), then the likelihood that the behavior will reoccur is increased. As the behavior continues to occur, stronger antecedents or cues become contingently linked with it, in what ultimately becomes a powerful behavioral chain. According to cognitive-behavioral theory (CBT), both antecedents and consequences may either be in the environment or cognitions. For example, Kohn and Antonuccio (2002) describe a client's antecedent cognitions, which include thoughts such as "I’m smarter than others and can get away with it"; "they deserve it"; "I want to prove to myself that I can do it"; and "my family deserves to have better things". These thoughts were strong cues to stealing behaviors. All of these thoughts were precipitated by additional antecedents which were thoughts about family, financial, and work stressors or feelings of depression. "Maintaining" cognitions provided additional reinforcement for stealing behaviors and included feelings of vindication and pride, for example: "score one for the 'little guy' against the big corporations". Although those thoughts were often afterward accompanied by feelings of remorse, this came too late in the operant sequence to serve as a viable punisher. Eventually, individuals with kleptomania come to rely upon stealing as a way of coping with stressful situations and distressing feelings, which serve to further maintain the behavior and decrease the number of available alternative coping strategies.
Biological models
Biological models explaining the origins of kleptomania have been based mostly on pharmacotherapy treatment studies that used selective serotonin reuptake inhibitors (SSRIs), mood stabilizers, and opioid receptor antagonists.
Some pharmacotherapy studies have observed that opioid antagonists appear to reduce the urge to steal and mute the "rush" typically experienced immediately after stealing by some subjects with kleptomania. This would suggest that poor regulation of serotonin, dopamine, and/or natural opioids within the brain is to blame for kleptomania, linking it with impulse control and affective disorders.
An alternative explanation, also based on opioid antagonist studies, states that kleptomania is similar to the "self-medication" model, in which stealing stimulates the person's natural opioid system. "The opioid release 'soothes' the patients, treats their sadness, or reduces their anxiety. Thus, stealing is a mechanism to relieve oneself from a chronic state of hyperarousal, perhaps produced by prior stressful or traumatic events, and thereby modulate affective states."
Diagnosis
Disagreement surrounds the method by which kleptomania is considered and diagnosed. On one hand, some researchers believe that kleptomania is merely theft and dispute the suggestion that there are psychological mechanisms involved, while others observe kleptomania as part of a substance-related addiction. Yet others categorize kleptomania as a variation of an impulse control disorder, such as obsessive-compulsive disorder or eating disorders.
According to the Diagnostic and Statistical Manual of Mental Disorders fourth edition (DSM IV-TR), a frequent and widely used guide for the diagnosis of mental disorders, the following symptoms and characteristics are the diagnostic criteria for kleptomania:
repeated inability to defend against urges to steal things that are not essential for private use or for their economic value;
escalating sense of pressure immediately prior to performing the theft;
satisfaction, fulfillment or relief at the point of performing the theft;
the theft is not executed to convey antagonism or revenge, and is not in reaction to a delusion or a fantasy; and
the thieving is not better accounted for by behavior disorder, a manic episode, or antisocial personality disorder.
Skeptics have decried kleptomania as an invalid psychiatric concept exploited in legal defenses of wealthy female shoplifters. During the twentieth century, kleptomania was strongly linked with the increased prevalence of department stores, and "department store kleptomaniacs" were a widely held social stereotype that had political implications.
Comorbidity
Kleptomania seems to be linked with other psychiatric disorders, especially mood disorders, anxiety, eating disorders, and alcohol and substance use. The occurrence of stealing as a behavior in conjunction with eating disorders, particularly bulimia nervosa, is frequently taken as a sign of the severity of the eating disorder.
A likely connection between depression and kleptomania was reported as early as 1911. It has since been extensively established in clinical observations and available case reports. The mood disorder could come first or co-occur with the beginning of kleptomania. In advanced cases, depression may result in self-inflicted injury and could even lead to suicide. Some people have reported relief from depression or manic symptoms after theft.
It has been suggested that because kleptomania is linked to strong compulsive and impulsive qualities, it can be viewed as a variation of obsessive-compulsive spectrum disorders, together with pathological gambling, compulsive buying, pyromania, nailbiting and trichotillomania. This point achieves support from the unusually higher cases of obsessive-compulsive disorder (OCD; see below) in close relatives of patients with kleptomania.
Substance use disorder
Kleptomania and drug addictions seem to have central qualities in common, including:
recurring or compulsive participation in a behavior in spite of undesirable penalties;
weakened control over the disturbing behavior;
a need or desire condition before taking part in the problematic behavior; and
a positive pleasure-seeking condition throughout the act of the disturbing behavior.
Data from epidemiological studies additionally propose that there is an affiliation between kleptomania and substance use disorders, with high rates in a unidirectional manner. Phenomenological data maintain that there is a relationship between kleptomania and drug addictions. A higher percentage of cases of kleptomania has been noted in adolescents and young adults, and a lesser number of cases among older adults, which implies a natural history analogous to that seen in substance use disorders. Family history data also suggest a probable common genetic input to alcohol use and kleptomania. Substance use disorders are more common in relatives of persons with kleptomania than in the general population. Furthermore, pharmacological data (e.g., the probable efficacy of the opioid antagonist naltrexone in the treatment of both kleptomania and substance use disorders) could present additional support for a shared relationship between kleptomania and substance use disorders. Based on the idea that kleptomania and substance use disorders may share some etiological features, it could be concluded that kleptomania would respond positively to the same treatments. As a matter of fact, certain non-medical treatment methods that are successful in treating substance use are also helpful in treating kleptomania.
Obsessive-compulsive disorder
Kleptomania is frequently thought of as being a part of obsessive-compulsive disorder (OCD), since the irresistible and uncontrollable actions are similar to the frequently excessive, unnecessary, and unwanted rituals of OCD. Some individuals with kleptomania demonstrate hoarding symptoms that resemble those with OCD.
Prevalence rates between the two disorders do not demonstrate a strong relationship. Studies examining the comorbidity of OCD in subjects with kleptomania have inconsistent results, with some showing a relatively high co-occurrence (45%-60%) while others demonstrate low rates (0%-6.5%). Similarly, when rates of kleptomania have been examined in subjects with OCD, a relatively low co-occurrence was found (2.2%-5.9%).
Pyromania
Pyromania, another impulse disorder, has many ties to kleptomania. Many pyromaniacs set fires alongside petty stealing, which often appears similar to kleptomania.
Treatment
Although the disorder has been known to psychologists for a long time, the cause of kleptomania is still ambiguous. Therefore, a diverse range of therapeutic approaches have been introduced for its treatment. These treatments include: psychoanalytic oriented psychotherapy, behavioral therapy, and pharmacotherapy.
Behavioral and cognitive intervention
Cognitive-behavioural therapy (CBT) has largely replaced the psychoanalytic and dynamic approach in the treatment of kleptomania. Numerous behavioural approaches have been recommended as helpful according to several cases reported in the literature. They include: covert sensitisation by unpleasant images of nausea and vomiting, aversion therapy (for example, aversive holding of breath to achieve a slightly painful feeling every time a desire to steal or the act is imagined), and systematic desensitisation. In certain instances, several methods were combined, such as covert sensitisation along with exposure and response prevention. Even though the approaches used in CBT need more research and investigation in kleptomania, combining these methods with medication was shown to be more successful than drug treatment as the sole method of treatment.
Drug treatment
The phenomenological similarity and the suggested common basic biological dynamics of kleptomania and OCD, pathological gambling and trichotillomania gave rise to the theory that the similar groups of medications could be used in all these conditions. Consequently, the primary use of selective serotonin reuptake inhibitor (SSRI) group, which is a form of antidepressant, has been used in kleptomania and other impulse control disorders such as binge eating and OCD. Electroconvulsive therapy (ECT), lithium and valproic acid (sodium valproate) have been used as well.
The use of SSRIs is due to the assumption that the biological dynamics of these conditions derive from low levels of serotonin in brain synapses, and that the efficacy of this type of therapy will be relevant to kleptomania and to other comorbid conditions.
Opioid receptor antagonists are regarded as practical in lessening urge-related symptoms, which are a central part of impulse control disorders; for this reason, they are used in the treatment of substance use. This quality makes them helpful in treating kleptomania and impulse control disorders in general. The most frequently used drug is naltrexone, a long-acting competitive antagonist. Naltrexone acts mainly at μ-receptors, but also antagonises κ- and δ-receptors.
There have been no controlled studies of the psycho-pharmacological treatment of kleptomania. This may be a consequence of kleptomania being a rare phenomenon and of the difficulty of achieving a large enough sample. Facts about this issue come largely from case reports or from bits and pieces gathered from a comparatively small number of cases enclosed in a group series.
History
In the nineteenth century, French psychiatrists began to observe kleptomaniacal behavior, but were constrained by their approach. By 1890, a large body of case material on kleptomania had been developed. Hysteria, imbecility, cerebral defect, and menopause were advanced as theories to explain these seemingly nonsensical behaviors, and many linked kleptomania to immaturity, given the inclination of young children to take whatever they want. These French and German observations later became central to psychoanalytic explanations of kleptomania.
Etymology
The term kleptomania was derived from the Greek words κλέπτω (klepto) "to steal" and μανία (mania) "mad desire, compulsion". Its meaning roughly corresponds to "compulsion to steal" or "compulsive stealing".
First generation of psychoanalysis
In the early twentieth century, kleptomania was viewed more as a legal excuse for self-indulgent haut bourgeois ladies than a valid psychiatric ailment by French psychiatrists.
Sigmund Freud, the creator of the controversial theory of psychoanalysis, believed that the underlying dynamics of human behaviour were those of uncivilized savages, with impulses curbed by inhibitions acquired for social life. He did not believe human behaviour to be rational. He created a large theoretical corpus which his disciples applied to such psychological problems as kleptomania. In 1924, one of his followers, Wilhelm Stekel, read the case of a female kleptomaniac who was driven by suppressed sexual urges to take hold of "something forbidden, secretly". Stekel concluded that kleptomania was "suppressed and superseded sexual desire carried out through medium of a symbol or symbolic action. Every compulsion in psychic life is brought about by suppression".
Second generation of psychoanalysis
Fritz Wittels argued that kleptomaniacs were sexually underdeveloped people who felt deprived of love and had little experience with human sexual relationships; stealing was their sex life, giving them thrills so powerful that they did not want to be cured. Male kleptomaniacs, in his view, were homosexual or invariably effeminate.
A famous large-scale analysis of shoplifters in the United Kingdom ridiculed Stekel's notion of sexual symbolism and claimed that one out of five apprehended shoplifters was a "psychiatric" case.
New perspectives
Empirically based conceptual articles have argued that kleptomania is becoming more common than previously thought, and occurs more frequently among women than men. These ideas are new in recent history but echo those current in the mid to late nineteenth century.
In popular culture
Movies
Mary and Max (2009)
Klepto (2003)
Kleptomania (1993)
Series
Trinkets (2019)
Breaking Bad (Marie Schrader)(2008)
Books
Hotel 21 (2023)
Trinkets (2013)
Cupressus
Cupressus is one of several genera of evergreen conifers within the family Cupressaceae that have the common name cypress; for the others, see cypress. It is considered a polyphyletic group. Based on genetic and morphological analysis, the genus Cupressus is found in the subfamily Cupressoideae. The common name "cypress" comes via Old French from the Latin cupressus, which is the latinisation of the Greek κυπάρισσος (kypárissos).
Description
Cypresses are evergreen trees or large shrubs, exceptionally reaching up to 102 m tall in Cupressus austrotibetica (the second-tallest tree species on earth, after Sequoia sempervirens). The leaves are scale-like, 2–6 mm long, arranged in opposite decussate pairs, and persist for three to five years. On young plants up to two years old, the leaves are needle-like and 5–15 mm long. The cones are 8–40 mm long, globose or ovoid, with 4 to 14 scales arranged in opposite decussate pairs; they mature 18–24 months after pollination. The seeds are small, 4–7 mm long, with two narrow wings, one along each side of the seed.
Many of the species are adapted to forest fires, holding their seeds for many years in closed cones until the parent trees are killed by a fire; the seeds are then released to colonise the bare, burnt ground. In other species, the cones open at maturity to release the seeds.
Distribution
As currently treated, these cypresses are native to scattered localities in mainly warm temperate regions in the Northern Hemisphere, including northwest Africa, the Middle East, the Himalayas, southern China and northern Vietnam. As with other conifers, extensive cultivation has led to a wide variety of forms, sizes and colours, that are grown in parks and gardens throughout the world.
Cultivation
Many species of cypress are grown as decorative trees in parks and, in Asia, around temples; in some areas, the native distribution is hard to discern due to extensive cultivation. A few species are grown for their timber, which can be very durable. The fast-growing hybrid Leyland cypress (Cupressus × leylandii), much used in gardens, draws one of its parents from this genus (Cupressus macrocarpa, Monterey cypress); the other parent, Callitropsis nootkatensis (Nootka cypress), is also sometimes classified in this genus, or else in the separate genus Xanthocyparis, but in the past more usually in Chamaecyparis.
Cultural references
It was believed in the Hellenic culture that the cypress tree was sacred to the gods and it is now used as an emblem of grief.
The name of the genus comes from Cyparissus, a young man loved by Apollo, very attached to a deer which he ended up killing by mistake during a hunting trip. To ease the pain Apollo transformed the boy into a plant.
The association with mourning continued in Roman times and persists to the present day, partly for a practical reason: cypress roots grow straight down into the ground and spread only slightly laterally, so they do not damage burials.
Taxonomy
There has long been significant uncertainty about the New World members of Cupressus, with several studies recovering them as forming a distinct clade from the Old World members. A 2021 molecular study found Cupressus to be the sister genus to Juniperus, whereas the western members (classified in Callitropsis and Hesperocyparis) were found to be sister to Xanthocyparis.
Phylogeny
Species
The number of species recognised within this genus varies sharply, from 16 to 25 or more according to the authority followed, because most populations are small and isolated, and whether they should be accorded specific, subspecific or varietal rank is difficult to ascertain. Current tendencies are to reduce the number of recognised species; when a narrow species concept is adopted, the varieties indented in the list below may also be accepted as distinct species. | Biology and health sciences | Gymnosperms | null |
436136 | https://en.wikipedia.org/wiki/Hesperocyparis%20macrocarpa | Hesperocyparis macrocarpa | Hesperocyparis macrocarpa, also known as Cupressus macrocarpa or the Monterey cypress, is a coniferous tree and one of several species of cypress endemic to California.
The Monterey cypress is found naturally only on the Central Coast of California. Due to being a glacial relict, the natural distributional range of the species during modern times is confined to two small relict populations near Carmel, California, at Cypress Point in Pebble Beach and at Point Lobos. Historically during the peak of the last ice age, Monterey cypress would have likely comprised a much larger forest that extended much further north and south.
Description
Hesperocyparis macrocarpa is a medium-sized coniferous evergreen tree, which often becomes irregular and flat-topped as a result of the strong winds typical of its native area. It grows to heights of up to 40 meters (about 131 feet) in perfect growing conditions, and its trunk diameter can reach 2.5 meters (over 8 feet). The foliage grows in dense sprays which are bright green in color and release a deep lemony aroma when crushed. The leaves are scale-like, 2–5 mm long, and produced on rounded (not flattened) shoots; seedlings up to a year old have needle-like leaves 4–8 mm long.
The seed cones are globose to oblong, 20–40 mm long, with 6–14 scales, green at first, maturing brown about 20–24 months after pollination. The pollen cones are 3–5 mm long, and release their pollen in late winter or early spring. The Latin specific epithet macrocarpa means "with large fruit".
Because of the large trunk size some trees develop, people have assumed that individual H. macrocarpa trees may be up to 2,000 years old. However, the oldest age verified from physical evidence is 284 years. The renowned Californian botanist Willis Linn Jepson wrote that "the advertisement of [C. macrocarpa trees] in seaside literature as 1,000 to 2,000 years old does not ... rest upon any actual data, and probably represents a desire to minister to a popular craving for superlatives". Few trees survive beyond 100 years. As a counterpoint, many of the earliest introductions of the species into New Zealand, around 1860, still survive, and the major cause of mortality among these cultivated specimens is felling. One such example is the 160-year-old St. Barnabas Church tree in Stoke, Nelson, New Zealand.
Taxonomy
Hesperocyparis macrocarpa was given its first scientific description by the German botanist Karl Theodor Hartweg with the name Cupressus macrocarpa. Hartweg's trip to California coincided with the Mexican–American War. He observed in his report, "Under these circumstances I cannot venture far away from Monterey, nor is it advisable that I should do so, as I might fall in with a party of country people, who could not be persuaded that a person would come all the way from London to look after weeds, which in their opinion are not worth picking up, but might suppose that I have some political object in view; I, therefore, confine my excursions within a few miles of the town." In July 1846 he observed the Monterey cypress trees and named them, though his paper was not received in London until 10 May of the following year.
Along with other New World Cupressus species, it has recently been transferred to the genus Hesperocyparis, on genetic evidence that the New World Cupressus (NWC) are not very closely related to the Old World Cupressus (OWC) species.
Hesperocyparis macrocarpa is a paleoendemic, with fossilized remains discovered in Drakes Bay and Rancho La Brea evidencing a much larger extent in the past.
Phylogenetic analysis of nuclear DNA sequences and organismic data recover distinct lineages, with the NWC being sister to Juniperus or Juniperus and the OWC. However, chloroplast sequences sometimes place both OWC and NWC with a common ancestor, possibly due to ancient hybridization. Other more obvious morphological differences support their separation, such as the presence of 3 to 5 cotyledons in NWC, as opposed to 2 in Old World species, glaucous seed coats, and monomorphic leaves on ultimate branch segments.
Analysis of phylogenetic relationships show that the species is placed within the Macrocarpa clade, which diverged from the Arizonica clade, both within Hesperocyparis. The two clades are separated biogeographically by the Transverse Ranges, which forms a barrier to any north–south migration of most species within these clades.
Distribution
The two native cypress forest stands are protected, within Point Lobos State Natural Reserve and Del Monte Forest. The natural habitat is noted for its cool, moist summers, frequently enveloping the trees in sea fog.
This species has been widely planted outside its native range, particularly along the coasts of California and Oregon. Its European distribution includes Great Britain (including the Isle of Man and the Channel Islands), France, Ireland, Greece, Italy and Portugal. In New Zealand, plantings have naturalized, finding conditions there more favorable than in its native range. It has also been grown experimentally as a timber crop in Kenya.
The tree has been successfully planted in Sri Lanka, with a 130-year-old specimen on view at the Hakgala Botanical Garden in Nuwara Eliya.
Hesperocyparis macrocarpa is also grown in South Africa. For example, a copse has been planted to commemorate South African infantrymen who died in the Allied cause in Italy and North Africa during World War II. As in California, the Cape trees are gnarled and wind-sculpted.
Cultivation
Monterey cypress has been widely cultivated away from its native range, both elsewhere along the California coast, and in other areas with similar cool summer, mild winter oceanic climates. It was very early cultivated in the United Kingdom. In 1846 Karl Hartweg sent the Royal Horticultural Society seeds along with a report on his journeys in California. It is a popular private garden and public landscape tree in California. It is so widely planted in Golden Gate Park that the silhouette of the tree is sometimes printed as a symbol of the park.
When planted in areas with hot summers, for example in interior California away from the coastal fog belt, Monterey cypress has proved highly susceptible to cypress canker, caused by the fungus Seiridium cardinale, and rarely survives more than a few years. This disease is not a problem where summers are cool.
The foliage is slightly toxic to livestock and can cause miscarriages in cattle. Sawn logs are used by many craftspeople, some boat builders and small manufacturers, as a furniture structural material and a decorative wood because of its fine colours, though it must be preserved carefully to prevent the wood from splitting. It is also a fast, hot burning, albeit sparky (therefore not suited to open fires), firewood.
In Australasia
In Australia and New Zealand, Monterey cypress is most frequently grown as a windbreak tree on farms, usually in rows or shelter belts. It is also planted in New Zealand as an ornamental tree and, occasionally, as a timber tree. There, finding more favorable growing conditions than in its native range, and in the absence of many native pathogens, it often grows much larger, with trees recorded at over tall and in trunk diameter. One specimen – with a trunk diameter of more than – is considered to be the largest recorded single-stemmed specimen in the world. The timber of Monterey cypress was used for fence posts on New Zealand farms before electric fencing became popular.
Cultivars
A number of cultivars have been selected for garden use, including Goldcrest, with yellow-green, semi-juvenile foliage (with spreading scale-leaf tips) and Lutea with yellow-green foliage. Goldcrest has gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017).
Monterey cypress is one of the parents of the fast-growing cultivated hybrid Leyland cypress, Cupressus × leylandii, the other parent being Nootka cypress (Callitropsis nootkatensis).
Hesperocyparis macrocarpa cultivars grown in New Zealand are:
'Aurea Saligna'—long cascades of weeping, golden-yellow, thread-like foliage on a pyramidal tree
'Brunniana Aurea'—pillar or conical form with soft rich-golden foliage
'Gold Rocket'—narrow erect form with golden colouring, slow-growing
'Golden Pillar'—compact conical tree with dense yellow shoots and foliage
'Greenstead Magnificent'—dwarf form with blue-green foliage
'Lambertiana Aurea'—hardy upright form tolerating poor soil and climate conditions
Chemistry
Isocupressic acid, a labdane diterpenoid, is an abortifacient component of H. macrocarpa. Monoterpenes (α- and γ-terpinene and terpinolene) are constituents of the foliage volatile oil, whose main components are α-pinene (20.2%), sabinene (12.0%), p-cymene (7.0%) and terpinen-4-ol (29.6%). Unusual sesquiterpenes can be found in the foliage. Longiborneol (also known as juniperol or macrocarpol) can also be isolated from Monterey cypresses.
| Biology and health sciences | Cupressaceae | Plants |
436166 | https://en.wikipedia.org/wiki/Variable-frequency%20oscillator | Variable-frequency oscillator | A variable frequency oscillator (VFO) in electronics is an oscillator whose frequency can be tuned (i.e., varied) over some range. It is a necessary component in any tunable radio transmitter and in receivers that work by the superheterodyne principle. The oscillator controls the frequency to which the apparatus is tuned.
Purpose
In a simple superheterodyne receiver, the incoming radio frequency signal (at frequency f_RF) from the antenna is mixed with the VFO output signal tuned to f_LO, producing an intermediate frequency (IF) signal that can be processed downstream to extract the modulated information. Depending on the receiver design, the IF signal frequency is chosen to be either the sum of the two frequencies at the mixer inputs (up-conversion), f_IF = f_RF + f_LO, or more commonly, the difference frequency (down-conversion), f_IF = |f_RF − f_LO|.
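The mixing arithmetic above can be illustrated with a short sketch. The specific frequencies are illustrative assumptions only (a 455 kHz IF was a common choice in AM-era receivers), not values from any particular design:

```python
def mixer_products(f_rf: float, f_lo: float) -> tuple[float, float]:
    """Return the sum and difference products of an ideal mixer.

    A real mixer output also contains the original frequencies and
    harmonic combinations, which the IF filter must reject.
    """
    return f_rf + f_lo, abs(f_rf - f_lo)

# Illustrative example: a 14.200 MHz signal mixed with the VFO at
# 14.655 MHz yields a 455 kHz difference-frequency IF (down-conversion).
f_sum, f_if = mixer_products(14_200_000, 14_655_000)
```

Tuning the VFO while keeping the IF fixed is what lets the rest of the receiver stay narrowly tuned to one frequency.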
In addition to the desired IF signal and its unwanted image (the mixing product of opposite sign above), the mixer output will also contain the two original frequencies, f_RF and f_LO, and various harmonic combinations of the input signals. These undesired signals are rejected by the IF filter. If a double balanced mixer is employed, the input signals appearing at the mixer outputs are greatly attenuated, reducing the required complexity of the IF filter.
The advantage of using a VFO as a heterodyning oscillator is that only a small portion of the radio receiver (the sections before the mixer such as the preamplifier) need to have a wide bandwidth. The rest of the receiver can be finely tuned to the IF frequency.
In a direct-conversion receiver, the VFO is tuned to the same frequency as the incoming radio signal, so f_LO = f_RF and f_IF = 0 Hz. Demodulation takes place at baseband using low-pass filters and amplifiers.
In a radio frequency (RF) transmitter, VFOs are often used to tune the frequency of the output signal, often indirectly through a heterodyning process similar to that described above. Other uses include chirp generators for radar systems where the VFO is swept rapidly through a range of frequencies, timing signal generation for oscilloscopes and time domain reflectometers, and variable frequency audio generators used in musical instruments and audio test equipment.
Types
There are two main types of VFO in use: analog and digital.
Analog VFOs
An analog VFO is an electronic oscillator where the value of at least one of the passive components is adjustable under user control so as to alter its output frequency.
The passive component whose value is adjustable is usually a capacitor, but could be a variable inductor.
Tuning capacitor
The variable capacitor is a mechanical device in which the separation of a series of interleaved metal plates is physically altered to vary its capacitance. Adjustment of this capacitor is sometimes facilitated by a mechanical step-down gearbox to achieve fine tuning.
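The effect of varying the capacitance follows from the standard LC resonance formula f = 1/(2π√(LC)). The component values below are illustrative assumptions, not taken from any particular oscillator design:

```python
import math

def lc_resonant_freq(l_henry: float, c_farad: float) -> float:
    # Resonant frequency of the LC tank that sets the VFO output:
    # f = 1 / (2 * pi * sqrt(L * C))
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# Hypothetical values: with a 1 uH coil, swinging the tuning capacitor
# from 50 pF to 200 pF (a 4:1 range) halves the oscillation frequency,
# since f scales as 1/sqrt(C).
f_min_c = lc_resonant_freq(1e-6, 50e-12)   # ~22.5 MHz
f_max_c = lc_resonant_freq(1e-6, 200e-12)  # ~11.25 MHz
```

This inverse-square-root relationship is why a mechanical step-down gearbox helps: a small rotation changes the frequency considerably at the low-capacitance end of the range.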
Varactor
A reverse-biased semiconductor diode exhibits capacitance. Since the width of its non-conducting depletion region depends on the magnitude of the reverse bias voltage, this voltage can be used to control the junction capacitance. The varactor bias voltage may be generated in a number of ways, and the final design may need no significant moving parts.
Varactors have a number of disadvantages including temperature drift and aging, electronic noise, low Q factor and non-linearity.
Digital VFOs
Modern radio receivers and transmitters usually use some form of digital frequency synthesis to generate their VFO signal.
The advantages include smaller designs, lack of moving parts, the higher stability of set frequency reference oscillators, and the ease with which preset frequencies can be stored and manipulated in the digital computer that is usually embedded in the design in any case.
It is also possible for the radio to become extremely frequency-agile in that the control computer could alter the radio's tuned frequency many tens, thousands or even millions of times a second.
This capability allows communications receivers effectively to monitor many channels at once, perhaps using digital selective calling (DSC) techniques to decide when to open an audio output channel and alert users to incoming communications.
Pre-programmed frequency agility also forms the basis of some military radio encryption and stealth techniques.
Extreme frequency agility lies at the heart of spread spectrum techniques that have gained mainstream acceptance in computer wireless networking such as Wi-Fi.
There are disadvantages to digital synthesis such as the inability of a digital synthesiser to tune smoothly through all frequencies, but with the channelisation of many radio bands, this can also be seen as an advantage in that it prevents radios from operating in between two recognised channels.
Digital frequency synthesis relies on stable crystal controlled reference frequency sources. Crystal-controlled oscillators are more stable than inductively and capacitively controlled oscillators. Their disadvantage is that changing frequency (more than a small amount) requires changing the crystal, but frequency synthesizer techniques have made this unnecessary in modern designs.
Digital frequency synthesis
The electronic and digital techniques involved in this include:
Direct digital synthesis (DDS) Enough data points for a mathematical sine function are stored in digital memory. These are recalled at the right speed and fed to a digital-to-analog converter where the required sine wave is built up.
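The DDS scheme described above can be sketched in a few lines. The table size, sample rate, and helper names here are illustrative assumptions; real DDS hardware uses a binary phase accumulator feeding a DAC:

```python
import math

def make_dds(table_size: int = 256, sample_rate: float = 48_000):
    # One full cycle of a sine wave stored in a lookup table.
    table = [math.sin(2 * math.pi * i / table_size) for i in range(table_size)]

    def generate(freq_hz: float, n_samples: int) -> list[float]:
        # The phase increment per output sample sets the frequency:
        #   freq_hz = increment * sample_rate / table_size
        increment = freq_hz * table_size / sample_rate
        phase = 0.0
        out = []
        for _ in range(n_samples):
            out.append(table[int(phase) % table_size])
            phase += increment
        return out

    return generate

gen = make_dds()
samples = gen(1_000, 48)  # one cycle of a 1 kHz tone at 48 kHz sampling
```

Changing the output frequency is just a matter of changing the phase increment, which is why DDS tuning can be both fast and very fine-grained.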
Direct frequency synthesis Early channelized communication radios had multiple crystals - one for each channel on which they could operate. After a while this thinking was combined with the basic ideas of heterodyning and mixing described under purpose above. Multiple crystals can be mixed in various combinations to produce various output frequencies.
Phase locked loop (PLL) Using a varactor-controlled or voltage-controlled oscillator (VCO) (described above in varactor under analog VFO techniques) and a phase detector, a control-loop can be set up so that the VCO's output is frequency-locked to a crystal-controlled reference oscillator. The phase detector's comparison is made between the outputs of the two oscillators after frequency division by different divisors. Then by altering the frequency-division divisor(s) under computer control, a variety of actual (undivided) VCO output frequencies can be generated. The PLL technique dominates most radio VFO designs today.
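The divider arithmetic behind the PLL technique reduces to f_vco = f_ref × N / R for an integer-N synthesizer. The reference frequency and divisor values below are illustrative assumptions, not from any particular chip:

```python
def pll_output_freq(f_ref_hz: float, r_div: int, n_div: int) -> float:
    """Integer-N PLL: the reference is divided by R, the VCO output by N.

    The loop locks when f_ref / R == f_vco / N at the phase detector,
    so the VCO settles at f_vco = f_ref * N / R.
    """
    return f_ref_hz * n_div / r_div

# Example: a 10 MHz crystal reference with R = 400 gives a 25 kHz
# comparison frequency, so stepping N tunes the VCO in 25 kHz channels.
f1 = pll_output_freq(10_000_000, 400, 5_800)  # 145.000 MHz
f2 = pll_output_freq(10_000_000, 400, 5_801)  # 145.025 MHz
```

The comparison frequency f_ref/R sets the channel step, which is the "channelisation" trade-off mentioned earlier: smooth tuning is lost, but the output lands only on recognised channel frequencies.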
Performance
The quality metrics for a VFO include frequency stability, phase noise and spectral purity. All of these factors tend to be inversely proportional to the tuning circuit's Q factor. Since in general the tuning range is also inversely proportional to Q, these performance factors generally degrade as the VFO's frequency range is increased.
Stability
Stability is the measure of how far a VFO's output frequency drifts with time and temperature. To mitigate this problem, VFOs are generally "phase locked" to a stable reference oscillator. PLLs use negative feedback to correct for the frequency drift of the VFO allowing for both wide tuning range and good frequency stability.
Repeatability
Ideally, for the same control input to the VFO, the oscillator should generate exactly the same frequency. A change in the calibration of the VFO can change receiver tuning calibration; periodic re-alignment of a receiver may be needed. VFO's used as part of a phase-locked loop frequency synthesizer have less stringent requirements since the system is as stable as the crystal-controlled reference frequency.
Purity
A plot of a VFO's amplitude vs. frequency may show several peaks, probably harmonically related. Each of these peaks can potentially mix with some other incoming signal and produce a spurious response. These spurii (sometimes spelled spuriae) can result in increased noise or two signals detected where there should only be one. Additional components can be added to a VFO to suppress high-frequency parasitic oscillations, should these be present.
In a transmitter, these spurious signals are generated along with the one desired signal. Filtering may be required to ensure the transmitted signal meets regulations for bandwidth and spurious emissions.
Phase noise
When examined with very sensitive equipment, the pure sine-wave peak in a VFO's frequency graph will most likely turn out not to be sitting on a flat noise-floor. Slight random 'jitters' in the signal's timing will mean that the peak is sitting on 'skirts' of phase noise at frequencies either side of the desired one.
These are also troublesome in crowded bands. They allow through unwanted signals that are fairly close to the expected one, but because of the random quality of these phase-noise 'skirts', the signals are usually unintelligible, appearing just as extra noise in the received signal. The effect is that what should be a clean signal in a crowded band can appear to be a very noisy signal, because of the effects of strong signals nearby.
The effect of VFO phase noise on a transmitter is that random noise is actually transmitted either side of the required signal. Again, this must be avoided for legal reasons in many cases.
Frequency reference
Digital or digitally controlled oscillators typically rely on constant single frequency references, which can be made to a higher standard than semiconductor and LC circuit-based alternatives. Most commonly a quartz crystal based oscillator is used, although in high accuracy applications such as TDMA cellular networks, atomic clocks such as the Rubidium standard are as of 2018 also common.
Because of the stability of the reference used, digital oscillators themselves tend to be more stable and more repeatable in the long term. This in part explains their huge popularity in low-cost and computer-controlled VFOs. In the shorter term the imperfections introduced by digital frequency division and multiplication (jitter), and the susceptibility of the common quartz standard to acoustic shocks, temperature variation, aging, and even radiation, limit the applicability of a naïve digital oscillator.
This is why higher-end VFOs, such as RF transmitters locked to atomic time, tend to combine multiple different references, and in complex ways. Some references, like rubidium or caesium clocks, provide higher long-term stability, while others, like hydrogen masers, yield lower short-term phase noise. Lower-frequency (and so lower-cost) oscillators phase-locked to a digitally divided version of the master clock then deliver the eventual VFO output, smoothing out the noise induced by the division algorithms. Such an arrangement can give all of the longer-term stability and repeatability of an exact reference, the benefits of exact digital frequency selection, and good short-term stability, imparted even onto an arbitrary-frequency analogue waveform: the best of all worlds.
| Technology | Functional circuits | null |
436251 | https://en.wikipedia.org/wiki/Bactrian%20camel | Bactrian camel | The Bactrian camel (Camelus bactrianus), also known as the Mongolian camel, domestic Bactrian camel or two-humped camel, is a large camel native to the steppes of Central Asia. It has two humps on its back, in contrast to the single-humped dromedary. Its population of 2 million exists mainly in the domesticated form. Their name comes from the ancient historical region of Bactria.
Domesticated Bactrian camels have served as pack animals in inner Asia since ancient times. With its tolerance for cold, drought, and high altitudes, it enabled the travel of caravans on the Silk Road. Bactrian camels, whether domesticated or feral, are a separate species from the wild Bactrian camel, which is the only truly wild (as opposed to feral) species of camelid in the Old World. Domestic Bactrian camels do not descend from wild Bactrian camels, with the two species having split around 1 million years ago.
Taxonomy
The Bactrian camel shares the genus Camelus with the dromedary (C. dromedarius) and the wild Bactrian camel (C. ferus). The Bactrian camel belongs to the family Camelidae. The ancient Greek philosopher Aristotle was the first European to describe the camels: in his 4th century BCE History of Animals, he identified the one-humped Arabian camel and the two-humped Bactrian camel. The Bactrian camel was given its current binomial name Camelus bactrianus by Swedish zoologist Carl Linnaeus in his 1758 publication Systema Naturae.
Though sharing a closer common ancestor with it than with the dromedary, the domestic Bactrian camel does not descend from the wild Bactrian camel, with the two species having diverged hundreds of thousands of years ago, with their mitochondrial genomes estimated to have diverged around 1 million years ago. Genetic evidence suggests that both Bactrian camel species are closely related to the extinct giant camel species Camelus knoblochi which became extinct around 20,000 years ago, which is equidistant from both living Bactrian camel species.
The Bactrian camel and the dromedary often interbreed to produce fertile offspring. Where the ranges of the two species overlap, such as in northern Punjab, Iran and Afghanistan, the phenotypic differences between them tend to decrease as a result of extensive crossbreeding between them. The fertility of their hybrid has given rise to speculation that the Bactrian camel and the dromedary should be merged into a single species with two varieties. However, a 1994 analysis of the mitochondrial cytochrome b gene revealed that the species display 10.3% divergence in their sequences.
Description
The Bactrian camel is the largest mammal in its native range and is the largest living camel while being shorter at the shoulder than the dromedary. Shoulder height is from with the overall height ranging from , head-and-body length is , and the tail length is . At the top of the humps, the average height is .
Body mass can range from , with males weighing around , and females around . Its long, wooly coat varies in colour from dark brown to sandy beige. A mane and beard of long hair occurs on the neck and throat, with hairs measuring up to long.
The shaggy winter coat is shed extremely rapidly, with huge sections peeling off at once, appearing as if sloppily shorn. The two humps on the back are composed of fat (not water as is sometimes thought). The face is typical of a camelid, being long and somewhat triangular, with a split upper lip. The long eyelashes, along with the sealable nostrils, help to keep out dust in the frequent sandstorms which occur in their natural range. The two broad toes on each foot have undivided soles and are able to spread widely as an adaptation to walking on sand. The feet are very tough, as befits an animal of extreme environments.
Natural habitat
These camels are migratory, and their habitat ranges from rocky mountain massifs to flat steppe, arid desert, (mostly the Gobi Desert), stony plains and sand dunes. Conditions are extremely harsh – vegetation is sparse, water sources are limited and temperatures are extreme. The coat of the Bactrian camel can withstand cold as low as in winter to in summer. The camels' distribution is linked to the availability of water, with large groups congregating near rivers after rain or at the foot of the mountains, where water can be obtained from springs in the summer months, and in the form of snow during the winter.
Life history
Bactrian camels are exceptionally adept at withstanding wide variations in temperature, from freezing cold to blistering heat. They have a remarkable ability to go without water for months at a time, but when water is available they may drink up to 57 liters at once. When well fed, the humps are plump and erect, but as resources decline, the humps shrink and lean to the side. When moving faster than a walking speed, they pace, stepping forward with both legs on the same side (as opposed to trotting on alternate diagonals, as done by most other quadrupeds). Speeds of up to have been recorded, but they rarely move this fast. Bactrian camels are also said to be good swimmers. The sense of sight is well developed and the sense of smell is extremely good. The lifespan of Bactrian camels is estimated at up to 50 years, more often 20 to 40 in captivity.
Diet
Bactrian camels are diurnal, sleeping in the open at night and foraging for food during the day. They are primarily herbivorous. With tough mouths that can withstand sharp objects such as thorns, they are able to eat plants that are dry, prickly, salty or bitter, and can ingest virtually any kind of vegetation. When other nutrient sources are not available, these camels may feed on carcasses, gnawing on bones, skin, or various different kinds of flesh. In more extreme conditions, they may eat any material they find, which has included rope, sandals, and even tents. Their ability to feed on a wide range of foods allows them to live in areas with sparse vegetation. The first time food is swallowed, it is not fully chewed. The partly masticated food (called cud) goes into the stomach and later is brought back up for further chewing.
Bactrian camels belong to a fairly small group of animals that regularly eat snow to meet their water needs. Animals living above the snowline may have to do this, as snow and ice can be the only forms of water during winter, and by doing so, their range is greatly enlarged. The latent heat of fusion of snow and ice is large compared with the heat capacity of liquid water, forcing the animals to eat only small amounts at a time.
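The scale of that energy cost can be checked with standard physical constants: melting 1 kg of snow at 0 °C absorbs roughly as much heat as warming 1 kg of liquid water by 80 °C.

```python
# Standard physical constants (approximate):
latent_heat_fusion = 334_000   # J per kg to melt ice at 0 degrees C
specific_heat_water = 4_186    # J per kg per degree C for liquid water

# Melting 1 kg of snow costs as much body heat as warming 1 kg of
# already-liquid water by about 80 degrees C:
equivalent_warming = latent_heat_fusion / specific_heat_water
```

This large heat draw on the animal's body is why snow-eating has to happen in small amounts rather than all at once.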
Reproduction
Bactrian camels are induced ovulators – they ovulate after insemination (insertion of semen into the vagina); the seminal plasma, not the spermatozoa, induces ovulation. Ovulation occurs in 87% of females after insemination: 66% ovulate within 36 hours and the rest by 48 hours (the same as natural mating). The least amount of semen required to elicit ovulation is about 1.0 ml.
Males during mating time are often quite violent and may bite, spit, or attempt to sit on other male camels. The age of sexual maturity varies, but is usually reached at 3–5 years. Gestation lasts around 13 months. One or occasionally two calves are produced, and the female can give birth to a new calf every other year. Young Bactrian camels are precocial, being able to stand and run shortly after birth, and are fairly large at an average birth weight of . They are nursed for about 1.5 years. The young calf stays with its mother for three to five years, until it reaches sexual maturity, and often helps raise subsequent generations for those years. Wild camels sometimes breed with domesticated or feral camels.
Genome
A genome for C. bactrianus ferus has been produced using next-generation sequencing.
Several effective population size studies have been carried out. They show several bottlenecks in both wild and domesticated Bactrians over the past 350,000 years.
Relationship to humans
The Bactrian camel was domesticated around 4,500 BCE. The dromedary is believed to have been domesticated between 4000 BCE and 2000 BCE in Arabia. As pack animals, these ungulates are virtually unsurpassed, able to carry at a rate of per day, or over a period of four days. The species was a mainstay of transportation on the Silk Road. Furthermore, Bactrian camels are frequently ridden, especially in desertified areas. In ancient Sindh, for example, two-humped Bactrian camels were initially used by the rich for riding. The camel was later brought to other areas such as Balochistan and Iran for the same purpose.
Bactrian camels have been the focus of artwork throughout history. For example, westerners from the Tarim Basin and elsewhere were depicted in numerous ceramic figurines of the Chinese Tang dynasty (618–907).
United States imports
Bactrian camels were imported to the U.S. several times in the mid- to late 19th century, both by the U.S. military and by merchants and miners, looking for pack animals sturdier and hardier than horses and mules. Although the camels met these needs, the United States Camel Corps was never considered much of a success. Having brought two shipments of fewer than 100 camels to the U.S., plans were made to import another 1,000, but the US Civil War interrupted this. Most surviving camels of these endeavors, both military and private, were merely turned loose to survive in the wild. As a result, small feral herds of Bactrian camels existed during the late 19th century in the southwest deserts of the United States.
Documentaries
The Story of the Weeping Camel is a 2003 Mongolian documentary/story about a family of nomadic shepherds trying to get a white calf accepted by his mother, which rejected him after a difficult birth.
Military use
The Indian Army uses these camels to patrol in Ladakh. It was concluded that after carrying out trials and doing a comparative study with a single-humped camel brought from Rajasthan that the double-humped camel is better suited for the task at hand. Colonel Manoj Batra, a veterinary officer of the Indian Army, stated that the double-humped camel "are best suited for these conditions. They can carry loads of at more than which is much more than the ponies that are being used as of now. They can survive without water for at least 72 hours."
Gallery
Cownose ray
The cownose ray (Rhinoptera bonasus) is a species of Batoidea found throughout a large part of the western Atlantic and Caribbean, from New England to southern Brazil (the East Atlantic populations are now generally considered a separate species, the Lusitanian cownose ray (R. marginata)). These rays also belong to the order Myliobatiformes, a group shared by bat rays, manta rays, and eagle rays.
Cownose rays prefer to live in shallower, coastal waters or estuaries. Size, lifespan, and maturity differ between male and female rays. The ray has a distinct head shape, with two lobes at the front resembling a cow's nose. Cownose rays can live between 16 and 21 years, depending on sex. Rays feed upon organisms with harder shells, such as clams, crustaceans, or mollusks. They are migratory, moving south in the winter and north in the summer, and are known to occupy the Chesapeake Bay in the summer months.
In 2019, the species was listed as vulnerable on the IUCN Red List. The species has been subjected to overfishing due to the perceived threat of overpopulation in the Chesapeake Bay. There are not many conservation strategies or efforts for cownose rays.
Taxonomy
The genus name Rhinoptera is named for the Ancient Greek words for nose and wing. The species name bonasus comes from the Ancient Greek for bison.
Description
A cownose ray is typically brown-backed with a slightly white or yellow belly. Although its coloration is not particularly distinctive, its shape is easily recognizable. It has a broad head with wide-set eyes and a pair of distinctive lobes on its subrostral fin. It also has a set of dental plates designed for crushing clams and oyster shells. Male rays often reach about in width, while females typically reach about in width. The cownose ray is often mistaken for a shark by beach-goers because the tips of the ray's fins stick out of the water, resembling the dorsal fin of a shark.
When threatened, the cownose ray can use the barb at the base of its tail to defend itself. A cownose ray has a spine with a toxin, close to the ray's body. This spine has teeth lining its lateral edges and is coated with a weak venom that causes symptoms similar to those of a bee sting.
Habitat and distribution
Cownose rays are migratory, social creatures that reside along the east coast of the United States and Brazil, as well as in the Gulf of Mexico. They prefer to live in near-coastal waters and estuarine ecosystems. Because of the areas they occupy, cownose rays can tolerate a wide range of salinities, giving them the potential to occupy a wider range of habitats if one area becomes too crowded and competition for resources is high. Cownose rays are abundant in the Chesapeake Bay, migrating to the area for mating and nursery purposes, typically in late spring and summer. Rays are also typically spotted near the surface of the water.
Behavior
Diet and feeding
The cownose ray has a durophagous diet, meaning it feeds on hard-shelled organisms such as mollusks and crustaceans, though it prefers bivalves such as scallops and clams, whose shells are softer. The cownose ray tends to feed either in the early morning or in the late afternoon, when the waves are calm and visibility is higher than at midday. Feeding occurs in the benthic zone, at the bottom of the water column.
The rays capture their prey through suction and the opening and closing of their jaws. Because of the type of prey cownose rays consume, their jaws must be able to handle hard-shelled organisms: they are extremely robust, with teeth whose hardness is comparable to that of cement. Their cephalic lobes also assist with capturing and handling prey by pushing it toward the mouth.
Predation
The cownose ray sits fairly high on the food chain and as a result has only a few natural predators, including cobia, hammerhead sharks, and humans who fish for them.
Reproduction and lifespan
Cownose rays breed from April through October. Rays will not reach a mature age until they are roughly 70% of the way to their maximum size. Females reach maturity between ages 7–8, while males reach maturity between ages 6–7. The lifespan of the cownose ray varies by sex; the oldest female ray that has been recorded was 21, and the oldest male ray was 18, which were both observed in the Chesapeake Bay.
Cownose rays are ovoviviparous, meaning that the embryo grows within its mother until it is ready to hatch. Rays have a longer gestation period due to their K-selected species attributes. The length of gestation is believed to last between 11 and 12 months, and at full term, the offspring are born live, exiting tail first.
Migration
Rays often travel and migrate in large schools sorted by size and sex. Their migration pattern consists of moving north in late spring and south in late fall. Much of what is known about their migration comes from studies in the Chesapeake Bay. Male and female rays arrive in the Bay in late spring and leave in the fall. While occupying the Chesapeake Bay, female rays and their pups live in the estuarine waters. Males have been observed leaving the Bay earlier than the females to reach a second feeding ground; the reason for this longer migration route is not fully known. One hypothesis is that males exit the Bay to reduce competition for resources such as food and shelter.
Threats and conservation
The cownose ray is currently listed as vulnerable by the IUCN Red List due to extensive overfishing and commercial fishing. The overfishing is due to the perception that rays destroy oyster beds meant for the shellfish industry.
The trophic cascade in the northwest Atlantic Ocean has been cited to link cownose ray overpopulation to the decline in large coastal sharks, thereby depleting commercially valuable bivalve populations; however, there is little evidence to support this hypothesis. Campaigns such as "Save the Bay, Eat a Ray" in the Chesapeake Bay used these claims to promote a fishery for the rays in hopes of preserving the Bay, which can be detrimental to the species. Cownose rays mature late and have long gestation periods, the traits of a K-selected species. This makes them vulnerable and sensitive to overfishing, and their populations cannot easily recover from such events. Even though rays have been used as a scapegoat to explain the decline in bivalves, some studies have found that cownose rays do not consume a great deal of oysters or clams. Other studies have found that much of the shellfish prey the cownose ray consumes is determined by shell size, so it has been suggested that oyster growers protect their shellfish until the shells reach a certain size.
There are few conservation strategies or efforts for cownose rays, beyond a ban on cownose ray killing contests in the state of Maryland.
Relationship to humans
Risk to humans
Stingrays, including the cownose ray, pose a low to moderate risk to humans. Rays lash their tails when threatened, posing a risk of being whipped, and can use their barb as a weapon to sting an aggressor. A sting from a cownose ray causes a very painful wound that requires medical attention. While the sting is not usually fatal, it can be if the victim is stung in the abdomen. There is also a risk associated with eating meat from the animal that has not been prepared correctly: Shigella may be acquired from eating cownose ray meat contaminated with the bacteria. The bacterium causes shigellosis, which can result in dysentery; symptoms include diarrhea, pain, fever, and possible dehydration.
Aquariums
Cownose rays can be seen in many public aquaria worldwide and are often featured in special 'touch tanks' where visitors can reach into a wide but shallow pool containing the fish, which have often had their barbs pinched or taken off (they eventually regrow, similar to human nails), making them safe enough to touch.
The following aquariums and zoos are known to have touch tanks featuring cownose rays (alone or with other fish):
US
Adventure Aquarium in Camden, New Jersey
Albuquerque Aquarium in Albuquerque, New Mexico
Audubon Aquarium in New Orleans, Louisiana
Aquarium of Boise in Boise, Idaho
Arizona-Sonora Desert Museum in Tucson, Arizona
Atlantic City Aquarium in Atlantic City, New Jersey
Aquarium of the Pacific in Long Beach, California
Butterfly House and Aquarium in Sioux Falls, South Dakota
Brookfield Zoo in Chicago, Illinois
California Academy of Sciences in San Francisco, California
Calvert Marine Museum in Solomons, Maryland
Children's Aquarium at Fair Park in Dallas, Texas
Clearwater Marine Aquarium in Clearwater, Florida
Columbus Zoo and Aquarium in Powell, Ohio
Downtown Aquarium, Denver in Denver, Colorado
The Florida Aquarium in Tampa, Florida
Fort Wayne Children's Zoo in Fort Wayne, Indiana
Fresno Chaffee Zoo in Fresno, California
Georgia Aquarium in Atlanta, Georgia
Gulf World Marine Park in Panama City Beach, Florida
Henry Doorly Zoo in Omaha, Nebraska
IMAG History & Science Center in Fort Myers, Florida
Indianapolis Zoo in Indianapolis, Indiana
Jacksonville Zoo and Gardens in Jacksonville, Florida
Kansas City Zoo in Kansas City, Missouri
Living Shores Aquarium in Glen, New Hampshire
Long Island Aquarium and Exhibition Center in Riverhead, New York
Lowry Park Zoo in Tampa, Florida
Marine Science Center in Ponce Inlet, Florida
Maritime Aquarium in Norwalk, Connecticut
Memphis Zoo and Aquarium in Memphis, Tennessee
Mississippi Aquarium in Gulfport, Mississippi
Mote Marine Laboratory in Sarasota, Florida
Mystic Aquarium in Mystic, Connecticut
National Mississippi River Museum & Aquarium in Dubuque, Iowa
The New England Aquarium in Boston, Massachusetts
New York Aquarium in Brooklyn, New York
Newport Aquarium in Newport, Kentucky
North Carolina Aquarium at Pine Knoll Shores in Emerald Isle, North Carolina
Ocean Adventures in Gulfport, Mississippi
OdySea Aquarium in Scottsdale, Arizona
Oklahoma City Zoo and Botanical Garden in Oklahoma City, Oklahoma
Phoenix Zoo in Phoenix, Arizona
Rooster Cogburn Ostrich Ranch in Picacho, Arizona
Ripley's Aquarium of Myrtle Beach in Myrtle Beach, South Carolina
Ripley’s Aquarium of the Smokies in Gatlinburg, Tennessee
Saint Louis Zoo in St. Louis, Missouri
San Antonio Aquarium in San Antonio, Texas
SeaWorld Orlando in Orlando, Florida
Shedd Aquarium in Chicago, Illinois
Shreveport Aquarium in Shreveport, Louisiana
South Carolina Aquarium, Charleston South Carolina
Tennessee Aquarium in Chattanooga, Tennessee
Texas State Aquarium in Corpus Christi, Texas
Toledo Zoo in Toledo, Ohio
Tropicana Field in St. Petersburg, Florida (The Rays Touch Tank)
Turtle Back Zoo in West Orange, New Jersey
Wonders of Wildlife Museum & Aquarium in Springfield, Missouri
ViaAquarium in Rotterdam, New York
Virginia Aquarium in Virginia Beach, Virginia
Greensboro Science Center in Greensboro, North Carolina
Canada
Aquarium of Quebec in Quebec City
Granby Zoo in Granby
Ripley's Aquarium of Canada in Toronto, Ontario
The Vancouver Aquarium in Vancouver, British Columbia
Assiniboine Park Zoo in Winnipeg, Manitoba
Compressed-air energy storage
Compressed-air energy storage (CAES) is a way to store energy for later use using compressed air. At a utility scale, energy generated during periods of low demand can be released during peak load periods.
The first utility-scale CAES project was the Huntorf power plant in Elsfleth, Germany, which is still operational. The Huntorf plant was initially developed as a load balancer for fossil-fuel-generated electricity, but the global shift towards renewable energy has renewed interest in CAES systems to help highly intermittent energy sources like photovoltaics and wind satisfy fluctuating electricity demands.
One ongoing challenge in large-scale design is the management of thermal energy, since compressing air causes an unwanted temperature increase that not only reduces operational efficiency but can also lead to damage. The main difference between various architectures lies in their thermal engineering. Small-scale systems, by contrast, have long been used to propel mine locomotives. Compared with traditional batteries, CAES systems can store energy for longer periods and require less upkeep.
Types
Compression of air creates heat; the air is warmer after compression. Expansion absorbs heat, so if no extra heat is added, the air will be much colder after expansion. If the heat generated during compression can be stored and used during expansion, the efficiency of the storage improves considerably. There are several ways in which a CAES system can deal with this heat: air storage can be adiabatic, diabatic, isothermal, or near-isothermal.
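The scale of the temperature rise can be estimated from the ideal-gas relation for reversible adiabatic compression. This is a minimal sketch; the inlet state (300 K, 0.1 MPa), the 7 MPa target, and the single-stage assumption are illustrative, not taken from the text:

```python
import math

def adiabatic_outlet_temp(t_in_k: float, pressure_ratio: float, gamma: float = 1.4) -> float:
    """Ideal-gas temperature after reversible adiabatic compression:
    T2 = T1 * (p2/p1)**((gamma-1)/gamma), with gamma = 1.4 for air."""
    return t_in_k * pressure_ratio ** ((gamma - 1.0) / gamma)

# Compressing ambient air (300 K, 0.1 MPa) to 7 MPa in a single adiabatic stage:
t_out = adiabatic_outlet_temp(300.0, 7.0e6 / 0.1e6)
print(f"{t_out:.0f} K")  # ≈ 1010 K
```

An outlet temperature near 1000 K from a single stage illustrates why practical compressors are staged with intercooling and why thermal management dominates CAES design.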
Adiabatic
Adiabatic storage retains the heat produced by compression and returns it to the air as it is expanded to generate power. This is the subject of ongoing study, with no utility-scale plants as of 2015. The theoretical efficiency of adiabatic storage approaches 100% with perfect insulation, but in practice round-trip efficiency is expected to be about 70%. Heat can be stored in a solid such as concrete or stone, or in a fluid such as hot oil (up to 300 °C) or molten salt solutions (600 °C). Storing the heat in hot water may yield an efficiency around 65%.
Packed beds have been proposed as thermal storage units for adiabatic systems. A study numerically simulated an adiabatic compressed air energy storage system using packed bed thermal energy storage. The efficiency of the simulated system under continuous operation was calculated to be between 70.5% and 71%.
Advancements in adiabatic CAES involve the development of high-efficiency thermal energy storage systems that capture and reuse the heat generated during compression. This innovation has led to system efficiencies exceeding 70%, significantly higher than those of traditional diabatic systems.
Diabatic
Diabatic storage dissipates much of the heat of compression with intercoolers (thus approaching isothermal compression) into the atmosphere as waste, essentially discarding the energy used to perform the work of compression. Upon removal from storage, the temperature of the compressed air indicates how much stored energy remains in it. Consequently, if the air temperature is too low for the energy recovery process, the air must be substantially re-heated prior to expansion in the turbine to power a generator. This reheating can be accomplished with a natural-gas-fired burner for utility-grade storage or with a heated metal mass. As recovery is often most needed when renewable sources are quiescent, fuel must be burned to make up for the wasted heat. This degrades the efficiency of the storage-recovery cycle. While this approach is relatively simple, the burning of fuel adds to the cost of the recovered electrical energy and compromises the ecological benefits associated with most renewable energy sources. Nevertheless, this is thus far the only system that has been implemented commercially.
The McIntosh, Alabama, CAES plant requires 2.5 MJ of electricity and 1.2 MJ lower heating value (LHV) of gas for each MJ of energy output, corresponding to an energy recovery efficiency of about 27%. A General Electric 7FA 2x1 combined cycle plant, one of the most efficient natural gas plants in operation, uses 1.85 MJ (LHV) of gas per MJ generated, a 54% thermal efficiency.
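The quoted efficiency figures follow directly from the stated energy inputs; a quick sketch of the arithmetic:

```python
# McIntosh CAES: 2.5 MJ of electricity plus 1.2 MJ (LHV) of gas per MJ delivered.
electric_in_mj = 2.5
gas_in_mj = 1.2
caes_efficiency = 1.0 / (electric_in_mj + gas_in_mj)

# GE 7FA combined cycle: 1.85 MJ (LHV) of gas per MJ generated.
ccgt_efficiency = 1.0 / 1.85

print(f"McIntosh CAES energy recovery: {caes_efficiency:.0%}")   # ≈ 27%
print(f"Combined-cycle thermal efficiency: {ccgt_efficiency:.0%}")  # ≈ 54%
```

Note the two figures are not directly comparable: the CAES number charges the plant for both stored electricity and fuel, while the combined-cycle number counts fuel only.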
To improve the efficiency of diabatic CAES systems, modern designs incorporate heat recovery units that capture waste heat during compression, thereby reducing energy losses and enhancing overall performance.
Isothermal
Isothermal compression and expansion approaches attempt to maintain operating temperature by constant heat exchange to the environment. In a reciprocating compressor, this can be achieved by using a finned piston and low cycle speeds. Current challenges in effective heat exchangers mean that they are only practical for low power levels. The theoretical efficiency of isothermal energy storage approaches 100% for perfect heat transfer to the environment. In practice, neither of these perfect thermodynamic cycles is obtainable, as some heat losses are unavoidable, leading to a near-isothermal process. Recent developments in isothermal CAES focus on advanced thermal management techniques and materials that maintain constant air temperatures during compression and expansion, minimizing energy losses and improving system efficiency.
Near-isothermal
Near-isothermal compression (and expansion) is a process in which a gas is compressed in very close proximity to a large incompressible thermal mass such as a heat-absorbing and -releasing structure (HARS) or a water spray. A HARS is usually made up of a series of parallel fins. As the gas is compressed, the heat of compression is rapidly transferred to the thermal mass, so the gas temperature is stabilized. An external cooling circuit is then used to maintain the temperature of the thermal mass. The isothermal efficiency (Z) is a measure of where the process lies between an adiabatic and isothermal process. If the efficiency is 0%, then it is totally adiabatic; with an efficiency of 100%, it is totally isothermal. Typically with a near-isothermal process, an isothermal efficiency of 90–95% can be expected.
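To give a feel for what the isothermal efficiency bounds imply for compression work, the sketch below evaluates the two ideal-gas limits and blends them linearly. The linear blend, the pressure ratio of 70, and γ = 1.4 are illustrative assumptions for this sketch, not a standard definition of Z:

```python
import math

def compression_work_per_p1v1(pressure_ratio: float, z: float, gamma: float = 1.4) -> float:
    """Specific compression work in units of p1*V1, linearly interpolated
    between the adiabatic (z=0) and isothermal (z=1) ideal-gas limits.
    The linear blend is an illustrative model only."""
    w_iso = math.log(pressure_ratio)
    w_adia = gamma / (gamma - 1.0) * (pressure_ratio ** ((gamma - 1.0) / gamma) - 1.0)
    return z * w_iso + (1.0 - z) * w_adia

r = 70.0  # e.g. compressing 0.1 MPa ambient air to 7 MPa
print(compression_work_per_p1v1(r, z=1.0))   # ≈ 4.25, isothermal limit
print(compression_work_per_p1v1(r, z=0.0))   # ≈ 8.28, adiabatic limit
print(compression_work_per_p1v1(r, z=0.93))  # near-isothermal, between the two
```

At this pressure ratio the adiabatic limit costs almost twice the isothermal work, which is why a 90–95% isothermal efficiency is worth pursuing.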
Hybrid CAES systems
Hybrid Compressed Air Energy Storage (H-CAES) systems integrate renewable energy sources, such as wind or solar power, with traditional CAES technology. This integration allows for the storage of excess renewable energy generated during periods of low demand, which can be released during peak demand to enhance grid stability and reduce reliance on fossil fuels. For instance, the Apex CAES Plant in Texas combines wind energy with CAES to provide a consistent energy output, addressing the intermittency of renewable energy sources.
Other
One implementation of isothermal CAES uses high-, medium-, and low-pressure pistons in series. Each stage is followed by an airblast venturi pump that draws ambient air over an air-to-air (or air-to-seawater) heat exchanger between each expansion stage. Early compressed-air torpedo designs used a similar approach, substituting seawater for air. The venturi warms the exhaust of the preceding stage and admits this preheated air to the following stage. This approach was widely adopted in various compressed-air vehicles such as H. K. Porter, Inc.'s mining locomotives and trams. Here, the heat of compression is effectively stored in the atmosphere (or sea) and returned later on.
Compressors and expanders
Compression can be done with electrically-powered turbo-compressors and expansion with turbo-expanders or air engines driving electrical generators to produce electricity.
Storage
Air storage vessels vary in the thermodynamic conditions of the storage and on the technology used:
Constant volume storage (solution-mined caverns, above-ground vessels, aquifers, automotive applications, etc.)
Constant pressure storage (underwater pressure vessels, hybrid pumped hydro / compressed air storage)
Constant-volume storage
This storage system uses a chamber with fixed boundaries to store large amounts of air; thermodynamically, it is therefore a constant-volume, variable-pressure system. This causes some operational problems for the compressors and turbines, so the pressure variations have to be kept below a certain limit, as do the stresses induced on the storage vessels.
The storage vessel is often a cavern created by solution mining (salt is dissolved in water for extraction) or by using an abandoned mine; use of porous and permeable rock formations (rocks that have interconnected holes, through which liquid or air can pass), such as those in which reservoirs of natural gas are found, has also been studied.
In some cases, an above-ground pipeline was tested as a storage system, giving some good results. Obviously, the cost of the system is higher, but it can be placed wherever the designer chooses, whereas an underground system needs some particular geologic formations (salt domes, aquifers, depleted gas fields, etc.).
Constant-pressure storage
In this case, the storage vessel is kept at constant pressure, while the gas is contained in a variable-volume vessel. Many types of storage vessels have been proposed, generally relying on liquid displacement to achieve isobaric operation. In such cases, the storage vessel is positioned hundreds of meters below ground level, and the hydrostatic pressure (head) of the water column above the storage vessel maintains the pressure at the desired level.
This configuration allows:
Improvement of the energy density of the storage system because all the air contained can be used (the pressure is constant in all charge conditions, full or empty, so the turbine has no problem exploiting it, while with constant-volume systems, if the pressure goes below a safety limit, then the system needs to stop).
Removal of the requirement of throttling prior to the expansion.
Avoidance of mixing of heat at different temperatures in the Thermal Energy Storage system, which leads to irreversibility.
Improvement of the efficiency of the turbomachinery, which will work under constant-inlet conditions.
Use of various geographic locations for the positioning of the CAES plant (coastal lines, floating platforms, etc.).
On the other hand, the cost of this storage system is higher due to the need to position the storage vessel on the bottom of the chosen water reservoir (often the ocean) and due to the cost of the vessel itself.
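The depth at which a water column maintains a given storage pressure follows from the hydrostatic relation p = ρgh. The seawater density and the 7 MPa target pressure in this sketch are illustrative assumptions:

```python
RHO_SEAWATER = 1025.0  # kg/m^3, assumed seawater density
G = 9.81               # m/s^2, gravitational acceleration

def head_pressure_pa(depth_m: float) -> float:
    """Hydrostatic pressure (Pa) of the water column above a submerged vessel."""
    return RHO_SEAWATER * G * depth_m

# Depth at which the head alone supplies about 7 MPa of storage pressure:
depth = 7.0e6 / (RHO_SEAWATER * G)
print(f"{depth:.0f} m")  # ≈ 696 m
```

The roughly 700 m figure illustrates why constant-pressure designs call for vessels "hundreds of meters below ground level" or on the seabed.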
A different approach consists of burying a large bag under several meters of sand instead of submerging it in water.
Plants operate on a peak-shaving daily cycle, charging at night and discharging during the day. Heating the compressed air using natural gas or geothermal heat to increase the amount of energy being extracted has been studied by the Pacific Northwest National Laboratory.
Compressed-air energy storage can also be employed on a smaller scale, such as in air cars and air-driven locomotives, and can use high-strength (e.g., carbon-fiber) air-storage tanks. In order to retain the energy stored in compressed air, the tank should be thermally isolated from the environment; otherwise, the energy stored will escape in the form of heat, because compressing air raises its temperature.
Environmental impact
CAES systems are often considered an environmentally friendly alternative to other large-scale energy storage technologies due to their reliance on naturally occurring resources, such as salt caverns for air storage and ambient air as the working medium. Unlike lithium-ion batteries, which require the extraction of finite resources such as lithium and cobalt, CAES has a comparatively small environmental footprint over its lifecycle.
However, the construction of CAES facilities presents unique challenges. Underground air storage requires geological formations such as salt domes, which are geographically limited. Inappropriate siting or mismanagement during construction can lead to disruptions in local ecosystems, land subsidence, or groundwater contamination.
On the positive side, CAES systems integrated with renewable energy sources contribute to a significant reduction in greenhouse gas emissions by enabling the storage and dispatch of clean energy during peak demand. Additionally, repurposing depleted natural gas fields or other geological formations for air storage can mitigate environmental impacts and extend the usefulness of existing infrastructure.
Economic considerations
The cost of implementing CAES systems depends heavily on the geological conditions of the site, the scale of the facility, and the type of CAES process used (adiabatic, diabatic, or isothermal). Initial capital expenditures are significant, often ranging from $500 to $1,200 per kW for large-scale systems. These costs primarily include the development of underground storage caverns, compression and expansion equipment, and thermal energy storage units (for advanced systems).
Despite the high upfront costs, CAES facilities have long operational lifespans, often exceeding 30 years, with low maintenance and operational costs compared to lithium-ion battery storage systems, which require periodic replacements. This long-term cost efficiency makes CAES particularly attractive for electric utility companies and grid operators.
Policy and regulation
Market trends suggest growing interest in CAES technology due to increasing renewable energy integration and the need for grid-scale energy storage. Government incentives and declining costs of advanced components, such as high-efficiency compressors and turbines, are further enhancing the economic feasibility of CAES.
Government policies and regulatory frameworks are critical in determining the pace of CAES adoption and development. Countries like Germany and the United States have implemented various incentives, including tax credits and grants, to promote energy storage technologies. For instance, the U.S. Department of Energy's Energy Storage Grand Challenge includes CAES as a key focus area for research and development funding.
One of the significant regulatory hurdles for CAES is the permitting process for underground air storage facilities. Environmental impact assessments, land use approvals, and safety standards for high-pressure storage systems can delay or increase costs for CAES projects. For example, projects sited near urban areas often face additional scrutiny due to concerns about noise pollution, air quality, and potential risks associated with high-pressure air storage.
Internationally, efforts are underway to standardize the design, operation, and safety protocols for CAES systems. Organizations like the International Energy Agency (IEA) and regional bodies such as the European Union have been instrumental in developing frameworks to support the integration of CAES into modern energy grids. As renewable energy adoption accelerates, policies aimed at addressing intermittency challenges will likely prioritize grid-scale solutions like CAES.
History
Citywide compressed air energy systems for delivering mechanical power directly via compressed air have been built since 1870. Cities such as Paris, France; Birmingham, England; Dresden, Rixdorf, and Offenbach, Germany; and Buenos Aires, Argentina, installed such systems. Victor Popp constructed the first systems to power clocks by sending a pulse of air every minute to change their pointer arms. They quickly evolved to deliver power to homes and industries. As of 1896, the Paris system had 2.2 MW of generation distributed at 550 kPa in 50 km of air pipes for motors in light and heavy industry. Usage was measured in cubic meters. The systems were the main source of house-delivered energy in those days and also powered the machines of dentists, seamstresses, printing facilities, and bakeries.
The first utility-scale diabatic compressed-air energy storage project was the 290-megawatt Huntorf plant opened in 1978 in Germany using a salt dome cavern with a capacity of and a 42% efficiency.
A plant that could store up to (and produce up to for 26 hours) was built in McIntosh, Alabama in 1991. The Alabama facility's $65 million cost equals $590 per kW of power capacity and about $23 per kW⋅h of storage capacity. It uses a solution-mined salt cavern to store air at up to . Although the compression phase is approximately 82% efficient, the expansion phase requires the combustion of natural gas at one-third the rate of a gas turbine producing the same amount of electricity at 54% efficiency.
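The plant ratings implied by these cost figures can be checked by simple division. The MW and kWh values below are inferred from the quoted costs (the explicit ratings are elided in the text), so they are a consistency check rather than sourced figures:

```python
# McIntosh figures quoted above.
total_cost = 65e6    # USD
cost_per_kw = 590    # USD per kW of power capacity
cost_per_kwh = 23    # USD per kWh of storage capacity

power_kw = total_cost / cost_per_kw      # implied power capacity
energy_kwh = total_cost / cost_per_kwh   # implied storage capacity
hours = energy_kwh / power_kw            # implied discharge duration

print(f"{power_kw/1000:.0f} MW, {energy_kwh/1e6:.1f} GWh, {hours:.0f} h")
```

The implied duration of about 26 hours matches the "for 26 hours" production figure stated in the text, so the three cost numbers are mutually consistent.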
In 2012, General Compression completed construction of a two-megawatt near-isothermal project in Gaines County, Texas, the world's third such project. The project uses no fuel. It appears to have stopped operating in 2016.
A 60MW, 300MW⋅h facility with 60% efficiency opened in Jiangsu, China, using a salt cavern (2022).
A 2.5MW, 4MW⋅h compressed closed-cycle facility started operating in Sardinia, Italy (2022).
In 2022, Zhangjiakou connected the world's first 100MW storage system to the grid in north China. It uses supercritical thermal storage, supercritical heat exchange, and high-load compression and expansion technologies. The plant can store 400MW⋅h with 70.4% efficiency. Construction of a 350MW, 1.4GW⋅h salt cavern project started in Shandong at a cost of $208 million, operating in 2024 with 64% efficiency, and construction of a four-hour, 700MW, 2.8GW⋅h facility started in China in 2024.
Largest CAES facilities
Projects
In 2009, the US Department of Energy awarded $24.9 million in matching funds for phase one of a 300MW, $356 million Pacific Gas and Electric Company installation using a saline porous rock formation being developed near Bakersfield in Kern County, California. The goals of the project were to build and validate an advanced design.
In 2010, the US Department of Energy provided $29.4 million in funding to conduct preliminary work on a 150-MW salt-based project being developed by Iberdrola USA in Watkins Glen, New York. The goal was to incorporate smart grid technology to balance renewable intermittent energy sources.
The first adiabatic project, a 200-megawatt facility called ADELE, was planned for construction in Germany (2013) with a target of 70% efficiency by using air at 100 bars of pressure. This project was delayed for undisclosed reasons until at least 2016.
Storelectric Ltd planned to build a 40-MW 100% renewable energy pilot plant in Cheshire, UK, with 800 MWh of storage capacity (2017).
Hydrostor completed the first commercial A-CAES system in Goderich, Ontario, supplying service with 2.2MW / 10MWh storage to the Ontario Grid (2019). It was the first A-CAES system to achieve commercial operation in decades.
The European-Union-funded RICAS (adiabatic) project in Austria was to use crushed rock to store heat from the compression process to improve efficiency (2020). The system was expected to achieve 70–80% efficiency.
Apex planned a plant for Anderson County, Texas, to go online in 2016. This project has been delayed until at least 2020.
Canadian company Hydrostor planned to build four A-CAES plants in Toronto, Goderich, Angas, and Rosamond (2020). Some included partial heat storage in water, improving efficiency to 65%.
As of 2022, the Gem project at Rosamond in Kern County, California, was planned to provide 500 MW / 4,000 MWh of storage. The Pecho project in San Luis Obispo, California, was planned to be 400 MW / 3,200 MWh. The Broken Hill project in New South Wales, Australia was 200 MW / 1,600 MWh.
In 2023, Alliant Energy announced plans to construct a 200-MWh compressed-air facility based on the Sardinia facility in Columbia County, Wisconsin. It will be the first of its kind in the United States.
Compressed air may be stored in undersea caves in Northern Ireland.
Storage thermodynamics
In order to achieve a near-thermodynamically-reversible process so that most of the energy is saved in the system and can be retrieved, and losses are kept negligible, a near-reversible isothermal process or an isentropic process is desired.
Isothermal storage
In an isothermal compression process, the gas in the system is kept at a constant temperature throughout. This necessarily requires an exchange of heat with the gas; otherwise, the temperature would rise during charging and drop during discharge. This heat exchange can be achieved by heat exchangers (intercooling) between subsequent stages in the compressor, regulator, and tank. To avoid wasted energy, the intercoolers must be optimized for high heat transfer and low pressure drop. Smaller compressors can approximate isothermal compression even without intercooling, due to the relatively high ratio of surface area to volume of the compression chamber and the resulting improvement in heat dissipation from the compressor body itself.
When one obtains perfect isothermal storage (and discharge), the process is said to be "reversible". This requires that the heat transfer between the surroundings and the gas occur over an infinitesimally small temperature difference. In that case, there is no exergy loss in the heat transfer process, and so the compression work can be completely recovered as expansion work: 100% storage efficiency. However, in practice, there is always a temperature difference in any heat transfer process, and so all practical energy storage obtains efficiencies lower than 100%.
To estimate the compression/expansion work in an isothermal process, it may be assumed that the compressed air obeys the ideal gas law:
pV = nRT = constant.
For a process from an initial state A to a final state B, with absolute temperature T constant, one finds the work required for compression (negative) or done by the expansion (positive) to be
W(A→B) = ∫[V_A to V_B] p dV = nRT ln(V_B/V_A) = nRT ln(p_A/p_B),
where p_A V_A = p_B V_B = nRT, and so ln(V_B/V_A) = ln(p_A/p_B).
Here p is the absolute pressure, V_A is the (initially unknown) volume of gas compressed, V_B is the volume of the vessel, n is the amount of substance of gas (mol), and R is the ideal gas constant.
If there is a constant pressure outside of the vessel, which is equal to the starting pressure p_A, the positive work of the outer pressure reduces the exploitable energy (negative value). This adds a term to the equation above:
W(A→B) = p_B V_B ln(p_A/p_B) + (p_B − p_A) V_B.
Example
How much energy can be stored in a 1 m3 storage vessel at a pressure of 7.0 MPa (70 bar), if the ambient pressure is 0.1 MPa (1 bar)? In this case, the process work is
W = p_B V_B ln(p_A/p_B) + (p_B − p_A) V_B
= 7.0 MPa × 1 m3 × ln(0.1 MPa / 7.0 MPa) + (7.0 MPa − 0.1 MPa) × 1 m3 = −22.8 MJ.
The negative sign means that work is done on the gas by the surroundings. Process irreversibilities (such as in heat transfer) will result in less energy being recovered from the expansion process than is required for the compression process. If the environment is at a constant temperature, for example, then the thermal resistance in the intercoolers will mean that the compression occurs at a temperature somewhat higher than the ambient temperature, and the expansion will occur at a temperature somewhat lower than the ambient temperature. So a perfect isothermal storage system is impossible to achieve.
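As a hedged numerical sketch (not part of the source), the isothermal work expression above can be checked directly; the function name and SI-unit convention are illustrative assumptions:

```python
import math

def isothermal_storage_work(p_store, p_amb, v_vessel):
    """Isothermal process work (J) for a vessel of volume v_vessel (m^3) charged
    to p_store (Pa) against a constant ambient pressure p_amb (Pa).
    Negative means work is done on the gas during charging."""
    return p_store * v_vessel * math.log(p_amb / p_store) + (p_store - p_amb) * v_vessel

# Worked example from the text: 1 m^3 vessel at 7.0 MPa, ambient 0.1 MPa
w = isothermal_storage_work(7.0e6, 0.1e6, 1.0)
print(round(w / 1e6, 1))  # → -22.8 (MJ), matching the example
```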
Adiabatic (isentropic) storage
An adiabatic process is one where there is no heat transfer between the fluid and the surroundings: the system is insulated against heat transfer. If the process is furthermore internally reversible (frictionless, to the ideal limit), then it will additionally be isentropic.
An adiabatic storage system does away with the intercooling during the compression process and simply allows the gas to heat up during compression and likewise cool down during expansion. This is attractive since the energy losses associated with the heat transfer are avoided, but the downside is that the storage vessel must be insulated against heat loss. It should also be mentioned that real compressors and turbines are not isentropic, but instead have an isentropic efficiency of around 85%. The result is that round-trip storage efficiency for adiabatic systems is also considerably less than perfect.
Large storage system thermodynamics
Energy storage systems often use large caverns. This is the preferred system design due to the very large volume and thus the large quantity of energy that can be stored with only a small pressure change. The gas is compressed adiabatically with little temperature change (approaching a reversible isothermal system) and heat loss (approaching an isentropic system). This advantage is in addition to the low cost of constructing the gas storage system, using the underground walls to assist in containing the pressure. The cavern space can be insulated to improve efficiency.
Undersea insulated airbags that have similar thermodynamic properties to large cavern storage have been suggested.
Vehicle applications
Practical constraints in transportation
In order to use air storage in vehicles or aircraft for practical land or air transportation, the energy storage system must be compact and lightweight. Energy density and specific energy are the engineering terms that define these desired qualities.
Specific energy, energy density, and efficiency
As explained in the thermodynamics of the gas storage section above, compressing air heats it, and expansion cools it. Therefore, practical air engines require heat exchangers in order to avoid excessively high or low temperatures, and even so do not reach ideal constant-temperature conditions or ideal thermal insulation.
Nevertheless, as stated above, it is useful to describe the maximum energy storable using the isothermal case, which works out to about 100 kJ/m3 × ln(p_B/p_A) per cubic meter of ambient air compressed from atmospheric pressure p_A to storage pressure p_B.
Thus if 1.0 m3 of air from the atmosphere is very slowly compressed into a 5 L bottle at 20 MPa (200 bar), then the potential energy stored is 530 kJ. A highly efficient air motor can transfer this into kinetic energy if it runs very slowly and manages to expand the air from its initial 20 MPa pressure down to 100 kPa (bottle completely "empty" at atmospheric pressure). Achieving high efficiency is a technical challenge both due to heat loss to the ambient and to unrecoverable internal gas heat. If the bottle above is emptied only down to 1 MPa, then the extractable energy is about 300 kJ at the motor shaft.
A standard 20 MPa, 5 L steel bottle has a mass of 7.5 kg; a superior one, about 5 kg. High-tensile-strength fibers such as carbon fiber or Kevlar can yield bottles below 2 kg in this size, consistent with the legal safety codes. One cubic meter of air has a mass of 1.204 kg at 20 °C and atmospheric pressure. Thus, theoretical specific energies range from roughly 70 kJ/kg at the motor shaft for a plain steel bottle to 180 kJ/kg for an advanced fiber-wound one, whereas practical achievable specific energies for the same containers would be from 40 to 100 kJ/kg.
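The bottle figures above can be sketched numerically. This is a hedged illustration using the isothermal maximum (variable names are assumptions; real-motor losses are ignored):

```python
import math

# Isothermal maximum energy per m^3 of ambient air: p_amb * ln(p_store / p_amb)
p_amb, p_store = 0.1e6, 20e6                      # Pa: atmospheric and full-bottle pressure
energy = p_amb * 1.0 * math.log(p_store / p_amb)  # J, for 1 m^3 of ambient air
print(round(energy / 1e3))                        # → 530 (kJ), as in the text

# Specific energy at the motor shaft, per mass of the 7.5 kg steel bottle
steel_bottle_kg = 7.5
print(round(energy / steel_bottle_kg / 1e3))      # → 71 kJ/kg ("roughly 70" in the text)
```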
Safety
As with most technologies, compressed air has safety concerns, mainly catastrophic tank rupture. Safety regulations make this a rare occurrence at the cost of higher weight and additional safety features such as pressure relief valves. Regulations may limit the legal working pressure to less than 40% of the rupture pressure for steel bottles (for a safety factor of 2.5) and less than 20% for fiber-wound bottles (safety factor 5). Commercial designs adopt the ISO 11439 standard. High-pressure bottles are fairly strong so that they generally do not rupture in vehicle crashes.
Comparison with batteries
Advanced fiber-reinforced bottles are comparable to the rechargeable lead–acid battery in terms of energy density. Batteries provide nearly-constant voltage over their entire charge level, whereas the pressure varies greatly while using a pressure vessel from full to empty. It is technically challenging to design air engines to maintain high efficiency and sufficient power over a wide range of pressures. Compressed air can transfer power at very high flux rates, which meets the principal acceleration and deceleration objectives of transportation systems, particularly for hybrid vehicles.
Compressed air systems have advantages over conventional batteries, including longer lifetimes of pressure vessels and lower material toxicity. Newer battery designs such as those based on lithium iron phosphate chemistry suffer from neither of these problems. Compressed air costs are potentially lower; however, advanced pressure vessels are costly to develop and safety-test and at present are more expensive than mass-produced batteries.
As with electric storage technology, compressed air is only as "clean" as the source of the energy that it stores. Life cycle assessment addresses the question of overall emissions from a given energy storage technology combined with a given mix of generation on a power grid.
Engine
A pneumatic motor or compressed-air engine uses the expansion of compressed air to drive the pistons of an engine, turn the axle, or to drive a turbine.
The following methods can increase efficiency:
A continuous expansion turbine at high efficiency
Multiple expansion stages
Use of waste heat, notably in a hybrid heat engine design
Use of environmental heat
A highly efficient arrangement uses high-, medium-, and low-pressure pistons in series, with each stage followed by an airblast venturi that draws ambient air over an air-to-air heat exchanger. This warms the exhaust of the preceding stage and admits the preheated air to the following stage. The only exhaust gas from each stage is cold air, well below ambient temperature; this cold air may be used for air conditioning in a car.
Additional heat can be supplied by burning fuel, as in 1904 for the Whitehead torpedo. This improves the range and speed available for a given tank volume at the cost of the additional fuel.
Cars
Since about 1990, several companies have claimed to be developing compressed-air cars, but none is available. Typically, the main claimed advantages are no roadside pollution, low cost, use of cooking oil for lubrication, and integrated air conditioning.
The time required to refill a depleted tank is important for vehicle applications. "Volume transfer" moves pre-compressed air from a stationary tank to the vehicle tank almost instantaneously. Alternatively, a stationary or on-board compressor can compress air on demand, possibly requiring several hours.
Ships
Large marine diesel engines have started using compressed air, typically stored in large bottles between 20 and 30 bar, acting directly on the pistons via special starting valves to turn the crankshaft prior to beginning fuel injection. This arrangement is more compact and cheaper than an electric starter motor would be at such scales, and it can supply the necessary burst of extremely high power without placing a prohibitive load on the ship's electrical generators and distribution system. Compressed air is commonly also used, at lower pressures, to control the engine, to act as the spring force acting on the cylinder exhaust valves, and to operate other auxiliary systems and power tools on board, sometimes including pneumatic PID controllers. One advantage of this approach is that, in the event of an electrical blackout, ship systems powered by stored compressed air can continue functioning uninterrupted, and generators can be restarted without an electrical supply. Another is that pneumatic tools can be used in commonly-wet environments without the risk of electric shock.
Hybrid vehicles
While the air storage system offers a relatively low power density and vehicle range, its high efficiency is attractive for hybrid vehicles that use a conventional internal combustion engine as the main power source. The air storage can be used for regenerative braking and to optimize the cycle of the piston engine, which is not equally efficient at all power/RPM levels.
Bosch and PSA Peugeot Citroën have developed a hybrid system that uses hydraulics as a way to transfer energy to and from a compressed nitrogen tank. An up-to-45% reduction in fuel consumption is claimed, corresponding to 2.9 L/100 km (81 mpg, 69 g CO2/km) on the New European Driving Cycle (NEDC) for a compact car such as the Peugeot 208. The system is claimed to be much more affordable than competing electric and flywheel KERS systems and was expected on road cars by 2016.
History of air engines
Air engines have been used since the 19th century to power mine locomotives, pumps, drills, and trams, via centralized, city-level distribution. Racecars use compressed air to start their internal combustion engine (ICE), and large diesel engines may have starting pneumatic motors.
Types of systems
Hybrid systems
Brayton cycle engines compress and heat air with a fuel suitable for an internal combustion engine. For example, burning natural gas or biogas heats compressed air, and then a conventional gas turbine engine or the rear portion of a jet engine expands it to produce work.
Compressed air engines can recharge an electric battery. The apparently defunct Energine promoted its Pneumatic Plug-in Hybrid Electric Vehicle (Pne-PHEV) system.
Existing hybrid systems
Hybrid power plants were commissioned at Huntorf, Germany, in 1978, and at McIntosh, Alabama, U.S., in 1991. Both systems use off-peak energy for air compression and burn natural gas in the compressed air during the power-generating phase.
Future hybrid systems
The Iowa Stored Energy Park (ISEP) would have used aquifer storage rather than cavern storage. The ISEP was an innovative, 270-megawatt, $400 million compressed air energy storage (CAES) project proposed for in-service near Des Moines, Iowa, in 2015. The project was terminated after eight years in development because of limitations of the site geology, according to the U.S. Department of Energy.
Additional facilities are under development in Norton, Ohio. FirstEnergy, an Akron, Ohio, electric utility, obtained development rights to the 2,700-MW Norton project in November 2009.
The RICAS2020 project attempts to use an abandoned mine for adiabatic CAES with heat recovery. The compression heat is stored in a tunnel section filled with loose stones, so the compressed air is nearly cool when entering the main pressure storage chamber. The cool compressed air regains the heat stored in the stones when released back through a surface turbine, leading to higher overall efficiency. A two-stage process has a theoretical efficiency of around 70%.
Underwater storage
Bag/tank
Deep water in lakes and the ocean can provide pressure without requiring high-pressure vessels or drilling. The air goes into inexpensive, flexible containers such as plastic bags. Obstacles include the limited number of suitable locations and the need for high-pressure pipelines between the surface and the containers. Given the low cost of the containers, great pressure (and great depth) may not be as important. A key benefit of such systems is that charge and discharge pressures are a constant function of depth. Carnot inefficiencies can be reduced by using multiple charge and discharge stages and by using inexpensive heat sources and sinks such as cold water from rivers or hot water from solar ponds.
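The "pressure as a constant function of depth" point follows from the hydrostatic relation p = ρgh. A minimal sketch (the seawater density and helper name are illustrative assumptions):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density (assumption)
G = 9.81               # m/s^2

def gauge_pressure_at_depth(depth_m):
    """Hydrostatic gauge pressure (Pa) seen by an underwater air container."""
    return RHO_SEAWATER * G * depth_m

# At 500 m depth a flexible container is held at roughly 5 MPa of gauge pressure
print(round(gauge_pressure_at_depth(500) / 1e6, 2))  # → 5.03
```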
Hydroelectric
A nearly isobaric solution is possible by using the compressed gas to drive a hydroelectric system. This solution requires large pressure tanks on land (as well as underwater airbags). Hydrogen gas is the preferred fluid, since other gases suffer from substantial hydrostatic pressures at even relatively modest depths (~500 meters).
European electrical utility company E.ON has provided €1.4 million (£1.1 million) in funding to develop undersea air storage bags. Hydrostor in Canada is developing a commercial system of underwater storage "accumulators" for compressed air energy storage, starting at the 1- to 4-MW scale.
Buoy
When excess wind energy is available from offshore wind turbines, a spool-tethered buoy can be pushed below the surface. When electricity demand rises, the buoy is allowed to rise towards the surface, generating power.
Nearly isothermal compression
A number of methods of nearly isothermal compression are being developed. Fluid Mechanics has a system with a heat-absorbing and -releasing structure (HARS) attached to a reciprocating piston. LightSail Energy injects a water spray into a reciprocating cylinder. SustainX uses an air-water foam mix inside a semi-custom, 120-rpm compressor/expander. All these systems ensure that the air is compressed with high thermal diffusivity relative to the speed of compression. Typically, these compressors can run at speeds up to 1,000 rpm. To ensure high thermal diffusivity, the average distance between a gas molecule and a heat-absorbing surface is about 0.5 mm. These nearly isothermal compressors can also be used as nearly isothermal expanders and are being developed to improve the round-trip efficiency of CAES.
Decapod

The Decapoda or decapods is a large order of crustaceans within the class Malacostraca, and includes crabs, lobsters, crayfish, shrimp, and prawns. Most decapods are scavengers. The order is estimated to contain nearly 15,000 extant species in around 2,700 genera, with around 3,300 fossil species. Nearly half of these species are crabs, with the shrimp (about 3,000 species) and the Anomura (including hermit crabs, king crabs, porcelain crabs, and squat lobsters; about 2,500 species) making up the bulk of the remainder. The earliest fossils of the group date to the Devonian.
Anatomy
Decapods can have as many as 38 appendages, arranged in one pair per body segment. As the name Decapoda (from the Greek déka, "ten", and poús, "foot") implies, ten of these appendages are considered legs. They are the pereiopods, found on the last five thoracic segments. In many decapods, one pair of these "legs" has enlarged pincers, called chelae, with the legs being called chelipeds. In front of the pereiopods are three pairs of maxillipeds that function as feeding appendages. The head has five pairs of appendages, including mouthparts, antennae, and antennules. There are five more pairs of appendages on the abdomen, called pleopods. There is one final pair, the uropods, which, with the telson, form the tail fan.
Evolution
A 2019 molecular clock analysis suggested decapods originated in the Late Ordovician around 455 million years ago, with the Dendrobranchiata (prawns) being the first group to diverge. The remaining group, called Pleocyemata, then diverged between the swimming shrimp groupings and the crawling/walking group called Reptantia, consisting of lobsters and crabs. High species diversification can be traced to the Jurassic and Cretaceous periods, which coincides with the rise and spread of modern coral reefs, a key habitat for the decapods. Despite the inferred early origin, the oldest fossils of the group such as Palaeopalaemon only date to the Late Devonian.
The cladogram below shows the internal relationships of Decapoda, from analysis by Wolfe et al. (2019).
In the cladogram above, the clade Glypheidea is excluded due to lack of sufficient DNA evidence, but is likely the sister clade to Polychelida, within Reptantia.
Classification
Classification within the order Decapoda depends on the structure of the gills and legs, and the way in which the larvae develop, giving rise to two suborders: Dendrobranchiata and Pleocyemata. The Dendrobranchiata consist of prawns, including many species colloquially referred to as "shrimp", such as the "white shrimp", Litopenaeus setiferus. The Pleocyemata include the remaining groups, including "true shrimp". Those groups that usually walk rather than swim (Pleocyemata, excluding Stenopodidea and Caridea) form a clade called Reptantia.
This classification to the level of superfamilies follows De Grave et al.
Order Decapoda Latreille, 1802
Suborder Dendrobranchiata Bate, 1888
Penaeoidea Rafinesque, 1815
Sergestoidea Dana, 1852
Suborder Pleocyemata Burkenroad, 1963
Infraorder Stenopodidea Bate, 1888
Infraorder Caridea Dana, 1852
Procaridoidea Chace & Manning, 1972
Galatheacaridoidea Vereshchaka, 1997
Pasiphaeoidea Dana, 1852
Oplophoroidea Dana, 1852
Atyoidea De Haan, 1849
Bresilioidea Calman, 1896
Nematocarcinoidea Smith, 1884
Psalidopodoidea Wood-....., 1874
Stylodactyloidea Bate, 1888
Campylonotoidea Sollaud, 1913
Palaemonoidea Rafinesque, 1815
Alpheoidea Rafinesque, 1815
Processoidea Ortmann, 1896
Pandaloidea Haworth, 1825
Physetocaridoidea Chace, 1940
Crangonoidea Haworth, 1825
Infraorder Astacidea Latreille, 1802
Enoplometopoidea de Saint Laurent, 1988
Nephropoidea Dana, 1852
Astacoidea Latreille, 1802
Parastacoidea Huxley, 1879
Infraorder Glypheidea Winckler, 1882
Glypheoidea Winckler, 1882
Infraorder Axiidea de Saint Laurent, 1979b
Infraorder Gebiidea de Saint Laurent, 1979
Infraorder Achelata Scholtz & Richter, 1995
Infraorder Polychelida Scholtz & Richter, 1995
Infraorder Anomura MacLeay, 1838
Aegloidea Dana, 1852
Galatheoidea Samouelle, 1819
Hippoidea Latreille, 1825a
Chirostyloidea Ortmann, 1892
Lomisoidea Bouvier, 1895
Paguroidea Latreille, 1802
Infraorder Brachyura Linnaeus, 1758
Section Dromiacea De Haan, 1833
Dromioidea De Haan, 1833
Homolodromioidea Alcock, 1900
Homoloidea De Haan, 1839
Section Raninoida De Haan, 1839
Section Cyclodorippoida Ortmann, 1892
Section Eubrachyura de Saint Laurent, 1980
Subsection Heterotremata Guinot, 1977
Aethroidea Dana, 1851
Bellioidea Dana, 1852
Bythograeoidea Williams, 1980
Calappoidea De Haan, 1833
Cancroidea Latreille, 1802
Carpilioidea Ortmann, 1893
Cheiragonoidea Ortmann, 1893
Corystoidea Samouelle, 1819
Dairoidea Serène, 1965
Dorippoidea MacLeay, 1838
Eriphioidea MacLeay, 1838
Gecarcinucoidea Rathbun, 1904
Goneplacoidea MacLeay, 1838
Hexapodoidea Miers, 1886
Leucosioidea Samouelle, 1819
Majoidea Samouelle, 1819
Orithyioidea Dana, 1852c
Palicoidea Bouvier, 1898
Parthenopoidea MacLeay,
Pilumnoidea Samouelle, 1819
Portunoidea Rafinesque, 1815
Potamoidea Ortmann, 1896
Pseudothelphusoidea Ortmann, 1893
Pseudozioidea Alcock, 1898
Retroplumoidea Gill, 1894
Trapezioidea Miers, 1886
Trichodactyloidea H. Milne-Edwards, 1853
Xanthoidea MacLeay, 1838
Subsection Thoracotremata Guinot, 1977
Cryptochiroidea Paul'son, 1875
Grapsoidea MacLeay, 1838
Ocypodoidea Rafinesque, 1815
Pinnotheroidea De Haan, 1833
Boundary value problem

In the study of differential equations, a boundary-value problem is a differential equation subjected to constraints called boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions.
Boundary value problems arise in several branches of physics as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A large class of important boundary value problems are the Sturm–Liouville problems. The analysis of these problems, in the linear case, involves the eigenfunctions of a differential operator.
To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of partial differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well-posed.
Among the earliest boundary value problems to be studied is the Dirichlet problem, of finding the harmonic functions (solutions to Laplace's equation); the solution was given by the Dirichlet's principle.
Explanation
Boundary value problems are similar to initial value problems. A boundary value problem has conditions specified at the extremes ("boundaries") of the independent variable in the equation whereas an initial value problem has all of the conditions specified at the same value of the independent variable (and that value is at the lower boundary of the domain, thus the term "initial" value). A boundary value is a data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component.
For example, if the independent variable is time over the domain [0,1], a boundary value problem would specify values for y(t) at both t = 0 and t = 1, whereas an initial value problem would specify a value of y(t) and y'(t) at time t = 0.
Finding the temperature at all points of an iron bar with one end kept at absolute zero and the other end at the freezing point of water would be a boundary value problem.
If the problem is dependent on both space and time, one could specify the value of the problem at a given point for all time or at a given time for all space.
Concretely, an example of a boundary value problem (in one spatial dimension) is
y''(x) + y(x) = 0
to be solved for the unknown function y(x) with the boundary conditions
y(0) = 0, y(π/2) = 2.
Without the boundary conditions, the general solution to this equation is
y(x) = A sin(x) + B cos(x).
From the boundary condition y(0) = 0 one obtains
0 = A · 0 + B · 1,
which implies that B = 0. From the boundary condition y(π/2) = 2 one finds
2 = A · 1,
and so A = 2. One sees that imposing boundary conditions allowed one to determine a unique solution, which in this case is
y(x) = 2 sin(x).
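As an illustrative sketch (not part of the source), the same boundary value problem y'' + y = 0, y(0) = 0, y(π/2) = 2 can be solved numerically by the shooting method; the RK4 helper and all names here are assumptions:

```python
import math

def rk4_y_end(f, y0, v0, x0, x1, n=1000):
    """Integrate y'' = f(x, y, y') with classic RK4; return y(x1)."""
    h = (x1 - x0) / n
    x, y, v = x0, y0, v0
    for _ in range(n):
        k1y, k1v = v, f(x, y, v)
        k2y, k2v = v + h/2*k1v, f(x + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = v + h/2*k2v, f(x + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = v + h*k3v, f(x + h, y + h*k3y, v + h*k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return y

# Shooting method for y'' + y = 0, y(0) = 0, y(pi/2) = 2.
# Because the ODE is linear and homogeneous, one trial shot suffices:
f = lambda x, y, v: -y
trial = rk4_y_end(f, 0.0, 1.0, 0.0, math.pi / 2)  # y(pi/2) when y'(0) = 1
slope = 2.0 / trial                               # rescale to hit y(pi/2) = 2
print(round(slope, 6))  # → 2.0, matching the analytic solution y = 2 sin(x)
```

The recovered initial slope y'(0) = 2 is exactly what the closed-form solution y(x) = 2 sin(x) predicts.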
Types of boundary value problems
Boundary value conditions
A boundary condition which specifies the value of the function itself is a Dirichlet boundary condition, or first-type boundary condition. For example, if one end of an iron rod is held at absolute zero, then the value of the problem would be known at that point in space.
A boundary condition which specifies the value of the normal derivative of the function is a Neumann boundary condition, or second-type boundary condition. For example, if there is a heater at one end of an iron rod, then energy would be added at a constant rate but the actual temperature would not be known.
If the boundary has the form of a curve or surface that gives a value to the normal derivative and the variable itself then it is a Cauchy boundary condition.
Examples
Summary of boundary conditions for the unknown function y, constants c0 and c1 specified by the boundary conditions, and known scalar functions f and g specified by the boundary conditions.
Differential operators
Aside from the boundary condition, boundary value problems are also classified according to the type of differential operator involved. For an elliptic operator, one discusses elliptic boundary value problems. For a hyperbolic operator, one discusses hyperbolic boundary value problems. These categories are further subdivided into linear and various nonlinear types.
Applications
Electromagnetic potential
In electrostatics, a common problem is to find a function which describes the electric potential of a given region. If the region does not contain charge, the potential must be a solution to Laplace's equation (a so-called harmonic function). The boundary conditions in this case are the Interface conditions for electromagnetic fields. If there is no current density in the region, it is also possible to define a magnetic scalar potential using a similar procedure.
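A minimal sketch of such a charge-free potential problem: Jacobi relaxation of Laplace's equation on a square grid with Dirichlet boundary values. The grid size, iteration count, and names are illustrative assumptions, not a production solver:

```python
def solve_laplace(n=20, v_top=1.0, iters=2000):
    """Relax Laplace's equation on an n x n grid: top edge held at v_top,
    the other three edges at 0 (Dirichlet boundary conditions)."""
    u = [[0.0] * n for _ in range(n)]
    u[0] = [v_top] * n                    # top boundary
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):     # interior: average of the four neighbours
                new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
        u = new
    return u

u = solve_laplace()
# A harmonic function obeys the maximum principle: interior values lie
# strictly between the boundary values.
assert 0.0 < u[10][10] < 1.0
```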
Gerontology

Gerontology is the study of the social, cultural, psychological, cognitive, and biological aspects of aging. The word was coined by Ilya Ilyich Mechnikov in 1903, from the Greek gérōn, meaning "old man", and -logia, meaning "study of". The field is distinguished from geriatrics, which is the branch of medicine that specializes in the treatment of existing disease in older adults. Gerontologists include researchers and practitioners in the fields of biology, nursing, medicine, criminology, dentistry, social work, physical and occupational therapy, psychology, psychiatry, sociology, economics, political science, architecture, geography, pharmacy, public health, housing, and anthropology.
The multidisciplinary nature of gerontology means that there are a number of sub-fields which overlap with gerontology. There are policy issues, for example, involved in government planning and the operation of nursing homes, investigating the effects of an aging population on society, and the design of residential spaces for older people that facilitate the development of a sense of place or home. Dr. Lawton, a behavioral psychologist at the Philadelphia Geriatric Center, was among the first to recognize the need for living spaces designed to accommodate the elderly, especially those with Alzheimer's disease. As an academic discipline the field is relatively new. The USC Leonard Davis School of Gerontology created the first PhD, master's and bachelor's degree programs in gerontology in 1975.
History
In the Islamic Golden Age, several physicians wrote on issues related to Gerontology. Avicenna's The Canon of Medicine (1025) offered instruction for the care of the aged, including diet and remedies for problems including constipation. Arabic physician Ibn Al-Jazzar Al-Qayrawani (Algizar, c. 898–980) wrote on the aches and conditions of the elderly. His scholarly work covers sleep disorders, forgetfulness, how to strengthen memory, and causes of mortality. Ishaq ibn Hunayn (died 910) also wrote works on the treatments for forgetfulness.
While the number of aged humans, and the life expectancy, tended to increase in every century since the 14th, society tended to consider caring for an elderly relative as a family issue. It was not until the coming of the Industrial Revolution that ideas shifted in favor of a societal care-system. Some early pioneers, such as Michel Eugène Chevreul, who himself lived to be 102, believed that aging itself should be a science to be studied. Élie Metchnikoff coined the term "gerontology" in 1903.
Modern pioneers like James Birren began organizing gerontology as its own field in the 1940s, later being involved in starting a US government agency on aging—the National Institute on Aging—programs in gerontology at the University of Southern California and University of California, Los Angeles, and as past president of the Gerontological Society of America (founded in 1945).
With people over 60 years old expected to make up some 22% of the world's population by 2050, and with the growing need for methods to assess and treat the age-related disease burden, the term geroscience emerged in the early 21st century.
Aging demographics
The world is forecast to undergo rapid population aging in the next several decades. In 1900, there were 3.1 million people aged 65 years and older living in the United States. However, this population continued to grow throughout the 20th century and reached 31.2, 35, and 40.3 million people in 1990, 2000, and 2010, respectively. Notably, in the United States and across the world, the "baby boomer" generation began to turn 65 in 2011. Recently, the population aged 65 years and older has grown at a faster rate than the total population in the United States. The total population increased by 9.7%, from 281.4 million to 308.7 million, between 2000 and 2010. However, the population aged 65 years and older increased by 15.1% during the same period. It has been estimated that 25% of the population in the United States and Canada will be aged 65 years and older by 2025. Moreover, by 2050, it is predicted that, for the first time in United States history, the number of individuals aged 60 years and older will be greater than the number of children aged 0 to 14 years. Those aged 85 years and older (oldest-old) are projected to increase from 5.3 million to 21 million by 2050. Adults aged 85–89 years constituted the greatest segment of the oldest-old in 1990, 2000, and 2010. However, the largest percentage point increase among the oldest-old occurred in the 90- to 94-year-old age group, which increased from 25.0% in 1990 to 26.4% in 2010.
With the rapid growth of the aging population, social work education and training specialized in older adults and practitioners interested in working with older adults are increasingly in demand.
Gender differences with age
There has been a considerable disparity between the number of men and women in the older population in the United States. In both 2000 and 2010, women outnumbered men in the older population at every single year of age (e.g., 65 to 100 years and over). The sex ratio, which is a measure used to indicate the balance of males to females in a population, is calculated by taking the number of males divided by the number of females, and multiplying by 100. Therefore, the sex ratio is the number of males per 100 females. In 2010, there were 90.5 males per 100 females in the 65-year-old population. However, this represented an increase from 1990 when there were 82.7 males per 100 females, and from 2000 when the sex ratio was 88.1. Although the gender gap between men and women has narrowed, women continue to have a greater life expectancy and lower mortality rates at older ages relative to men. For example, the Census 2010 reported that there were approximately twice as many women as men living in the United States at 89 years of age (361,309 versus 176,689, respectively).
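The sex-ratio calculation described above can be written as a one-line sketch (the function name is an assumption):

```python
def sex_ratio(males, females):
    """Males per 100 females, as defined in the demographic usage above."""
    return males / females * 100

# Census 2010 figures from the text for 89-year-olds in the United States
print(round(sex_ratio(176689, 361309), 1))  # → 48.9 males per 100 females
```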
Geographic distribution of older adults in the United States
The number and percentage of older adults living in the United States vary across the four regions (Northeast, Midwest, West, and South) defined by the United States census. In 2010, the South contained the greatest number of people aged 65 years and older and 85 years and older. Proportionately, however, the Northeast contained the largest percentage of adults aged 65 years and older (14.1%), followed by the Midwest (13.5%), the South (13.0%), and the West (11.9%). Relative to Census 2000, all geographic regions demonstrated positive growth in the populations aged 65 years and older and 85 years and older. The most rapid growth in the population aged 65 years and older occurred in the West (23.5%), which increased from 6.9 million in 2000 to 8.5 million in 2010. Likewise, the West (42.8%) showed the fastest growth in the population aged 85 years and older, increasing from 806,000 in 2000 to 1.2 million in 2010. Rhode Island was the only state to experience a reduction in the number of people aged 65 years and older, declining from 152,402 in 2000 to 151,881 in 2010. Conversely, all states exhibited an increase in the population aged 85 years and older from 2000 to 2010.
Sub-fields
As with many disciplines, over the course of the 20th and 21st centuries the field of gerontology has sub-divided into multiple specific disciplines focused on increasingly narrow aspects of the aging process.
Biogerontology
Biogerontology is the sub-field of gerontology concerned with the biological aging process, its evolutionary origins, and potential means to intervene in the process. The aim of biogerontology is to prevent age-related disease by intervening in aging processes, or even to eliminate aging itself. Some argue that aging meets the criteria of a disease and should therefore be treated as one. In 2008, Aubrey de Grey claimed that, given suitable funding and the involvement of specialists, there was a 50% chance that within 25–30 years humans would have technology capable of saving people from dying of old age, regardless of their age at that time. His idea is to repair, within and between cells, all the damage that current technology can address, allowing people to live until further technological progress makes it possible to cure deeper damage. This concept has been named "longevity escape velocity".
A meta-analysis of 36 studies concluded that there is an association between age and DNA damage in humans, a finding consistent with the DNA damage theory of aging.
Social gerontology
Social gerontology is a multi-disciplinary sub-field that specializes in studying or working with older adults. Social gerontologists may have degrees or training in social work, nursing, psychology, sociology, demography, public health, or other social science disciplines. Social gerontologists are responsible for educating, researching, and advancing the broader causes of older people.
Because issues of life span and life extension need numbers to quantify them, there is an overlap with demography. Those who study the demography of the human life span differ from those who study the social demographics of aging.
Social theories of aging
Several theories of aging have been developed to describe the aging process of older adults in society, as well as how these processes are interpreted by men and women as they age.
Activity theory
Activity theory was developed and elaborated by Cavan, Havighurst, and Albrecht. According to this theory, older adults' self-concept depends on social interactions. In order for older adults to maintain morale in old age, substitutions must be made for lost roles. Examples of lost roles include retirement from a job or loss of a spouse.
Activity is preferable to inactivity because it facilitates well-being on multiple levels. Because of improved general health and prosperity in the older population, remaining active is more feasible now than when this theory was first proposed by Havighurst nearly six decades ago. The activity theory is applicable to a stable, post-industrial society that offers its older members many opportunities for meaningful participation. A weakness of the theory is that some aging persons cannot maintain a middle-aged lifestyle, whether because of functional limitations, lack of income, or lack of desire, and many older adults lack the resources to maintain active roles in society. Conversely, some elders may insist on continuing activities in late life that pose a danger to themselves and others, such as driving at night with low visual acuity or doing maintenance work on the house despite severely arthritic knees. In doing so, they deny their limitations and engage in unsafe behaviors.
Disengagement theory
Disengagement theory was developed by Cumming and Henry. According to this theory, older adults and society engage in a mutual separation from each other. An example of mutual separation is retirement from the workforce. A key assumption of this theory is that older adults lose "ego-energy" and become increasingly self-absorbed. Additionally, disengagement leads to higher morale maintenance than if older adults try to maintain social involvement. This theory is heavily criticized for having an escape clause—namely, that older adults who remain engaged in society are unsuccessful adjusters to old age.
Gradual withdrawal from society and relationships preserves social equilibrium and promotes self-reflection for elders who are freed from societal roles. It furnishes an orderly means for the transfer of knowledge, capital, and power from the older generation to the young. It makes it possible for society to continue functioning after valuable older members die.
Age stratification theory
According to this theory, older adults born during different time periods form cohorts that define "age strata". Strata differ in two ways: chronological age and historical experience. The theory makes two arguments: first, that age is a mechanism for regulating behavior and, as a result, determines access to positions of power; second, that birth cohorts play an influential role in the process of social change.
Life course theory
According to this theory, which stems from the life course perspective, aging occurs from birth to death. Aging involves social, psychological, and biological processes. Additionally, aging experiences are shaped by cohort and period effects.
Also reflecting the life course focus, scholars have considered the implications for how societies might function when age-based norms vanish, a consequence of the deinstitutionalization of the life course, and have suggested that these implications pose new challenges for theorizing aging and the life course in postindustrial societies. Dramatic reductions in mortality, morbidity, and fertility over the past several decades have so shaken up the organization of the life course and the nature of educational, work, family, and leisure experiences that it is now possible for individuals to become old in new ways. The configurations and content of other life stages are being altered as well, especially for women. In consequence, theories of age and aging will need to be reconceptualized.
Cumulative advantage/disadvantage theory
According to this theory, which was developed beginning in the 1960s by Derek Price and Robert Merton and elaborated on by researchers such as Dale Dannefer, inequalities tend to become more pronounced throughout the aging process. The theory is often summarized by the adage "the rich get richer and the poor get poorer". Advantages and disadvantages in early life stages have a profound effect throughout the life span, and advantages and disadvantages in middle adulthood directly influence economic and health status in later life.
Environmental gerontology
Environmental gerontology is a specialization within gerontology that seeks an understanding and interventions to optimize the relationship between aging persons and their physical and social environments.
The field emerged in the 1930s during the first studies of behavioral and social gerontology. In the 1970s and 1980s, research confirmed the importance of the physical and social environment for understanding the aging population and for improving quality of life in old age. Studies in environmental gerontology indicate that older people prefer to age in their immediate environment, and that spatial experience and place attachment are important for understanding this process.
Some research indicates that the physical-social environment is related to the longevity and quality of life of the elderly. In particular, natural environments (such as therapeutic landscapes and therapeutic gardens) contribute to active and healthy aging in place.
Jurisprudential gerontology
Jurisprudential gerontology (sometimes referred to as "geriatric jurisprudence") is a specialization within gerontology that looks into the ways laws and legal structures interact with the aging experience. The field grew out of the work of legal scholars in elder law, who found that examining the legal issues of older persons without a broader interdisciplinary perspective does not produce the ideal legal outcome. Using theories such as therapeutic jurisprudence, jurisprudential scholars have critically examined existing legal institutions (e.g. adult guardianship, end-of-life care, and nursing home regulation) and shown that the law should attend more closely to the social and psychological aspects of its real-life operation. Other streams within jurisprudential gerontology have also encouraged physicians and lawyers to improve their cooperation and to better understand how laws and regulatory institutions affect the health and well-being of older persons.
Geriatrics
Geriatrics, or geriatric medicine, is a medical specialty focused on providing care for the unique health needs of the elderly. The term geriatrics originates from the Greek γέρων geron, meaning "old man", and ιατρός iatros, meaning "healer". It aims to promote health by preventing, diagnosing and treating disease in older adults. There is no defined age at which patients may be under the care of a geriatrician, or geriatric physician, a physician who specializes in the care of older people. Rather, this decision is guided by individual patient need and the caregiving structures available to them. This care may benefit those who are managing multiple chronic conditions or experiencing significant age-related complications that threaten quality of daily life. Geriatric care may be indicated if caregiving responsibilities become increasingly stressful or medically complex for family and caregivers to manage independently.
There is a distinction between geriatrics and gerontology. Gerontology is the multidisciplinary study of the aging process, defined as the decline in organ function over time in the absence of injury, illness, environmental risks or behavioral risk factors. However, geriatrics is sometimes called medical gerontology.
Scope
Differences between adult and geriatric medicine
Geriatric providers receive specialized training in caring for elderly patients and promoting healthy aging. The care provided is largely based on shared decision-making and is driven by patient goals and preferences, which can range from preserving function to improving quality of life to prolonging years of life. A guiding mnemonic commonly used by geriatricians in the United States and Canada is the 5 M's of Geriatrics, which covers mind, mobility, multicomplexity, medications, and what matters most, in order to elicit patient values.
It is common for elderly adults to be managing multiple long-term conditions (multimorbidity). Age-associated changes in physiology drive a compounded increase in susceptibility to illness, disease-associated morbidity, and death. Furthermore, common diseases may present atypically in elderly patients, adding further diagnostic and therapeutic complexity to patient care.
Geriatrics is highly interdisciplinary, drawing on specialty providers from the fields of medicine, nursing, pharmacy, social work, and physical and occupational therapy. Elderly patients can receive care related to medication management, pain management, psychiatric and memory care, rehabilitation, long-term nursing care, nutrition, and different forms of therapy, including physical, occupational, and speech therapy. Non-medical considerations include social services, transitional care, advance directives, power of attorney, and other legal considerations.
Increased complexity
The decline in physiological reserve in organs makes the elderly prone to certain kinds of diseases and more likely to have complications from mild problems (such as dehydration from a mild gastroenteritis). Multiple problems may compound: a mild fever in elderly persons may cause confusion, which may lead to a fall and a fracture of the neck of the femur ("broken hip"). The presentation of disease in elderly persons may be vague and non-specific, or it may include delirium or falls. (Pneumonia, for example, may present with low-grade fever and confusion, rather than the high fever and cough seen in younger people.) Some elderly people may find it hard to describe their symptoms in words, especially if the disease is causing confusion or if they have cognitive impairment. Delirium in the elderly may be caused by a minor problem such as constipation or by something as serious and life-threatening as a heart attack. Many of these problems are treatable if the root cause can be discovered.
Cognition
Mild cognitive impairment (MCI) is a transitional state between normal aging and dementia, affecting 10–20% of adults over 65 (Schwarz, 2015). Geriatricians encounter MCI patients in various care settings, with diagnosis relying on clinical assessment and mental status examinations (Tangalos & Petersen, 2018). MCI is highly prevalent among older adults with depression and may persist after depression remits (Lee et al., 2006). While MCI is considered a high-risk condition for developing Alzheimer's disease, there is heterogeneity in its presentation and outcomes (Petersen et al., 2001).
Dementia is a prevalent condition in geriatric populations, affecting cognitive function and daily activities (Talawar, 2018; Mirzapure et al., 2022). Alzheimer's disease is the most common cause, accounting for 40-80% of cases (Mirzapure et al., 2022; Chulakadabba et al., 2020). Geriatric patients with dementia often have comorbidities and other geriatric syndromes, requiring holistic and integrated care (Chulakadabba et al., 2020; Nguyen et al., 2023). Geriatricians play a crucial role in dementia care, but many feel current training is inadequate and seek more structured experiences (Mayne et al., 2014). Improving access to geriatricians and enhancing general practitioners' diagnostic skills could improve timely and accurate dementia diagnosis (Mansfield et al., 2022). However, there are significant shortages of dementia specialists, particularly in rural areas (Liu et al., 2024; Christley et al., 2022). Geriatricians support comprehensive post-diagnosis information provision, including sensitive topics like advance care planning (Mansfield et al., 2022). Collaboration between specialists and family physicians is essential, with specialists often handling contentious issues like driving competency (Hum et al., 2014). Geriatric training may influence end-of-life care patterns for dementia patients (Gotanda et al., 2023). A geriatrics perspective emphasizes prevention, considering lifestyle factors that promote healthy cognitive aging (Steffens, 2018).
Geriatric pharmacology
Elderly people require specific attention to medications. They are particularly subject to polypharmacy (taking multiple medications), given their accumulation of multiple chronic diseases, and many also take self-prescribed herbal medications and over-the-counter drugs. This polypharmacy, in combination with geriatric status, may increase the risk of drug interactions or adverse drug reactions. Pharmacokinetic and pharmacodynamic changes arise with older age, impairing the body's ability to metabolize and respond to drugs. Each of the four pharmacokinetic mechanisms (absorption, distribution, metabolism, and excretion) is disrupted by age-related physiologic changes. For example, overall decreased hepatic function can interfere with the clearance or metabolism of drugs, and reductions in kidney function can affect renal elimination. Pharmacodynamic changes lead to altered sensitivity to drugs in geriatric patients, such as increased pain relief with morphine use. Therefore, geriatric individuals require specialized pharmacological care informed by these age-related changes.
Geriatric syndromes
Geriatric syndromes is a term used to describe a group of clinical conditions that are highly prevalent in elderly people. These syndromes are not caused by a specific pathology or disease; rather, they are manifestations of multifactorial conditions affecting several organ systems. Common examples include frailty, functional decline, falls, loss of continence, and malnutrition, amongst others.
Frailty
Frailty is marked by a decline in physiological reserve, increased vulnerability to physiological and emotional stressors, and loss of function. This may present as progressive and unintentional weight loss, fatigue, muscular weakness and decreased mobility. It is associated with increased injuries, hospitalization and adverse clinical outcomes.
Functional decline
Functional disability can arise from a decline in physical function and/or cognitive function. It is associated with an acquired difficulty in performing basic everyday tasks, resulting in an increased dependence on other individuals and/or medical devices. These tasks are sub-divided into basic activities of daily living (ADL) and instrumental activities of daily living (IADL) and are commonly used as indicators of a person's functional status.
Activities of daily living (ADL) are fundamental skills needed to care for oneself, including feeding, personal hygiene, toileting, transferring and ambulating. Instrumental activities of daily living (IADL) describe more complex skills needed to allow oneself to live independently in a community, including cooking, housekeeping, managing one's finances and medications. Routine monitoring of ADL and IADL is an important functional assessment used by clinicians to determine the extent of support and care to provide to elderly adults and their caregivers. It serves as a qualitative measurement of function over time and predicts the need for alternative living arrangements or models of care, including senior housing apartments, skilled nursing facilities, palliative, hospice or home-based care.
Falls
Falls are the leading cause of emergency department admissions and hospitalizations in adults age 65 and older, many of which result in significant injury and permanent disability. As certain risk factors can be modifiable for the purpose of reducing falls, this highlights an opportunity for intervention and risk reduction. Modifiable factors include:
Improving balance and muscle strength.
Removing environmental hazards.
Encouraging use of assistive devices.
Treating chronic conditions.
Adjusting medication.
Urinary incontinence
Urinary incontinence, or overactive bladder, is the unintentional leakage of urine. It can be caused by medications that increase urine output and frequency (e.g. anti-hypertensives and diuretics), urinary tract infections, pelvic organ prolapse, pelvic floor dysfunction, and diseases that damage the nerves regulating bladder emptying. Musculoskeletal conditions affecting mobility should also be considered, as these can make accessing bathrooms difficult.
Malnutrition
Malnutrition and poor nutritional status is an area of concern, affecting 12% to 50% of hospitalized elderly patients and 23% to 50% of institutionalized elderly patients living in long-term care facilities such as assisted living communities and skilled nursing facilities. As malnutrition can occur due to a combination of physiologic, pathologic, psychologic and socioeconomic factors, it can be difficult to identify effective interventions. Physiologic factors include reduced smell and taste, and a decreased metabolic rate affecting nutritional food intake. Unintentional weight loss can result from pathologic factors, including a wide range of chronic diseases that affect cognitive function, directly impact digestion (e.g. poor dentition, gastrointestinal cancers, gastroesophageal reflux disease) or may be managed with dietary restrictions (e.g. congestive heart failure, diabetes mellitus, hypertension). Psychologic factors include depression, anorexia, and grief.
Practical concerns
Functional abilities, independence and quality of life issues are of great concern to geriatricians and their patients. Elderly people generally want to live independently as long as possible, which requires them to be able to engage in self-care and other activities of daily living. A geriatrician may be able to provide information about elder care options, and can refer people to home care services, skilled nursing facilities, assisted living facilities, and hospice as appropriate.
Frail elderly people may choose to decline some kinds of medical care, because the risk-benefit ratio is different. For example, frail elderly women routinely stop having screening mammograms, because breast cancer is typically a slowly growing disease that would cause them no pain, impairment, or loss of life before they would die of other causes. Frail people are also at significant risk of post-surgical complications and the need for extended care, and an accurate prediction—based on validated measures, rather than how old the patient's face looks—can help older patients make fully informed choices about their options. Assessment of older patients before elective surgeries can accurately predict the patients' recovery trajectories. One frailty scale uses five items: unintentional weight loss, muscle weakness, exhaustion, low physical activity, and slowed walking speed. A healthy person scores 0; a very frail person scores 5. Compared to non-frail elderly people, people with intermediate frailty scores (2 or 3) are twice as likely to have post-surgical complications, spend 50% more time in the hospital, and are three times as likely to be discharged to a skilled nursing facility instead of to their own homes. Frail elderly patients (score of 4 or 5) who were living at home before the surgery have even worse outcomes, with the risk of being discharged to a nursing home rising to twenty times the rate for non-frail elderly people.
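As a concrete reading of the five-item frailty scale described above, a short sketch can tally the items and map the total to the categories used in the paragraph (the item and function names are illustrative, and the grouping of a score of 1 with the non-frail category is our assumption, since the text does not classify it):

```python
# Five-item frailty scale as described above (illustrative item names).
FRAILTY_ITEMS = (
    "unintentional weight loss",
    "muscle weakness",
    "exhaustion",
    "low physical activity",
    "slowed walking speed",
)

def frailty_score(findings):
    """Count how many of the five items are present: 0 (healthy) to 5 (very frail)."""
    return sum(1 for item in FRAILTY_ITEMS if findings.get(item, False))

def frailty_category(score):
    # Cutoffs mirror the text: 2-3 intermediate, 4-5 frail.
    # Treating a score of 1 as non-frail is an assumption.
    if score <= 1:
        return "non-frail"
    if score <= 3:
        return "intermediate"
    return "frail"

patient = {"exhaustion": True, "slowed walking speed": True}
print(frailty_score(patient), frailty_category(frailty_score(patient)))  # 2 intermediate
```

A patient with two items present falls into the intermediate group, which the text associates with roughly doubled post-surgical complication rates.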
Subspecialties and related services
Some conditions commonly seen in the elderly are rare in younger adults, e.g., dementia, delirium, falls. As societies have aged, many specialized geriatric and geriatrics-related services have emerged, including:
Medical
Geriatric cardiology or cardiogeriatrics.
Geriatric dentistry.
Geriatric dermatology.
Geriatric diagnostic imaging.
Geriatric emergency medicine.
Geriatric nephrology.
Geriatric neurology.
Geriatric oncology.
Geriatric physical examination (of interest especially to physicians and physician assistants).
Geriatric psychiatry or psychogeriatrics (focus on dementia, delirium, depression and other psychiatric disorders).
Geriatric public health or preventive geriatrics
Geriatric rehabilitation.
Geriatric rheumatology (focus on joints and soft tissue disorders in elderly).
Geriatric sexology (focus on sexuality in aged people).
Geriatric subspeciality medical clinics (such as geriatric anticoagulation clinic, geriatric assessment clinic, falls and balance clinic, continence clinic, palliative care clinic, elderly pain clinic, cognition and memory disorders clinic).
Surgical
Geriatric orthopaedics or orthogeriatrics (close cooperation with orthopedic surgery and a focus on osteoporosis and rehabilitation).
Geriatric cardiothoracic surgery.
Geriatric urology.
Geriatric otolaryngology.
Geriatric general surgery.
Geriatric trauma.
Geriatric gynecology.
Geriatric ophthalmology.
Perioperative medicine for Older People having Surgery (POPS)
Other geriatrics subspecialties
Geriatric anesthesia (focuses on anesthesia & perioperative care of elderly).
Geriatric intensive-care unit: (a special type of intensive care unit dedicated to critically ill elderly).
Geriatric nursing (focuses on nursing of elderly patients and the aged).
Geriatric nutrition.
Geriatric occupational therapy.
Geriatric pain management.
Geriatric pharmacy.
Geriatric optometry.
Geriatric physical therapy.
Geriatric podiatry.
Geriatric psychology.
Geriatric speech-language pathology (focuses on neurological disorders such as dysphagia, stroke, aphasia, and traumatic brain injury).
Geriatric mental health counselor/specialist (focuses on treatment more so than assessment).
Geriatric audiology.
History
A number of physicians in the Byzantine Empire studied geriatrics, with doctors like Aëtius of Amida evidently specializing in the field. Alexander of Tralles viewed the process of aging as a natural and inevitable form of marasmus, caused by the loss of moisture in body tissue. The works of Aëtius describe the mental and physical symptoms of aging. Theophilus Protospatharius and Joannes Actuarius also discussed the topic in their medical works. Byzantine physicians typically drew on the works of Oribasius and recommended that elderly patients consume a diet rich in foods that provide "heat and moisture". They also recommended frequent bathing, massaging, rest, and low-intensity exercise regimens.
In The Canon of Medicine, written by Avicenna in 1025, the author was concerned with how "old folk need plenty of sleep" and how their bodies should be anointed with oil, and recommended exercises such as walking or horse-riding. Thesis III of the Canon discussed the diet suitable for old people, and dedicated several sections to elderly patients who become constipated.
The Arab physician Algizar (–980) wrote a book on the medicine and health of the elderly. He also wrote a book on sleep disorders and another one on forgetfulness and how to strengthen memory, and a treatise on causes of mortality. Another Arab physician in the 9th century, Ishaq ibn Hunayn (died 910), the son of Nestorian Christian scholar Hunayn Ibn Ishaq, wrote a Treatise on Drugs for Forgetfulness.
George Day published Diseases of Advanced Life in 1849, one of the first publications on the subject of geriatric medicine. The first modern geriatric hospital was founded in Belgrade, Serbia, in 1881 by doctor Laza Lazarević.
The term geriatrics was proposed in 1908 by Ilya Ilyich Mechnikov, Laureate of the Nobel Prize for Medicine, and again in 1909 by Ignatz Leo Nascher, former Chief of Clinic in the Mount Sinai Hospital Outpatient Department (New York City) and a "father" of geriatrics in the United States.
Modern geriatrics in the United Kingdom began with the "mother" of geriatrics, Marjory Warren. Warren emphasized that rehabilitation was essential to the care of older people. Using her experiences as a physician in a London Workhouse infirmary, she believed that merely keeping older people fed until they died was not enough; they needed diagnosis, treatment, care, and support. She found that patients, some of whom had previously been bedridden, were able to gain some degree of independence with the correct assessment and treatment.
The practice of geriatrics in the UK is also one with a rich multidisciplinary history. It values all the professions, not just medicine, for their contributions in optimizing the well-being and independence of older people.
Another innovator of British geriatrics is Bernard Isaacs, who described the "giants" of geriatrics mentioned above: immobility and instability, incontinence, and impaired intellect. Isaacs asserted that, if examined closely enough, all common problems with older people relate to one or more of these giants.
The care of older people in the UK has been advanced by the implementation of the National Service Frameworks for Older People, which outlines key areas for attention.
Geriatrician training
United States
In the United States, geriatricians are primary-care physicians (D.O. or M.D.) who are board-certified in either family medicine or internal medicine and who have also acquired the additional training necessary to obtain the Certificate of Added Qualifications (CAQ) in geriatric medicine. Geriatricians have developed an expanded expertise in the aging process, the impact of aging on illness patterns, drug therapy in seniors, health maintenance, and rehabilitation. They serve in a variety of roles including hospital care, long-term care, home care, and terminal care. They are frequently involved in ethics consultations to represent the unique health and diseases patterns seen in seniors. The model of care practiced by geriatricians is heavily focused on working closely with other disciplines such as nurses, pharmacists, therapists, and social workers.
United Kingdom
In the United Kingdom, most geriatricians are hospital physicians, whereas others focus on community geriatrics in particular. Although originally a distinct clinical specialty, geriatric medicine has been integrated as a specialization of general medicine since the late 1970s, and most geriatricians are therefore accredited in both. Unlike in the United States, geriatric medicine is a major specialty in the United Kingdom, and geriatricians are the single most numerous internal medicine specialists.
Canada
In Canada, there are two pathways that can be followed in order to work as a physician in a geriatric setting.
Doctors of Medicine (M.D.) can complete a three-year core internal medicine residency program, followed by two years of specialized geriatrics residency training. This pathway leads to certification, and possibly fellowship after several years of supplementary academic training, by the Royal College of Physicians and Surgeons of Canada.
Doctors of Medicine (M.D.) can opt for a two-year residency program in family medicine and complete a one-year enhanced skills program in care of the elderly. This post-doctoral pathway is accredited by the College of Family Physicians of Canada.
Many universities across Canada also offer gerontology training programs for the general public, such that nurses and other health care professionals can pursue further education in the discipline in order to better understand the process of aging and their role in the presence of older patients and residents.
India
In India, geriatrics is a relatively new specialty. A three-year postgraduate residency (M.D.) can be pursued after completing the 5.5-year undergraduate MBBS (Bachelor of Medicine and Bachelor of Surgery) training. Unfortunately, only eight major institutes provide an M.D. in Geriatric Medicine and subsequent training. In some institutes, training takes place exclusively in the Department of Geriatric Medicine, with rotations in internal medicine, the medical subspecialties, and so on; in others, it is limited to two years of training in internal medicine and subspecialties followed by one year of exclusive training in geriatric medicine.
Minimum geriatric competencies
In July 2007, the Association of American Medical Colleges (AAMC) and the John A. Hartford Foundation hosted a National Consensus Conference on Competencies in Geriatric Education, where consensus was reached on the minimum competencies (learning outcomes) that graduating medical students need in order to provide competent care to older patients as new interns. Twenty-six minimum geriatric competencies across eight content domains were endorsed by the American Geriatrics Society (AGS), the American Medical Association (AMA), and the Association of Directors of Geriatric Academic Programs (ADGAP). The domains are: cognitive and behavioral disorders; medication management; self-care capacity; falls, balance, and gait disorders; atypical presentation of disease; palliative care; hospital care for elders; and health care planning and promotion. Each content domain specifies three or more observable, measurable competencies.
Research
Changes in physiology with aging may alter the absorption, the effectiveness and the side effect profile of many drugs. These changes may occur in oral protective reflexes (dryness of the mouth caused by diminished salivary glands), in the gastrointestinal system (such as with delayed emptying of solids and liquids possibly restricting speed of absorption), and in the distribution of drugs with changes in body fat and muscle and drug elimination.
Psychological considerations include the fact that elderly persons (in particular, those experiencing substantial memory loss or other types of cognitive impairment) are unlikely to be able to adequately monitor and adhere to their own scheduled pharmacological administration. One study (Hutchinson et al., 2006) found that 25% of participants studied admitted to skipping doses or cutting them in half. Self-reported noncompliance with adherence to a medication schedule was reported by a striking one-third of the participants. Further development of methods that might possibly help monitor and regulate dosage administration and scheduling is an area that deserves attention.
Another important area is the potential for improper administration and use of potentially inappropriate medications, and the possibility of errors that could result in dangerous drug interactions. Polypharmacy is often a predictive factor. Research done on home/community health care found that "nearly 1 of 3 medical regimens contain a potential medication error".
Ethical and medico-legal issues
Elderly persons sometimes cannot make decisions for themselves. They may have previously prepared a power of attorney and advance directives to provide guidance if they are unable to understand what is happening to them, whether this is due to long-term dementia or to a short-term, correctable problem, such as delirium from a fever.
Geriatricians must respect the patients' privacy while seeing that they receive appropriate and necessary services. More than most specialties, they must consider whether the patient has the legal responsibility and competence to understand the facts and make decisions. They must support informed consent and resist the temptation to manipulate the patient by withholding information, such as the dismal prognosis for a condition or the likelihood of recovering from surgery at home.
Elder abuse is the physical, financial, emotional, sexual, or other type of abuse of an older dependent. Adequate training, services, and support can reduce the likelihood of elder abuse, and proper attention can often identify it. For elderly people who are unable to care for themselves, geriatricians may recommend legal guardianship or conservatorship to care for the person or the estate.
Elder abuse occurs increasingly when caregivers of elderly relatives have a mental illness. These instances of abuse can be prevented by engaging these individuals with mental illness in mental health treatment. Additionally, interventions aimed at decreasing elder reliance on relatives may help decrease conflict and abuse. Family education and support programs conducted by mental health professionals may also be beneficial for elderly patients to learn how to set limits with relatives with psychiatric disorders without causing conflict that leads to abuse.
Tarantula hawk
A tarantula hawk is a spider wasp (Pompilidae) that preys on tarantulas. Tarantula hawks belong to any of the many species in the genera Pepsis and Hemipepsis. They are one of the largest parasitoid wasps, using their sting to paralyze their prey before dragging it into a brood nest as living food; a single egg is laid on the prey, hatching to a larva which eats the still-living host. They are found on all continents other than Europe and Antarctica.
Description
These wasps grow up to long, making them among the largest of wasps, and have blue-black bodies and bright, rust-colored wings (other species have black wings with blue highlights). The vivid coloration found on their bodies, and especially wings, is aposematic, advertising to potential predators the wasps' ability to deliver a powerful sting. Their long legs have hooked claws for grappling with their victims. The stinger of a female Pepsis grossa can be up to long, and the powerful sting is considered one of the most painful insect stings in the world.
Behavior
The female tarantula hawk wasp stings a tarantula between the legs, paralyzing it, and then drags the prey to a specially prepared burrow, where a single egg is laid on the spider's abdomen, and the burrow entrance is covered. Sex of offspring is determined by fertilization; fertilized eggs produce females, while unfertilized eggs produce males. When the wasp larva hatches, it creates a small hole in the spider's abdomen, then enters and feeds voraciously, avoiding vital organs for as long as possible to keep the spider alive. After several weeks, the larva pupates. Finally, the wasp becomes an adult and emerges from the spider's abdomen to continue the life cycle.
Adult tarantula hawks are nectarivorous. While the wasps tend to be most active in the daytime in summer, they tend to avoid high temperatures. The male tarantula hawk does not hunt. Both males and females feed on the flowers of milkweeds, western soapberry trees, or mesquite trees. Male tarantula hawks have been observed practicing a behavior called hill-topping, in which they sit atop tall plants and watch for passing females ready to reproduce. The males can become resident defenders of the favorable reproduction spots for hours into the afternoon. Females are not very aggressive, in that they are hesitant to sting, but the sting is extraordinarily painful.
Distribution
Worldwide distribution of tarantula hawks includes areas from India to Southeast Asia, Africa, Australia, and the Americas, with the genus Pepsis entirely restricted to the New World. In the latter, Pepsis species have been observed from as far north as Logan, Utah and south as far as Argentina, with at least 250 species living in South America. Eighteen species of Pepsis and three species of Hemipepsis are found in the United States, primarily in the deserts of the southwestern United States, with Pepsis grossa (formerly Pepsis formosa) and Pepsis thisbe being common. The two species are difficult to distinguish, but the majority of P. grossa have metallic blue bodies and reddish antennae, which separates them from P. thisbe. Both species have bright orange wings that become transparent near the tip.
Sting
Tarantula hawk wasps are relatively docile and rarely sting without provocation. However, the sting—particularly that of P. grossa—is among the most painful of all insects, though the intense pain only lasts about five minutes. One researcher described the pain as "...immediate, excruciating, unrelenting pain that simply shuts down one's ability to do anything, except scream. Mental discipline simply does not work in these situations." In terms of scale, the wasp's sting is rated near the top of the Schmidt sting pain index, second only to that of the bullet ant, and is described by Schmidt as "blinding, fierce[, and] shockingly electric". Because of their extremely large stingers, very few animals are able to eat them; one of the few that can is the roadrunner. Many predatory animals avoid these wasps, and many different insects mimic them, including various other wasps and bees (Müllerian mimics), as well as moths, flies (e.g., mydas flies), and beetles (e.g., Tragidion) (Batesian mimics).
Aside from the possibility of triggering an allergic reaction, the sting is not dangerous and does not require medical attention. Local redness appears in most cases after the pain, and lasts for up to a week.
State insect of New Mexico
The U.S. state of New Mexico chose a species of tarantula hawk (specifically, P. formosa, now known as P. grossa) in 1989 to become its official state insect. Its selection was prompted by a group of elementary school children from Edgewood doing research on states that had adopted state insects. They selected three insects as candidates and mailed ballots to all schools for a statewide election. The winner was the tarantula hawk wasp.
Positional notation
Positional notation, also known as place-value notation, positional numeral system, or simply place value, usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the values may be modified when combined). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string.
The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits.
Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers.
The use of a radix point (decimal point in base ten), extends to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe.
History
Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous. Other bases have been used in the past, and some continue to be used today. For example, the Babylonian numeral system, credited as the first positional numeral system, was base-60. However, it lacked a real zero. Initially inferred only from context, later, by about 700 BC, zero came to be indicated by a "space" or a "punctuation symbol" (such as two slanted wedges) between numerals. It was a placeholder rather than a true zero because it was not used alone or at the end of a number. Numbers like 2 and 120 (2×60) looked the same because the larger number lacked a final placeholder. Only context could differentiate them.
The polymath Archimedes (ca. 287–212 BC) invented a decimal positional system based on 10⁸ in his Sand Reckoner; 19th-century German mathematician Carl Gauss lamented how science might have progressed had Archimedes only made the leap to something akin to the modern decimal system. Hellenistic and Roman astronomers used a base-60 system based on the Babylonian model (see Sexagesimal system below).
Before positional notation became standard, simple additive systems (sign-value notation) such as Roman numerals were used, and accountants in ancient Rome and during the Middle Ages used the abacus or stone counters to do arithmetic.
Counting rods and most abacuses have been used to represent numbers in a positional numeral system. With counting rods or abacus to perform arithmetic operations, the writing of the starting, intermediate and final values of a calculation could easily be done with a simple additive system in each position or column. This approach required no memorization of tables (as does positional notation) and could produce practical results quickly.
The oldest extant positional notation system is either that of Chinese rod numerals, used from at least the early 8th century, or perhaps Khmer numerals, showing possible usages of positional-numbers in the 7th century. Khmer numerals and other Indian numerals originate with the Brahmi numerals of about the 3rd century BC, which symbols were, at the time, not used positionally. Medieval Indian numerals are positional, as are the derived Arabic numerals, recorded from the 10th century.
After the French Revolution (1789–1799), the new French government promoted the extension of the decimal system. Some of those pro-decimal efforts—such as decimal time and the decimal calendar—were unsuccessful. Other French pro-decimal efforts—currency decimalisation and the metrication of weights and measures—spread widely out of France to almost the whole world.
History of positional fractions
Decimal fractions were first developed and used by the Chinese in the form of rod calculus in the 1st century BC, and then spread to the rest of the world. J. Lennart Berggren notes that positional decimal fractions were first used in the Arab world by the mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, but did not develop any notation to represent them. The Persian mathematician Jamshīd al-Kāshī made the same discovery of decimal fractions in the 15th century. Al Khwarizmi introduced fractions to Islamic countries in the early 9th century; his fraction presentation was similar to the traditional Chinese mathematical fractions from Sunzi Suanjing. This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar, was also used in the 10th century by Abu'l-Hasan al-Uqlidisi and in the 15th century in Jamshīd al-Kāshī's work "Arithmetic Key".
The adoption of the decimal representation of numbers less than one, a fraction, is often credited to Simon Stevin through his textbook De Thiende; but both Stevin and E. J. Dijksterhuis indicate that Regiomontanus contributed to the European adoption of general decimals:
European mathematicians, when taking over from the Hindus, via the Arabs, the idea of positional value for integers, neglected to extend this idea to fractions. For some centuries they confined themselves to using common and sexagesimal fractions ... This half-heartedness has never been completely overcome, and sexagesimal fractions still form the basis of our trigonometry, astronomy and measurement of time. ¶ ... Mathematicians sought to avoid fractions by taking the radius R equal to a number of units of length of the form 10n and then assuming for n so great an integral value that all occurring quantities could be expressed with sufficient accuracy by integers. ¶ The first to apply this method was the German astronomer Regiomontanus. To the extent that he expressed goniometrical line-segments in a unit R/10n, Regiomontanus may be called an anticipator of the doctrine of decimal positional fractions.
In the estimation of Dijksterhuis, "after the publication of De Thiende only a small advance was required to establish the complete system of decimal positional fractions, and this step was taken promptly by a number of writers ... next to Stevin the most important figure in this development was Regiomontanus." Dijksterhuis noted that [Stevin] "gives full credit to Regiomontanus for his prior contribution, saying that the trigonometric tables of the German astronomer actually contain the whole theory of 'numbers of the tenth progress'."
Mathematics
Base of the numeral system
In mathematical numeral systems the radix is usually the number of unique digits, including zero, that a positional numeral system uses to represent numbers. In some cases, such as with a negative base, the radix is the absolute value |b| of the base b. For example, for the decimal system the radix (and base) is ten, because it uses the ten digits from 0 through 9. When a number "hits" 9, the next number will not be another different symbol, but a "1" followed by a "0". In binary, the radix is two, since after it hits "1", instead of "2" or another written symbol, it jumps straight to "10", followed by "11" and "100".
The highest symbol of a positional numeral system usually has the value one less than the value of the radix of that numeral system. The standard positional numeral systems differ from one another only in the base they use.
The radix is an integer that is greater than 1, since a radix of zero would not have any digits, and a radix of 1 would only have the zero digit. Negative bases are rarely used. In a system with more than |b| unique digits, numbers may have many different possible representations.
It is important that the radix is finite, from which it follows that the number of digits is quite low; otherwise, the length of a numeral would not necessarily be logarithmic in its size.
(In certain non-standard positional numeral systems, including bijective numeration, the definition of the base or the allowed digits deviates from the above.)
In standard base-ten (decimal) positional notation, there are ten decimal digits and the number
2506 = 2×10³ + 5×10² + 0×10¹ + 6×10⁰.
In standard base-sixteen (hexadecimal), there are the sixteen hexadecimal digits (0–9 and A–F) and the number
171B₁₆ = 1×16³ + 7×16² + 1×16¹ + 11×16⁰ = 5915₁₀,
where B represents the number eleven as a single symbol.
In general, in base-b, there are b digits and the number
a₃a₂a₁a₀ (written in base b)
has value
a₃×b³ + a₂×b² + a₁×b¹ + a₀×b⁰.
Note that a₃a₂a₁a₀ represents a sequence of digits, not multiplication.
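A minimal Python sketch of this expansion rule (the helper name positional_value is illustrative, not standard terminology):

```python
def positional_value(digits, base):
    """Evaluate a digit sequence (most significant digit first) in the given base."""
    assert all(0 <= d < base for d in digits), "digits must lie in {0, ..., base-1}"
    n = len(digits)
    # Each digit contributes d * base**position, counting positions from the right.
    return sum(d * base ** (n - 1 - i) for i, d in enumerate(digits))

print(positional_value([1, 7, 1, 11], 16))  # 171B in hexadecimal -> 5915
print(positional_value([2, 5, 0, 6], 10))   # 2506 in decimal -> 2506
```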
Notation
When describing base in mathematical notation, the letter b is generally used as a symbol for this concept, so, for a binary system, b equals 2. Another common way of expressing the base is writing it as a decimal subscript after the number that is being represented (this notation is used in this article). 1111011₂ implies that the number 1111011 is a base-2 number, equal to 123₁₀ (a decimal notation representation), 173₈ (octal) and 7B₁₆ (hexadecimal). In books and articles, when using initially the written abbreviations of number bases, the base is not subsequently printed: it is assumed that binary 1111011 is the same as 1111011₂.
The base b may also be indicated by the phrase "base-b". So binary numbers are "base-2"; octal numbers are "base-8"; decimal numbers are "base-10"; and so on.
To a given radix b the set of digits {0, 1, ..., b−2, b−1} is called the standard set of digits. Thus, binary numbers have digits {0, 1}; decimal numbers have digits {0, 1, 2, ..., 9}; and so on. Therefore, the following are notational errors: 52₂, 2₂, 1A₉. (In all cases, one or more digits is not in the set of allowed digits for the given base.)
Exponentiation
Positional numeral systems work using exponentiation of the base. A digit's value is the digit multiplied by the value of its place. Place values are the number of the base raised to the nth power, where n is the number of other digits between a given digit and the radix point. If a given digit is on the left hand side of the radix point (i.e. its value is an integer) then n is positive or zero; if the digit is on the right hand side of the radix point (i.e., its value is fractional) then n is negative.
As an example of usage, the number 465 in its respective base b (which must be at least base 7 because the highest digit in it is 6) is equal to:
4×b² + 6×b¹ + 5×b⁰
If the number 465 was in base-10, then it would equal:
4×10² + 6×10¹ + 5×10⁰ = 400 + 60 + 5 = 465
(465₁₀ = 465₁₀)
If however, the number were in base 7, then it would equal:
4×7² + 6×7¹ + 5×7⁰ = 196 + 42 + 5 = 243
(465₇ = 243₁₀)
10_b = b for any base b, since 10_b = 1×b¹ + 0×b⁰. For example, 10₂ = 2; 10₃ = 3; 10₁₆ = 16₁₀. Note that the last "16" is indicated to be in base 10. The base makes no difference for one-digit numerals.
This concept can be demonstrated using a diagram. One object represents one unit. When the number of objects is equal to or greater than the base b, then a group of objects is created with b objects. When the number of these groups exceeds b, then a group of these groups of objects is created with b groups of b objects; and so on. Thus the same number in different bases will have different values:
241 in base 5:
2 groups of 52 (25) 4 groups of 5 1 group of 1
ooooo ooooo
ooooo ooooo ooooo ooooo
ooooo ooooo + + o
ooooo ooooo ooooo ooooo
ooooo ooooo
241 in base 8:
2 groups of 82 (64) 4 groups of 8 1 group of 1
oooooooo oooooooo
oooooooo oooooooo
oooooooo oooooooo oooooooo oooooooo
oooooooo oooooooo + + o
oooooooo oooooooo
oooooooo oooooooo oooooooo oooooooo
oooooooo oooooooo
oooooooo oooooooo
The notation can be further augmented by allowing a leading minus sign. This allows the representation of negative numbers. For a given base, every representation corresponds to exactly one real number and every real number has at least one representation. The representations of rational numbers are those representations that are finite, use the bar notation, or end with an infinitely repeating cycle of digits.
Digits and numerals
A digit is a symbol that is used for positional notation, and a numeral consists of one or more digits used for representing a number with positional notation. Today's most common digits are the decimal digits "0", "1", "2", "3", "4", "5", "6", "7", "8", and "9". The distinction between a digit and a numeral is most pronounced in the context of a number base.
A non-zero numeral with more than one digit position will mean a different number in a different number base, but in general, the digits will mean the same. For example, the base-8 numeral 23₈ contains two digits, "2" and "3", and with a base number (subscripted) "8". When converted to base-10, the 23₈ is equivalent to 19₁₀, i.e. 23₈ = 19₁₀. In our notation here, the subscript "8" of the numeral 23₈ is part of the numeral, but this may not always be the case.
Imagine the numeral "23" as having an ambiguous base number. Then "23" could likely be any base, from base-4 up. In base-4, the "23" means 11₁₀, i.e. 23₄ = 11₁₀. In base-60, the "23" means the number 123₁₀, i.e. 23₆₀ = 123₁₀. The numeral "23" then, in this case, corresponds to the set of base-10 numbers {11, 13, 15, 17, 19, 21, 23, ..., 121, 123} while its digits "2" and "3" always retain their original meaning: the "2" means "two of", and the "3" means "three of".
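Python's built-in int accepts a base between 2 and 36, which makes this ambiguity easy to demonstrate (a quick illustration, not drawn from the article's sources):

```python
# The same numeral "23" read in several bases: same digits, different numbers.
for base in (4, 8, 10, 16, 36):
    print(base, int("23", base))   # 4 -> 11, 8 -> 19, 10 -> 23, 16 -> 35, 36 -> 75
```

Base 60 exceeds what int supports, but the same rule, 2×60 + 3 = 123, applies.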
In certain applications when a numeral with a fixed number of positions needs to represent a greater number, a higher number-base with more digits per position can be used. A three-digit, decimal numeral can represent only up to 999. But if the number-base is increased to 11, say, by adding the digit "A", then the same three positions, maximized to "AAA", can represent a number as great as 1330. We could increase the number base again and assign "B" to 11, and so on (but there is also a possible encryption between number and digit in the number-digit-numeral hierarchy). A three-digit numeral "ZZZ" in base-60 could mean . If we use the entire collection of our alphanumerics we could ultimately serve a base-62 numeral system, but we remove two digits, uppercase "I" and uppercase "O", to reduce confusion with digits "1" and "0".
We are left with a base-60, or sexagesimal numeral system utilizing 60 of the 62 standard alphanumerics. (But see Sexagesimal system below.) In general, the number of possible values that can be represented by a d-digit number in base b is bᵈ.
The common numeral systems in computer science are binary (radix 2), octal (radix 8), and hexadecimal (radix 16). In binary only digits "0" and "1" are in the numerals. The octal numerals are the eight digits 0–7. Hex is 0–9 A–F, where the ten numerics retain their usual meaning, and the alphabetics correspond to values 10–15, for a total of sixteen digits. The numeral "10" is binary numeral "2", octal numeral "8", or hexadecimal numeral "16".
Radix point
The notation can be extended into the negative exponents of the base b. Thereby the so-called radix point, mostly ».«, is used as separator between the positions with non-negative exponent and those with negative exponent.
Numbers that are not integers use places beyond the radix point. For every position behind this point (and thus after the units digit), the exponent n of the power bⁿ decreases by 1 and the power approaches 0. For example, the number 2.35 is equal to:
2×10⁰ + 3×10⁻¹ + 5×10⁻²
Sign
If the base and all the digits in the set of digits are non-negative, negative numbers cannot be expressed. To overcome this, a minus sign, here »−«, is added to the numeral system. In the usual notation it is prepended to the string of digits representing the otherwise non-negative number.
Base conversion
The conversion to a base b₂ of an integer n represented in base b₁ can be done by a succession of Euclidean divisions by b₂: the right-most digit in base b₂ is the remainder of the division of n by b₂; the second right-most digit is the remainder of the division of the quotient by b₂; and so on. The left-most digit is the last quotient. In general, the kth digit from the right is the remainder of the division by b₂ of the (k−1)th quotient.
For example: converting 0xA10B (hexadecimal) to decimal (41227):
0xA10B/10 = 0x101A R: 7 (ones place)
0x101A/10 = 0x19C R: 2 (tens place)
0x19C/10 = 0x29 R: 2 (hundreds place)
0x29/10 = 0x4 R: 1 ...
4
When converting to a larger base (such as from binary to decimal), each remainder represents a single digit of the result, expressed using digits of the source base. For example: converting 0b11111001 (binary) to 249 (decimal):
0b11111001/10 = 0b11000 R: 0b1001 (0b1001 = "9" for ones place)
0b11000/10 = 0b10 R: 0b100 (0b100 = "4" for tens)
0b10/10 = 0b0 R: 0b10 (0b10 = "2" for hundreds)
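The repeated-division procedure shown above can be sketched in Python, with divmod performing each Euclidean division (to_digits is a hypothetical helper name):

```python
def to_digits(n, base):
    """Digit values of a non-negative integer in the given base,
    most significant first, obtained by repeated Euclidean division."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, base)  # quotient and remainder
        digits.append(r)        # remainders emerge least significant first
    return digits[::-1]

print(to_digits(0xA10B, 10))      # [4, 1, 2, 2, 7]  i.e. 41227
print(to_digits(0b11111001, 10))  # [2, 4, 9]        i.e. 249
```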
For the fractional part, conversion can be done by taking digits after the radix point (the numerator), and dividing it by the implied denominator in the target radix. Approximation may be needed due to a possibility of non-terminating digits if the reduced fraction's denominator has a prime factor other than any of the base's prime factor(s). For example, 0.1 in decimal (1/10) is 0b1/0b1010 in binary; dividing this in that radix, the result is 0b0.00011 with the block 0011 repeating indefinitely (because one of the prime factors of 10 is 5). For more general fractions and bases see the algorithm for positive bases.
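The fractional rule can be sketched the same way; frac_digits (an illustrative name) repeatedly multiplies by the target base and strips off integer parts, stopping after a fixed number of places since the expansion may never terminate:

```python
from fractions import Fraction

def frac_digits(num, den, base, places):
    """First `places` digits after the radix point of num/den in `base`."""
    digits = []
    r = Fraction(num, den)          # exact rational arithmetic avoids rounding
    for _ in range(places):
        r *= base
        d = int(r)                  # the integer part is the next digit
        digits.append(d)
        r -= d                      # keep only the remaining fractional part
    return digits

print(frac_digits(1, 10, 2, 8))  # decimal 0.1 in binary: [0, 0, 0, 1, 1, 0, 0, 1]
```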
Alternatively, Horner's method can be used for base conversion using repeated multiplications, with the same computational complexity as repeated divisions. A number in positional notation can be thought of as a polynomial, where each digit is a coefficient. Coefficients can be larger than one digit, so an efficient way to convert bases is to convert each digit, then evaluate the polynomial via Horner's method within the target base. Converting each digit is a simple lookup table, removing the need for expensive division or modulus operations; and multiplication by x becomes right-shifting. However, other polynomial evaluation algorithms would work as well, like repeated squaring for single or sparse digits. Example:
Convert 0xA10B to 41227
A10B = (10*16^3) + (1*16^2) + (0*16^1) + (11*16^0)
Lookup table:
0x0 = 0
0x1 = 1
...
0x9 = 9
0xA = 10
0xB = 11
0xC = 12
0xD = 13
0xE = 14
0xF = 15
Therefore 0xA10B's decimal digits are 10, 1, 0, and 11.
Lay the digits out like this. The most significant digit (10) is "dropped":
10 1 0 11 <- Digits of 0xA10B
---------------
10
Then we multiply the bottom number by the source base (16); the product is placed under the next digit of the source value, and then added:
10 1 0 11
160
---------------
10 161
Repeat until the final addition is performed:
10 1 0 11
160 2576 41216
---------------
10 161 2576 41227
and that is 41227 in decimal.
Convert 0b11111001 to 249
Lookup table:
0b0 = 0
0b1 = 1
Result:
1 1 1 1 1 0 0 1 <- Digits of 0b11111001
2 6 14 30 62 124 248
-------------------------
1 3 7 15 31 62 124 249
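The tableau above is exactly Horner's rule: one multiplication by the base and one addition per digit. A compact sketch (horner_value is an illustrative name):

```python
def horner_value(digits, base):
    """Evaluate a digit sequence (most significant first) by Horner's method."""
    value = 0
    for d in digits:
        value = value * base + d  # shift everything one place, then add the digit
    return value

print(horner_value([10, 1, 0, 11], 16))           # 0xA10B -> 41227
print(horner_value([1, 1, 1, 1, 1, 0, 0, 1], 2))  # 0b11111001 -> 249
```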
Terminating fractions
The numbers which have a finite representation form the semiring
{ n / bᵏ : n ∈ ℕ₀, k ∈ ℕ₀ }.
More explicitly, if b = p₁^ν₁ ⋯ p_r^ν_r is a factorization of b into the primes p₁ < ⋯ < p_r with exponents ν₁, …, ν_r, then with the non-empty set of denominators S := {p₁, …, p_r} we have
ℤ[1/b] = ⟨S⟩⁻¹ℤ = S⁻¹ℤ,
where ⟨S⟩ is the multiplicative group generated by the pᵢ and S⁻¹ℤ is the so-called localization of ℤ with respect to S.
The denominator of an element of S⁻¹ℤ contains, if reduced to lowest terms, only prime factors out of S.
This ring of all terminating fractions to base b is dense in the field of rational numbers ℚ. Its completion for the usual (Archimedean) metric is the same as for ℚ, namely the real numbers ℝ. So, if b = p is a prime, ℤ[1/p] is not to be confused with ℤ₍ₚ₎, the discrete valuation ring for the prime p, which is equal to S⁻¹ℤ with S = ℤ ∖ pℤ.
If b₁ divides b₂, we have
ℤ[1/b₁] ⊆ ℤ[1/b₂].
Infinite representations
Rational numbers
The representation of non-integers can be extended to allow an infinite string of digits beyond the point. For example, 1.12112111211112 ... base-3 represents the sum of the infinite series:
1×3⁰ + 1×3⁻¹ + 2×3⁻² + 1×3⁻³ + 1×3⁻⁴ + 2×3⁻⁵ + 1×3⁻⁶ + 1×3⁻⁷ + 1×3⁻⁸ + 2×3⁻⁹ + ⋯
Since a complete infinite string of digits cannot be explicitly written, the trailing ellipsis (...) designates the omitted digits, which may or may not follow a pattern of some kind. One common pattern is when a finite sequence of digits repeats infinitely. This is designated by drawing a vinculum across the repeating block; for example, 0.123123123… can be written by drawing a vinculum over the block "123" in 0.123.
This is the repeating decimal notation (for which there does not exist a single universally accepted notation or phrasing).
For base 10 it is called a repeating decimal or recurring decimal.
An irrational number has an infinite non-repeating representation in all integer bases. Whether a rational number has a finite representation or requires an infinite repeating representation depends on the base. For example, one third can be represented by:
0.1₃
or, with the base implied:
0.1 (in base three)
In base ten, by contrast, one third has only the infinite repeating representation 0.333…
(see also 0.999...)
For integers p and q with gcd (p, q) = 1, the fraction p/q has a finite representation in base b if and only if each prime factor of q is also a prime factor of b.
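This criterion can be checked mechanically: strip from q every prime factor it shares with b and see whether anything is left (terminates is a hypothetical helper name):

```python
from math import gcd
from fractions import Fraction

def terminates(p, q, base):
    """True iff p/q has a finite representation in the given base."""
    q = Fraction(p, q).denominator  # reduce to lowest terms first
    g = gcd(q, base)
    while g > 1:                    # repeatedly remove factors shared with the base
        while q % g == 0:
            q //= g
        g = gcd(q, base)
    return q == 1

print(terminates(1, 3, 10))  # False: 1/3 = 0.333... in decimal
print(terminates(1, 3, 9))   # True:  1/3 = 0.3 in base nine
print(terminates(1, 10, 2))  # False: 0.1 has no finite binary representation
```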
For a given base, any number that can be represented by a finite number of digits (without using the bar notation) will have multiple representations, including one or two infinite representations:
A finite or infinite number of zeroes can be appended:
0.46 = 0.460 = 0.4600 = 0.46000…
The last non-zero digit can be reduced by one and an infinite string of digits, each corresponding to one less than the base, are appended (or replace any following zero digits):
0.46 = 0.45999…
1 = 0.999…
(see also 0.999...)
Irrational numbers
A (real) irrational number has an infinite non-repeating representation in all integer bases.
Examples are the non-solvable nth roots
ⁿ√x, with x ∈ ℚ₊ a rational whose nth root is irrational and n > 1,
numbers which are called algebraic, or numbers like
π and e,
which are transcendental. The number of transcendentals is uncountable and the sole way to write them down with a finite number of symbols is to give them a symbol or a finite sequence of symbols.
Applications
Decimal system
In the decimal (base-10) Hindu–Arabic numeral system, each position starting from the right is a higher power of 10. The first position represents 10⁰ (1), the second position 10¹ (10), the third position 10² (10×10, or 100), the fourth position 10³ (10×10×10, or 1000), and so on.
Fractional values are indicated by a separator, which can vary in different locations. Usually this separator is a period or full stop, or a comma. Digits to the right of it are multiplied by 10 raised to a negative power or exponent. The first position to the right of the separator indicates 10⁻¹ (0.1), the second position 10⁻² (0.01), and so on for each successive position.
As an example, the number 2674 in a base-10 numeral system is:
(2 × 10³) + (6 × 10²) + (7 × 10¹) + (4 × 10⁰)
or
(2 × 1000) + (6 × 100) + (7 × 10) + (4 × 1).
Sexagesimal system
The sexagesimal or base-60 system was used for the integral and fractional portions of Babylonian numerals and other Mesopotamian systems, by Hellenistic astronomers using Greek numerals for the fractional portion only, and is still used for modern time and angles, but only for minutes and seconds. However, not all of these uses were positional.
Modern time separates each position by a colon or a prime symbol. For example, the time might be 10:25:59 (10 hours 25 minutes 59 seconds). Angles use similar notation. For example, an angle might be 10°25′59″ (10 degrees 25 minutes 59 seconds). In both cases, only minutes and seconds use sexagesimal notation—angular degrees can be larger than 59 (one rotation around a circle is 360°, two rotations are 720°, etc.), and both time and angles use decimal fractions of a second. This contrasts with the numbers used by Hellenistic and Renaissance astronomers, who used thirds, fourths, etc. for finer increments. Where we might write 10°25′59.392″, they would have written 10°25′59″23‴31⁗12′′′′′, appending units for thirds, fourths, and fifths.
Using a 60-symbol digit set made up of the ten digits together with upper- and lowercase letters allows short notation for sexagesimal numbers, e.g. 10:25:59 becomes 'ARz' (by omitting I and O, but not i and o), which is useful for use in URLs, etc., but it is not very intelligible to humans.
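One plausible realization of such a digit set (an illustrative Python sketch; the exact symbol ordering is assumed from the 'ARz' example in the text) is:

```python
import string

# 60-symbol digit set: 0-9, then A-Z omitting I and O, then a-z (i and o kept)
DIGITS60 = (string.digits
            + ''.join(c for c in string.ascii_uppercase if c not in 'IO')
            + string.ascii_lowercase)
assert len(DIGITS60) == 60

def encode60(positions):
    """Encode a list of base-60 position values as one character each."""
    return ''.join(DIGITS60[p] for p in positions)

print(encode60([10, 25, 59]))  # 'ARz'
```

Under this ordering, 10 maps to 'A', 25 to 'R' (since I and O are skipped), and 59 to 'z'.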
In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integral and fractional portions of the number and using a comma (,) to separate the positions within each portion. For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days, and the angle used in the example above would be written 10;25,59,23,31,12 degrees.
Computing
In computing, the binary (base-2), octal (base-8) and hexadecimal (base-16) bases are most commonly used. Computers, at the most basic level, deal only with sequences of conventional zeroes and ones, thus it is easier in this sense to deal with powers of two. The hexadecimal system is used as "shorthand" for binary—every 4 binary digits (bits) relate to one and only one hexadecimal digit. In hexadecimal, the six digits after 9 are denoted by A, B, C, D, E, and F (and sometimes a, b, c, d, e, and f).
The octal numbering system is also used as another way to represent binary numbers. In this case the base is 8 and therefore only digits 0, 1, 2, 3, 4, 5, 6, and 7 are used. When converting from binary to octal every 3 bits relate to one and only one octal digit.
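The digit-grouping correspondence can be checked with any language's built-in base conversions; for example, in Python:

```python
n = 0b11_1010_1101   # 941, grouped 4 bits per hexadecimal digit
print(hex(n))        # 0x3ad  (0011 1010 1101 -> 3 A D)

m = 0b1_110_101_101  # the same value, grouped 3 bits per octal digit
print(oct(m))        # 0o1655 (1 110 101 101 -> 1 6 5 5)

assert n == m == 941
```

Because 16 = 2⁴ and 8 = 2³, each hexadecimal or octal digit stands for a fixed-width group of bits, which is why these bases serve as binary shorthand.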
Hexadecimal, decimal, octal, and a wide variety of other bases have been used for binary-to-text encoding, implementations of arbitrary-precision arithmetic, and other applications.
For a list of bases and their applications, see list of numeral systems.
Other bases in human language
Base-12 systems (duodecimal or dozenal) have been popular because multiplication and division are easier than in base-10, with addition and subtraction being just as easy. Twelve is a useful base because it has many factors. It is the smallest common multiple of one, two, three, four and six. There is still a special word for "dozen" in English, and by analogy with the word for 102, hundred, commerce developed a word for 122, gross. The standard 12-hour clock and common use of 12 in English units emphasize the utility of the base. In addition, prior to its conversion to decimal, the old British currency Pound Sterling (GBP) partially used base-12; there were 12 pence (d) in a shilling (s), 20 shillings in a pound (£), and therefore 240 pence in a pound. Hence the term LSD or, more properly, £sd.
The Maya civilization and other civilizations of pre-Columbian Mesoamerica used base-20 (vigesimal), as did several North American tribes (two being in southern California). Evidence of base-20 counting systems is also found in the languages of central and western Africa.
Remnants of a Gaulish base-20 system also exist in French, as seen today in the names of the numbers from 60 through 99. For example, sixty-five is soixante-cinq (literally, "sixty [and] five"), while seventy-five is soixante-quinze (literally, "sixty [and] fifteen"). Furthermore, for any number between 80 and 99, the "tens-column" number is expressed as a multiple of twenty. For example, eighty-two is quatre-vingt-deux (literally, four twenty[s] [and] two), while ninety-two is quatre-vingt-douze (literally, four twenty[s] [and] twelve). In Old French, forty was expressed as two twenties and sixty was three twenties, so that fifty-three was expressed as two twenties [and] thirteen, and so on.
In English the same base-20 counting appears in the use of "scores". Although mostly historical, it is occasionally used colloquially. Verse 10 of Psalm 90 in the King James Version of the Bible starts: "The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labour and sorrow". The Gettysburg Address starts: "Four score and seven years ago".
The Irish language also used base-20 in the past, twenty being fichid, forty dhá fhichid, sixty trí fhichid and eighty ceithre fhichid. A remnant of this system may be seen in the modern word for 40, daichead.
The Welsh language continues to use a base-20 counting system, particularly for the age of people, dates and in common phrases. 15 is also important, with 16–19 being "one on 15", "two on 15" etc. 18 is normally "two nines". A decimal system is commonly used.
The Inuit languages use a base-20 counting system. Students from Kaktovik, Alaska invented a base-20 numeral system in 1994.
Danish numerals display a similar base-20 structure.
The Māori language of New Zealand also has evidence of an underlying base-20 system as seen in the terms Te Hokowhitu a Tu referring to a war party (literally "the seven 20s of Tu") and Tama-hokotahi, referring to a great warrior ("the one man equal to 20").
The binary system was used in the Egyptian Old Kingdom, 3000 BC to 2050 BC. It was cursive, rounding off rational numbers smaller than 1 to sums of the binary fractions 1/2, 1/4, 1/8, 1/16, 1/32 and 1/64, with any remaining 1/64 term thrown away (the system was called the Eye of Horus).
A number of Australian Aboriginal languages employ binary or binary-like counting systems. For example, in Kala Lagaw Ya, the numbers one through six are urapon, ukasar, ukasar-urapon, ukasar-ukasar, ukasar-ukasar-urapon, ukasar-ukasar-ukasar.
North and Central American natives used base-4 (quaternary) to represent the four cardinal directions. Mesoamericans tended to add a second base-5 system to create a modified base-20 system.
A base-5 system (quinary) has been used in many cultures for counting. Plainly it is based on the number of digits on a human hand. It may also be regarded as a sub-base of other bases, such as base-10, base-20, and base-60.
A base-8 system (octal) was devised by the Yuki tribe of Northern California, who used the spaces between the fingers to count, corresponding to the digits one through eight. There is also linguistic evidence which suggests that the Bronze Age Proto-Indo-Europeans (from whom most European and Indic languages descend) might have replaced a base-8 system (or a system which could only count up to 8) with a base-10 system. The evidence is that the word for 9, newm, is suggested by some to derive from the word for "new", newo-, suggesting that the number 9 had been recently invented and called the "new number".
Many ancient counting systems use five as a primary base, almost surely coming from the number of fingers on a person's hand. Often these systems are supplemented with a secondary base, sometimes ten, sometimes twenty. In some African languages the word for five is the same as "hand" or "fist" (Dyola language of Guinea-Bissau, Banda language of Central Africa). Counting continues by adding 1, 2, 3, or 4 to combinations of 5, until the secondary base is reached. In the case of twenty, this word often means "man complete". This system is referred to as quinquavigesimal. It is found in many languages of the Sudan region.
The Telefol language, spoken in Papua New Guinea, is notable for possessing a base-27 numeral system.
Non-standard positional numeral systems
Interesting properties exist when the base is not fixed or positive and when the digit symbol sets denote negative values. There are many more variations. These systems are of practical and theoretic value to computer scientists.
Balanced ternary uses a base of 3, but the digit set is {1̄, 0, 1} instead of {0, 1, 2}. The "1̄" has an equivalent value of −1. The negation of a number is easily formed by exchanging the 1̄s and the 1s. This system can be used to solve the balance problem, which requires finding a minimal set of known counter-weights to determine an unknown weight. Weights of 1, 3, 9, ..., 3ⁿ known units can be used to determine any unknown weight up to 1 + 3 + ... + 3ⁿ units. A weight can be used on either side of the balance or not at all. Weights used on the balance pan with the unknown weight are designated with 1̄, with 1 if used on the empty pan, and with 0 if not used. If an unknown weight W is balanced with 3 (3¹) on its pan and 1 and 27 (3⁰ and 3³) on the other, then its weight in decimal is 25, or 101̄1 in balanced base-3.
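Conversion of a decimal integer into balanced ternary can be sketched as follows (a hypothetical Python helper, not from the original text; the negative digit is written here as −1 rather than a barred 1):

```python
def to_balanced_ternary(n):
    """Digits of n in balanced ternary, least significant first, each in {-1, 0, 1}."""
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:        # a remainder of 2 becomes digit -1 with a carry into the next place
            r = -1
            n += 1
        digits.append(r)
    return digits or [0]

# 25 = 1*27 + 0*9 + (-1)*3 + 1*1: put the 3-unit weight beside the unknown,
# and the 1- and 27-unit weights on the empty pan
print(to_balanced_ternary(25))  # [1, -1, 0, 1] (least significant first)
```

Negating every digit negates the number, which is the property the prose describes.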
The factorial number system uses a varying radix, giving factorials as place values; they are related to Chinese remainder theorem and residue number system enumerations. This system effectively enumerates permutations. A derivative of this uses the Towers of Hanoi puzzle configuration as a counting system. The configuration of the towers can be put into 1-to-1 correspondence with the decimal count of the step at which the configuration occurs and vice versa.
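The enumeration of permutations by factorial-base digits (the Lehmer code) can be sketched as follows (hypothetical Python helpers, not from the original text):

```python
def to_factoradic(n, width):
    """Factorial-base digits of n, most significant first (the last digit is always 0)."""
    digits = []
    for radix in range(1, width + 1):
        n, r = divmod(n, radix)
        digits.append(r)
    return digits[::-1]

def nth_permutation(items, n):
    """Use the factoradic of n as a Lehmer code to select the n-th permutation."""
    pool = list(items)
    return [pool.pop(d) for d in to_factoradic(n, len(pool))]

# 4 = 2*2! + 0*1! + 0*0!, so the factoradic of 4 over 3 places is [2, 0, 0]
print(to_factoradic(4, 3))        # [2, 0, 0]
print(nth_permutation('abc', 4))  # ['c', 'a', 'b']
```

Each factoradic digit picks (and removes) one element from the remaining pool, so the n-th digit string corresponds one-to-one with the n-th permutation in lexicographic order.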
Non-positional positions
Each position does not need to be positional itself. Babylonian sexagesimal numerals were positional, but in each position were groups of two kinds of wedges representing ones and tens (a narrow vertical wedge | for the one and an open left pointing wedge ⟨ for the ten) — up to 5+9=14 symbols per position (i.e. 5 tens ⟨⟨⟨⟨⟨ and 9 ones ||||||||| grouped into one or two near squares containing up to three tiers of symbols, or a place holder (\\) for the lack of a position). Hellenistic astronomers used one or two alphabetic Greek numerals for each position (one chosen from 5 letters representing 10–50 and/or one chosen from 9 letters representing 1–9, or a zero symbol).
Root canal

A root canal is the naturally occurring anatomic space within the root of a tooth. It consists of the pulp chamber (within the coronal part of the tooth), the main canal(s), and more intricate anatomical branches that may connect the root canals to each other or to the surface of the root.
Structure
At the center of every tooth is a hollow area that houses soft tissues, such as the nerve, blood vessels, and connective tissue. This hollow area contains a relatively wide space in the coronal portion of the tooth called the pulp chamber, which is continuous with canals that run through the center of the roots, similar to the way graphite runs through a pencil. The pulp receives nutrition through the blood vessels, and sensory nerves carry signals back to the brain. If the pulp is irreversibly damaged, the tooth can be relieved of pain by root canal treatment.
Root canal anatomy consists of the pulp chamber and root canals, both of which contain the dental pulp. The smaller branches, referred to as accessory canals, are most frequently found near the root end (apex), but may be encountered anywhere along the root length. The total number of root canals per tooth depends on the number of tooth roots, ranging from one to four (five or more in some cases), and sometimes there is more than one root canal per root. Some teeth have a more variable internal anatomy than others.
An unusual root canal shape, complex branching (especially the existence of horizontal branches), and multiple root canals are considered the main causes of root canal treatment failure: for example, if a secondary root canal goes unnoticed by the dentist and is not cleaned and sealed, it will remain infected, causing the root canal therapy to fail.
Root canal system
The specific features and complexity of the internal anatomy of the teeth have been thoroughly studied. Using a replica technique on thousands of teeth, Walter Hess made clear as early as 1917 that the internal space of dental roots is often a complex system composed of a central area (root canals with round, oval or irregular cross-sectional shape) and lateral parts (fins, anastomoses and accessory canals). In fact, this lateral component may represent a relatively large volume, which challenges the cleaning phase of the instrumentation procedure in that tissue remnants of the vital or necrotic pulp as well as infectious elements are not easily removed in these areas. Thus, the image of root canals having a smooth, conical shape is generally too idealistic and underestimates the reach of root canal instrumentation.
Contents
The space inside the root canals is filled with a highly vascularized, loose connective tissue, called dental pulp. The dental pulp is the tissue of which the dentin portion of the tooth is composed. The dental pulp helps complete formation of the secondary teeth (adult teeth) one to two years after eruption into the mouth. The dental pulp also nourishes and hydrates the tooth structure, making the tooth more resilient, less brittle and less prone to fracture from chewing hard foods. Additionally, the dental pulp provides a hot and cold sensory function.
Variation
Root canals presenting an oval cross-section are found in 50–70% of root canals. In addition, canals with a "tear-shaped" cross section are common when a single root contains two canals (as occurs, for example, with the additional mesial root seen with the lower molars), subtleties that can be more difficult to appreciate on classical radiographs.
Recent studies have shown that use of cone-beam CT scans can detect accessory canals that would have been missed in 23% of cases, which can in turn lead to apical periodontitis. The upper molars, in particular, are predisposed to have an occult accessory canal in nearly half of patients.
Clinical significance
Root canal is also a colloquial term for a dental operation, endodontic therapy, wherein the pulp is cleaned out, the space disinfected and then filled.
When rotary nickel titanium (NiTi) files are used in canals with flat-oval or tear-shaped cross sections, a circular bore is created due to the rotational action of the metal. Also, small cavities within the canal such as the buccal or lingual recesses may not be instrumented within the tooth, potentially leaving residual disease during disinfection.
Tissue or biofilm remnants along such un-instrumented recesses may lead to failure due to both inadequate disinfection and the inability to properly obturate the root-canal space. Consequently, the biofilm should be removed with a disinfectant, commonly sodium hypochlorite, during root canal treatment.
Tsuga

Tsuga (from Japanese, the name of Tsuga sieboldii) is a genus of conifers in the subfamily Abietoideae of Pinaceae, the pine family. The English-language common name "hemlock" arose from a perceived similarity in the smell of its crushed foliage to that of the unrelated plant hemlock. Unlike the latter, Tsuga species are not poisonous.
The genus comprises eight to ten species (depending on the authority), with four species occurring in North America and four to six in eastern Asia.
Description
They are medium-sized to large evergreen trees, ranging from tall, with a conical to irregular crown, the latter occurring especially in some of the Asian species. The leading shoots generally droop. The bark is scaly and commonly deeply furrowed, with the colour ranging from grey to brown. The branches stem horizontally from the trunk and are usually arranged in flattened sprays that bend downward towards their tips. Short spur shoots, which are present in many gymnosperms, are weakly to moderately developed. The young twigs, as well as the distal portions of stem, are flexible and often pendent. The stems are rough with pulvini that persist after the leaves fall. The winter buds are ovoid or globose, usually rounded at the apex and not resinous.
The leaves are flattened to slightly angular and range from long and broad. They are borne singly and are arranged spirally on the stem; the leaf bases are twisted so the leaves lie flat either side of the stem or more rarely radially. Towards the base, the leaves narrow abruptly to a petiole set on a forward-angled pulvinus. The petiole is twisted at the base so it is almost parallel with the stem. The leaf apex is either notched, rounded, or acute. The undersides have two white stomatal bands (which are inconspicuous on T. mertensiana) separated by an elevated midvein. The upper surface of the leaves lack stomata, except those of T. mertensiana. They have one resin canal that is present beneath the single vascular bundle.
The pollen cones grow solitary from lateral buds. They are usually up to in length, ovoid, globose, or ellipsoid, and yellowish-white to pale purple, and borne on a short peduncle. The pollen itself has a saccate, ring-like structure at its distal pole, and rarely this structure can be more or less doubly saccate. The seed cones are borne on year-old twigs and are small ovoid-globose or oblong-cylindric, ranging from long, except in T. mertensiana, where they are cylindrical and longer, in length; they are solitary, terminal or rarely lateral, pendulous, and are sessile or on a short peduncle up to long. Maturation occurs in 5–8 months, and the seeds are shed shortly thereafter; the cones are shed soon after seed release or up to a year or two later. The seed scales are thin, leathery, and persistent. They vary in shape and lack an apophysis and an umbo. The bracts are included and small. The seeds are small, from long, and winged, with the wing being in length. They also contain small adaxial resin vesicles. Seed germination is epigeal; the seedlings have 4–6 cotyledons.
Taxonomy
Mountain hemlock (T. mertensiana) is unusual in the genus in several respects. The leaves are less flattened and arranged all round the shoot, and have stomata above as well as below, giving the foliage a glaucous colour; and the cones are the longest in the genus, long and cylindrical rather than ovoid. Some botanists treat it in a distinct genus as Hesperopeuce mertensiana (Bong.) Rydb., though it is more generally only considered distinct at the rank of subgenus.
The oldest fossils attributed to the genus are twigs, known from the Early Cretaceous of Inner Mongolia, China, though their relationship to modern Tsuga is ambiguous. The earliest pollen attributed to the genus is known from the Upper Cretaceous of Poland, dating to around 90 million years ago. Abundant remains are only known from the Eocene onwards, when the modern Tsuga crown group is thought to have begun to diversify. Though formerly present in Europe, Tsuga became extinct there during the Middle Pleistocene, around 780,000–440,000 years ago, due to unfavourable climate change caused by the ongoing Quaternary glaciation.
Another species, bristlecone hemlock, first described as T. longibracteata, is now treated in a distinct genus Nothotsuga; it differs from Tsuga in the erect (not pendulous) cones with exserted bracts, and male cones clustered in umbels, in these features more closely allied to the genus Keteleeria.
Phylogeny
This phylogeny is in marked conflict with earlier studies, which found T. mertensiana as basal to the rest of the genus.
Species
Accepted living species
Tsuga canadensis – eastern hemlock – Eastern Canada, Eastern United States
Tsuga caroliniana – Carolina hemlock – Southern Appalachians
Tsuga chinensis – Taiwan hemlock – Taiwan, Tibet, much of China
Tsuga diversifolia – northern Japanese hemlock – Honshu, Kyushu
Tsuga dumosa – Himalayan hemlock – Himalayas, Tibet, Yunnan, Sichuan
Tsuga forrestii – Forrest's hemlock – Sichuan, Yunnan, Guizhou
Tsuga heterophylla – western hemlock – Western Canada, Northwestern United States
Tsuga × jeffreyi – British Columbia, Washington (doubtful; often treated as a variety of T. mertensiana, with no verified evidence of hybrid origin)
Tsuga mertensiana – mountain hemlock – Alaska, British Columbia, Western United States
Tsuga sieboldii – southern Japanese hemlock – Japan
Tsuga ulleungensis – Ulleungdo hemlock – Ulleung Island, Korea
Accepted paleospecies
†Tsuga aburaensis - Abura, Hokkaido (Miocene)
†Tsuga asiatica - Lawula Formation, Tibet (Priabonian)
†Tsuga europaea - Maria Mine, Alsdorf, North Rhine-Westphalia (Miocene)
†Tsuga nanfengensis - Yunnan (Late Miocene)
†Tsuga swedaea - Buchanan Lake Formation, Axel Heiberg Island (Lutetian)
†Tsuga taxoides - Inner Mongolia (Early Cretaceous)
†Tsuga xianfengensis - Yunnan (Late Miocene)
Formerly included
Moved to other genera:
Ecology
The species are all adapted to (and are confined to) relatively moist, cool temperate areas with high rainfall, cool summers, and little or no water stress; they are also adapted to cope with heavy to very heavy winter snowfall and tolerate ice storms better than most other trees. Hemlock trees are more tolerant of heavy shade than other conifers; they are, however, more susceptible to drought.
Threats
The two eastern North American species, T. canadensis and T. caroliniana, are under serious threat by the sap-sucking insect Adelges tsugae (hemlock woolly adelgid). This adelgid, related to the aphids, was introduced accidentally from eastern Asia, where it is only a minor pest. Extensive mortality has occurred, particularly east of the Appalachian Mountains. The Asian species are resistant to this pest, and the two western American hemlocks are moderately resistant. In North America, hemlocks are also attacked by hemlock looper. Larger infected hemlocks have large, relatively high root systems that can bring other trees down if one falls. The foliage of young trees is often browsed by deer, and the seeds are eaten by finches and small rodents.
Old trees are commonly attacked by various fungal disease and decay species, notably Heterobasidion annosum and Armillaria species, which rot the heartwood and eventually leave the tree liable to windthrow, and Rhizina undulata, which may kill groups of trees following minor grass fires that activate growth of the Rhizina spores.
Uses
The wood obtained from hemlocks is important in the timber industry, especially for use as wood pulp. Many species are used in horticulture, and numerous cultivars have been selected for use in gardens. The bark is also used in tanning leather. The needles are sometimes used to make a tea and perfume.
Megalania

Megalania (Varanus priscus) is an extinct species of giant monitor lizard, part of the megafaunal assemblage that inhabited Australia during the Pleistocene. It is the largest terrestrial lizard known to have existed, but the fragmentary nature of the known remains makes size estimates highly uncertain. Recent studies suggest that most known specimens would have reached around in body length excluding the tail, while some individuals would have been significantly larger, reaching sizes around in length.
Megalania is thought to have had a similar ecology to the living Komodo dragon (Varanus komodoensis) which may be its closest living relative. The youngest fossil remains of giant monitor lizards in Australia date to around 50,000 years ago. The first indigenous settlers of Australia might have encountered megalania, and been a factor in megalania's extinction. While originally megalania was considered to be the only member of the titular genus "Megalania", today it is considered a member of the genus Varanus, being closely related to other Australian monitor lizards.
Taxonomy
Sir Richard Owen described the first known remains of megalania in 1859, from three vertebrae amongst a collection of primarily marsupial bones purchased by the British Museum, collected from the bed of a tributary of the Condamine River, west of Moreton Bay in eastern Australia. The name "Megalania prisca" was coined in the paper by Owen to mean "ancient great roamer"; the name was chosen "in reference to the terrestrial nature of the great Saurian". Owen used a modification of the Greek word ἠλαίνω ēlainō ("I roam"). The close similarity to the Latin word: lania (feminine form of "butcher") has resulted in numerous taxonomic and popular descriptions of "Megalania" mistranslating the name as "ancient giant butcher." "Megalania" is no longer considered a valid genus, with many authors preferring to consider it a junior synonym of Varanus, which encompasses all living monitor lizards. The genera "Megalania" and Varanus are respectively feminine and masculine in grammatical gender and their specific names need to match them: prisca (feminine) and priscus (masculine).
Megalania is included within Varanus because its morphology suggests that it is more closely related to some species of Varanus than others, so excluding V. priscus from Varanus renders the latter genus an unnatural grouping. Ralph Molnar noted in 2004 that, even if every species of the genus Varanus were divided into groups currently designated as subgenera, V. priscus would still be classified in the genus Varanus, because this is the current subgenus name, as well as genus name, for all Australian monitors. Unless other Australian monitor species were each also classified in their own exclusive genera, "Megalania" would not be a valid genus name. However, Molnar noted that "megalania" is suitable for use as a vernacular, rather than scientific, name for the species Varanus priscus.
Phylogeny
Several studies have attempted to establish the phylogenetic position of megalania within the Varanidae. An affinity with the perentie (Varanus giganteus), Australia's largest living lizard, has been suggested based on skull-roof morphology. The most recent comprehensive study proposes a sister-taxon relationship with the large Komodo dragon (Varanus komodoensis) based on neurocranial similarities, with the lace monitor (Varanus varius) as the closest living Australian relative. Conversely, the perentie is considered more closely related to Gould's monitor and the Argus monitor.
Size
The lack of complete or nearly complete fossil skeletons has made it difficult to determine the exact dimensions of megalania. Early estimates placed the length of the largest individuals at , with a maximum weight of approximately . In 2002, Stephen Wroe considerably downsized megalania, suggesting a maximum total length of and a weight of with average total lengths of and , decrying the earlier maximum length estimate of as exaggerations based on flawed methods. In 2009, however, Wroe joined other researchers in raising the estimate to at least and .
In 2003, Erickson and colleagues suggested that a large specimen with an estimated longevity of 16 years, QM F4452/3, would have belonged to an individual up to in snout-vent length based on femoral length. In a book published in 2004, Ralph Molnar determined a range of potential sizes for megalania, made by scaling up from dorsal vertebrae, after he determined a relationship between dorsal vertebrae width and snout-vent length. The average snout-vent length of known specimens were around , and such individuals would have weighed up to . The largest vertebra (QM 2942) would have belonged to an individual with a snout-vent length of and weighed up to .
In 2012, Conrad and colleagues estimated the size of megalania based on comparing two known specimens with all known species of Varanus. The authors of the study suggested that the braincase (BMNH 39965) likely belonged to an individual around in precaudal length, while the largest specimen available to them (AMNH FR 6304) suggested that this individual would have reached up to in precaudal length. They also noted that it is possible for megalania to reach over in precaudal length, given that the largest specimens of modern varanid species are larger than average individuals by 151 to 225 percent.
Palaeobiology
Megalania is the largest terrestrial lizard known to have existed. Judging from its size, it would have fed mostly upon medium- to large-sized animals, including any of the giant marsupials such as Diprotodon, along with other reptiles and small mammals, as well as birds and their eggs and chicks. It had heavily built limbs and body, a large skull complete with a small crest between the eyes, and a jaw full of serrated, blade-like teeth.
Some scientists regard with skepticism the contention that megalania was the only, or even principal, predator of the Australian Pleistocene megafauna. They note that the marsupial lion (Thylacoleo carnifex) has been implicated with the butchery of very large Pleistocene mammals, while megalania has not. In addition, they note that megalania fossils are extremely uncommon, in contrast to T. carnifex's wide distribution across Australian Pleistocene deposits. Quinkana, a genus of terrestrial crocodiles that grew up to 6 m and was present until around 40,000 years ago, has also been marked as another apex predator of Australian megafauna.
Komodo dragons, megalania's closest relatives, are known to have evolved in Australia before spreading to their current range in Indonesia, as fossil evidence from Queensland implies. Reintroducing Komodo dragons to the continent, as an ecological proxy for megalania, has been suggested as a way to reconstruct the ecosystems that existed before the arrival of humans in Australia.
A study published in 2009 using Wroe's earlier size estimates and an analysis of 18 closely related lizard species estimated a sprinting speed of . This speed is comparable to that of the extant freshwater crocodile (Crocodylus johnstoni).
The scales of megalania would possibly be similar to those of their extant relatives, possessing a honeycomb microstructure and both durable and resilient to water evaporation.
Venom
Along with other varanid lizards, such as the Komodo dragon and the Nile monitor, megalania belongs to the proposed clade Toxicofera, which contains all known reptile clades possessing toxin-secreting oral glands, as well as their close venomous and nonvenomous relatives, including Iguania, Anguimorpha, and snakes. Closely related varanids use a potent venom, found in glands inside the jaw, which has been shown to be a haemotoxin. The venom acts as an anticoagulant and would greatly increase the bleeding from the prey's wounds, rapidly decreasing its blood pressure and leading to systemic shock. As a member of Anguimorpha, megalania may have been venomous and, if so, would be the largest venomous vertebrate known.
Extinction
The youngest remains of the species date to the Late Pleistocene, the youngest possibly referable to the species being a large osteoderm from the Mount Etna Caves National Park in central-eastern Queensland, dating to approximately 50,000 years ago. A study that examined the morphology of nine closely related extant varanid lizards, allometrically scaled and compared to V. priscus, found that the musculature of the limbs, posture, muscular mass, and possible muscular composition of the animal would most likely have been inefficient when attempting to outrun the early human settlers who colonised Australia during that time. Considering many other species of Australian megafauna went extinct around the same time, whether through human predation or through being outcompeted by humans, the same can be assumed for megalania.
Confrontations between megalania and early Aboriginal Australians may have inspired tales of fearsome creatures such as the whowie.
Black-eyed pea

The black-eyed pea or black-eyed bean is a legume grown around the world for its medium-sized, edible bean. It is a subspecies of the cowpea, an Old World plant domesticated in Africa, and is sometimes simply called a cowpea.
The common commercial variety is called the California Blackeye; it is pale-colored with a prominent black spot. The American South has countless varieties, many of them heirloom, that vary in size from the small lady peas to very large ones. The color of the eye may be black, brown, red, pink, or green. All the peas are green when freshly shelled and brown or buff when dried. A popular variation of the black-eyed pea is the purple hull pea or mud-in-your-eye pea; it is usually green with a prominent purple or pink spot. The currently accepted botanical name for the black-eyed pea is Vigna unguiculata subsp. unguiculata, although previously it was classified in the genus Phaseolus. Vigna unguiculata subsp. dekindtiana is the wild relative and Vigna unguiculata subsp. sesquipedalis is the related asparagus bean. Other beans of somewhat similar appearance, such as the frijol ojo de cabra (goat's-eye bean) of northern Mexico, are sometimes incorrectly called black-eyed peas, and vice versa.
History
The black-eyed pea originates from West Africa and has been cultivated in China and India since prehistoric times. It has been grown in Virginia since the 17th century by enslaved Africans, who were brought to America along with plants indigenous to their homelands. The crop would also eventually prove popular in Texas. The planting of crops of black-eyed peas was promoted by George Washington Carver because, as a legume, it adds nitrogen to the soil and has high nutritional value. Throughout the South, the black-eyed pea is still a widely used ingredient today in soul food and cuisines of the Southern United States. The black-eyed pea is cultivated throughout the world.
Cultivation
In non-tropical climates, this heat-loving crop should be sown after all danger of frost has passed and the soil is warm. Seeds sown too early will rot before germination. Black-eyed peas are extremely drought tolerant, so excessive watering should be avoided.
The crop is relatively free of pests and disease. Root-knot nematodes can be a problem, especially if crops are not rotated. Because the plant is a nitrogen-fixing legume, fertilizer applied from three weeks after germination can omit nitrogen.
The blossom produces nectar plentifully, and large areas can be a source of honey. Because the bloom attracts a variety of pollinators, care must be taken in the application of insecticides to avoid label violations.
After planting, the peas should start to grow within 2–5 days.
Lucky New Year food
In the Southern United States, eating black-eyed peas or Hoppin' John (a traditional soul food) on New Year's Day is thought to bring prosperity in the new year. The peas are typically cooked with a pork product for flavoring (such as bacon, fatback, ham bones, or hog jowls) and diced onion, and served with a hot chili sauce or a pepper-flavored vinegar. The traditional meal also includes cabbage, collard, turnip, or mustard greens, and ham. The peas, since they swell when cooked, symbolize prosperity; the greens symbolize money; the pork, because pigs root forward when foraging, represents positive motion. Cornbread, which represents gold, also often accompanies this meal.
Several legends exist as to the origin of this custom. Two popular explanations for the South's association with peas and good luck date back to the American Civil War. The first is associated with General William T. Sherman's march of the Union Army to the sea, during which they pillaged the Confederates' food supplies. Peas and salted pork were said to have been left untouched because of the belief that they were animal food unfit for human consumption. Southerners considered themselves lucky to be left with some supplies to help them survive the winter, and black-eyed peas evolved into a representation of good luck. One challenge to this legend is that General Sherman brought backup supplies with him, including three days of animal feed, and would have been unlikely to leave even animal feed untouched. In addition, the dates of the first average frost for Atlanta and Savannah are November 13 and November 28, respectively. As Sherman's march lasted from November 15 to December 21, 1864, it is improbable, although possible, that the Union Army would have come across standing fields of black-eyed peas as relayed in most versions of the legend. In another Southern tradition, black-eyed peas were a symbol of emancipation for African-Americans who had previously been enslaved and who, after the Civil War, were officially freed on New Year's Day.
Other Southern American traditions point to Jews of Ashkenazi and Sephardic ancestry in Southern cities and plantations eating the peas.
Culinary uses worldwide
Africa and the Middle East
In Egypt, black-eyed peas are called lobia. When cooked with onions, garlic, meat and tomato juice, and served with Egyptian rice with some pastina called shaerya mixed in, they make the most famous rice dish in Egypt.
In Jordan, Lebanon, and Syria, lobya or green black-eyed beans are cooked with onion, garlic, tomatoes, peeled and chopped, olive oil, salt and black pepper.
In Nigeria and Ghana within West Africa and the Caribbean, a traditional dish called akara or koose comprises mashed black-eyed peas with added salt, onions and/or peppers. The mixture is then fried. In Nigeria a pudding called 'moin-moin' is made from ground and mixed peas with seasoning as well as some plant proteins before it is steamed. This is served with various carbohydrate-rich foods such as pap, rice or garri.
Asia and the Pacific
In Indonesia, black-eyed peas are known locally as kacang tolo. They are commonly used in curry dishes such as sambal goreng, a hot and spicy red curry dish, sayur brongkos, or sayur lodeh.
The bean is commonly used across India. In the Hindi dialects of North India, black-eyed peas are called lobia (लोबीया) or rongi (रोंगी) and are cooked like daal and served with boiled rice.
In Nepali-speaking areas of India and Nepal, they are called lobia (Nepali- लोबीया) or bodi (बोडी).
In Punjabi-speaking areas of both India and Pakistan, they are called rongi/lobia (Punjabi-ਰੌਁਗੀ/ਲੋਬੀਆ).
In Gujarat, they are called suki choli/choli (Gujarati- સુકી ચોળી/ચોળી).
In Bengali speaking parts of India (West Bengal) and Bangladesh, they are known as borboti kolai (Bengali-বরবটি কলাই).
In Odisha, they are called jhudanga/jhunga (Odia- ଝୁଡ଼ଙ୍ଗ/ଝୁଙ୍ଗ).
In Assam, they are called lesera maah (Assamese- লেচৰা মাহ).
In Goa and other Konkani speaking areas of India, lobia/black eyed beans are called bogdo/chawli (Konkani- बोग्डो/चवळी).
In Maharashtra, they are called chawli (Marathi- चवळी) and made into a curry called chawli amti or chawli usal.
In Karnataka, they are called alsande kalu (Kannada- ಅಲಸಂದೆ ಕಾಳು) and used in the preparation of huli, a popular type of curry.
In the coastal areas of southern Karnataka like South Kanara district, they are called lathanay bitt in the Tulu language (Tulu- ಲತ್ತಣೆ ಬಿತ್ತ) and are cooked in spiced coconut paste to make a saucy curry or a dry coconut curry.
In Tamil Nadu, they are called karamani/thattapayaru (Tamil- காரமணி/தட்டப்பயிறு) and used in various recipes, including being boiled and made into a salad-like sundal (often during the Ganesh Chaturthi and Navratri festivals).
In Andhra Pradesh and Telangana, they are known by the name bobbarlu/alasandalu kura (Telugu- బొబ్బర్లు/అలసందలు కూర), and are used in a variety of recipes, most popularly for vada.
In Kerala, they are called vellapayar (Malayalam-വെളളപയർ) and are part of the Sadhya dish Olan.
In Vietnam, black-eyed peas are used in a sweet dessert called chè đậu trắng (black-eyed peas and sticky rice with coconut milk).
Europe
In Cyprus (φρέσκο λουβί (fresko luvi)), Greece (μαυρομάτικα) and Turkey (börülce salatası), blanched black-eyed peas are eaten as salad with a dressing of olive oil, salt, lemon juice, onions and garlic.
In Portugal, black-eyed peas are served with boiled cod and potatoes, with tuna, and in salads.
The Americas
North America
"Hoppin' John", made of black-eyed peas or field peas, rice, and pork, is a traditional dish in parts of the Southern United States.
Texas caviar, another traditional dish in the American South, is made from black-eyed peas marinated in vinaigrette-style dressing and chopped garlic.
South America
In Brazil's northeastern state of Bahia, especially in the city of Salvador, black-eyed peas (named "feijão fradinho" there) are used in a traditional street food of West African cuisine origin called acarajé. The beans are peeled and mashed, and the resulting paste is made into balls and deep-fried in dendê. Acarajé is typically served split in half and stuffed with vatapá, caruru, diced green and red tomatoes, fried sun-dried shrimp and homemade hot sauce.
In the northern part of Colombia, they are used to prepare a fritter called buñuelo. The beans are immersed in water for a few hours to loosen their skins and soften them. The skins are then removed either by hand or with the help of a manual grinder. Once the skins are removed, the bean is ground or blended, and eggs are added, which produces a soft mix. The mix is fried in hot oil. It makes a nutritious breakfast meal.
In Guyana, South America, and Trinidad and Tobago, it is one of the most popular types of beans cooked with rice, the main one being red kidney beans, also referred to as red beans. It is also cooked as a snack or appetizer on its own. On New Year's Eve (referred to as Old Year's Night in Guyana and Suriname), families cook a traditional dish called cook-up rice. The dish comprises rice, black-eyed peas, and other peas and a variety of meats cooked in coconut milk and seasonings. According to tradition, cook-up rice should be the first thing consumed in the New Year for good luck. Cook-up rice is also made as an everyday dish.
Nutrition
One 100 g serving of cooked black-eyed peas contains of food energy and is an excellent source of folate and a good source of thiamine, iron, magnesium, manganese, phosphorus, and zinc. The legume is also a good source of dietary fiber (6.5 g per 100 g serving) and contains a moderate amount of numerous other vitamins and minerals (table).
| Biology and health sciences | Pulses | Plants |
437619 | https://en.wikipedia.org/wiki/Shear%20stress | Shear stress | Shear stress (often denoted by τ, the Greek letter tau) is the component of stress coplanar with a material cross section. It arises from the shear force, the component of the force vector parallel to the material cross section. Normal stress, on the other hand, arises from the force vector component perpendicular to the material cross section on which it acts.
General shear stress
The formula to calculate average shear stress, or force per unit area, is
τ = F / A,
where F is the force applied and A is the cross-sectional area.
The area involved corresponds to the material face parallel to the applied force vector, i.e., with surface normal vector perpendicular to the force.
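As a quick numerical sketch of the relation τ = F/A (the function name and example numbers here are illustrative, not from the source):

```python
def average_shear_stress(force_n: float, area_m2: float) -> float:
    """Average shear stress tau = F / A, in pascals (SI units assumed)."""
    if area_m2 <= 0:
        raise ValueError("cross-sectional area must be positive")
    return force_n / area_m2

# Example: a 10 kN shear force acting over a 50 mm x 10 mm face.
tau = average_shear_stress(10_000.0, 0.05 * 0.01)  # about 20 MPa
```

Note that, per the definition above, the area used is the face parallel to the applied force, not the face the force presses on.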
Other forms
Wall shear stress
Wall shear stress expresses the retarding force (per unit area) from a wall in the layers of a fluid flowing next to the wall. It is defined as
τ_w = μ (∂u/∂y)|_(y=0),
where μ is the dynamic viscosity, u is the flow velocity, and y is the distance from the wall.
It is used, for example, in the description of arterial blood flow, where there is evidence that it affects the atherogenic process.
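Because the no-slip condition fixes u = 0 at the wall, the wall velocity gradient can be approximated from a single near-wall velocity sample. A minimal sketch, with illustrative names and values:

```python
def wall_shear_stress(mu: float, u_near_wall: float, dy: float) -> float:
    """Approximate tau_w = mu * du/dy at the wall with a one-sided
    finite difference, using the no-slip condition (u = 0 at y = 0)."""
    return mu * u_near_wall / dy

# Water (mu ~ 1.0e-3 Pa*s), u = 0.02 m/s measured 0.1 mm from the wall:
tau_w = wall_shear_stress(1.0e-3, 0.02, 1.0e-4)  # 0.2 Pa
```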
Pure
Pure shear stress is related to pure shear strain, denoted γ, by the equation
τ = γG,
where G is the shear modulus of the isotropic material, given by
G = E / (2(1 + ν)).
Here, E is Young's modulus and ν is Poisson's ratio.
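A short sketch of these two relations (the steel values below are typical textbook figures, used here only as an example):

```python
def shear_modulus(youngs_modulus: float, poissons_ratio: float) -> float:
    """G = E / (2 * (1 + nu)) for an isotropic material."""
    return youngs_modulus / (2.0 * (1.0 + poissons_ratio))

def pure_shear_stress(shear_mod: float, shear_strain: float) -> float:
    """tau = G * gamma for pure shear."""
    return shear_mod * shear_strain

# Steel: E ~ 200 GPa, nu ~ 0.3  ->  G ~ 76.9 GPa
G = shear_modulus(200e9, 0.3)
tau = pure_shear_stress(G, 0.001)  # stress at 0.1% shear strain
```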
Beam shear
Beam shear is defined as the internal shear stress of a beam caused by the shear force applied to the beam:
τ = VQ / (It),
where V is the total shear force at the location in question, Q is the first moment of area of the portion of the cross section above (or below) the point of interest, I is the second moment of area of the entire cross section, and t is the thickness of the material perpendicular to the shear.
The beam shear formula is also known as Zhuravskii shear stress formula after Dmitrii Ivanovich Zhuravskii, who derived it in 1855.
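For a rectangular cross section, the formula reduces to the familiar maximum of 1.5 V/A at the neutral axis; a sketch under that assumption (names and dimensions are illustrative):

```python
def beam_shear_stress(V: float, Q: float, I: float, t: float) -> float:
    """Zhuravskii formula: tau = V * Q / (I * t)."""
    return V * Q / (I * t)

# Rectangular b x h section, evaluated at the neutral axis:
#   Q = b*h^2/8, I = b*h^3/12, t = b  ->  tau_max = 1.5 * V / (b*h)
b, h, V = 0.05, 0.1, 10_000.0  # metres and newtons
tau_max = beam_shear_stress(V, b * h**2 / 8, b * h**3 / 12, b)
```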
Semi-monocoque shear
Shear stresses within a semi-monocoque structure may be calculated by idealizing the cross-section of the structure into a set of stringers (carrying only axial loads) and webs (carrying only shear flows). Dividing the shear flow by the thickness of a given portion of the semi-monocoque structure yields the shear stress. Thus, the maximum shear stress will occur either in the web of maximum shear flow or minimum thickness.
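The critical web is therefore the one maximising q/t over the idealized section; a minimal sketch (the shear flows and thicknesses below are made-up illustrative values, not from the source):

```python
def web_shear_stress(shear_flow: float, thickness: float) -> float:
    """tau = q / t: shear flow (N/m) divided by web thickness (m)."""
    return shear_flow / thickness

# (shear flow q, thickness t) for each idealized web of the section:
webs = [(4000.0, 0.002), (6000.0, 0.0015), (3000.0, 0.001)]
tau_max = max(web_shear_stress(q, t) for q, t in webs)  # 4 MPa here
```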
Constructions in soil can also fail due to shear; e.g., the weight of an earth-filled dam or dike may cause the subsoil to collapse, like a small landslide.
Impact shear
The maximum shear stress created in a solid round bar subject to impact is given by the equation
τ = 2 √(UG/V),
where U is the change in kinetic energy, G is the shear modulus, and V is the volume of the rod. Furthermore,
U = U_rotating + U_applied,
where U_rotating = ½Iω² and U_applied = Tθ_displaced, with I the mass moment of inertia and ω the angular speed.
Shear stress in fluids
All real fluids (liquids and gases included) moving along a solid boundary will incur a shear stress at that boundary. The no-slip condition dictates that the speed of the fluid at the boundary (relative to the boundary) is zero, while at some height from the boundary the flow speed must equal that of the bulk fluid. The region between these two points is called the boundary layer. For all Newtonian fluids in laminar flow, the shear stress is proportional to the strain rate in the fluid, with the viscosity as the constant of proportionality. For non-Newtonian fluids, the viscosity is not constant. The shear stress is imparted onto the boundary as a result of this loss of velocity.
For a Newtonian fluid, the shear stress at a surface element parallel to a flat plate at the point y is given by
τ(y) = μ ∂u/∂y,
where μ is the dynamic viscosity of the flow, u is the flow velocity along the boundary, and y is the height above the boundary.
Specifically, the wall shear stress is defined as
τ_w = τ(y = 0) = μ (∂u/∂y)|_(y=0).
Newton's constitutive law, for any general geometry (including the flat plate mentioned above), states that the shear tensor (a second-order tensor) is proportional to the flow velocity gradient (the velocity is a vector, so its gradient is a second-order tensor):
τ(u) = μ ∇u.
The constant of proportionality μ is named the dynamic viscosity. For an isotropic Newtonian flow it is a scalar, while for anisotropic Newtonian flows it can be a second-order tensor. The fundamental aspect is that for a Newtonian fluid the dynamic viscosity is independent of flow velocity (i.e., the shear stress constitutive law is linear), while for non-Newtonian flows this is not true, and one should allow for the modification
τ(u) = μ(u) ∇u.
This is no longer Newton's law but a generic tensorial identity: one can always find an expression of the viscosity as a function of the flow velocity given any expression of the shear stress as a function of the flow velocity. On the other hand, given a shear stress as a function of the flow velocity, it represents a Newtonian flow only if it can be expressed as a constant times the gradient of the flow velocity. The constant one finds in this case is the dynamic viscosity of the flow.
Example
Considering a 2D space in Cartesian coordinates (the flow velocity components are respectively ), the shear stress matrix given byrepresents a Newtonian flow; in fact it can be expressed asi.e., an anisotropic flow with the viscosity tensorwhich is nonuniform (depends on space coordinates) and transient, but is independent of the flow velocity:This flow is therefore Newtonian. On the other hand, a flow in which the viscosity wasis non-Newtonian since the viscosity depends on flow velocity. This non-Newtonian flow is isotropic (the matrix is proportional to the identity matrix), so the viscosity is simply a scalar:
Measurement with sensors
Diverging fringe shear stress sensor
This relationship can be exploited to measure the wall shear stress: if a sensor could directly measure the gradient of the velocity profile at the wall, then multiplying by the dynamic viscosity would yield the shear stress. Such a sensor was demonstrated by A. A. Naqwi and W. C. Reynolds. The interference pattern generated by sending a beam of light through two parallel slits forms a network of linearly diverging fringes that seem to originate from the plane of the two slits (see double-slit experiment). As a particle in a fluid passes through the fringes, a receiver detects the reflection of the fringe pattern. The signal can be processed, and from the fringe angle, the height and velocity of the particle can be extrapolated. The measured value of the wall velocity gradient is independent of the fluid properties and as a result does not require calibration. Recent advancements in micro-optic fabrication technologies have made it possible to use integrated diffractive optical elements to fabricate diverging fringe shear stress sensors usable both in air and liquid.
Micro-pillar shear-stress sensor
A further measurement technique is that of slender wall-mounted micro-pillars made of the flexible polymer polydimethylsiloxane, which bend in reaction to the drag forces applied in the vicinity of the wall. The sensor thereby belongs to the indirect measurement principles, relying on the relationship between near-wall velocity gradients and the local wall shear stress.
Electro-diffusional method
The electro-diffusional method measures the wall shear rate in the liquid phase from microelectrodes under limiting diffusion current conditions. A potential difference between an anode of a broad surface (usually located far from the measuring area) and the small working electrode acting as a cathode leads to a fast redox reaction. The ion disappearance occurs only on the microprobe active surface, causing the development of the diffusion boundary layer, in which the fast electro-diffusion reaction rate is controlled only by diffusion. The resolution of the convective-diffusive equation in the near-wall region of the microelectrode leads to analytical solutions relating the characteristic length of the microprobes, the diffusional properties of the electrochemical solution, and the wall shear rate.
| Physical sciences | Solid mechanics | Physics |
437762 | https://en.wikipedia.org/wiki/Fagus%20sylvatica | Fagus sylvatica | Fagus sylvatica, the European beech or common beech, is a large, graceful deciduous tree in the beech family with smooth silvery-gray bark, large leaf area, and a short trunk with low branches.
Description
Fagus sylvatica is a large tree, capable of reaching heights of up to tall and trunk diameter, though more typically tall and up to trunk diameter. A 10-year-old sapling will stand about tall. Undisturbed, the European beech has a lifespan of 300 years; one tree at the Valle Cervara site was more than 500 years old—the oldest known in the northern hemisphere. In cultivated forest stands, trees are normally harvested at 80–120 years of age; thirty years are needed to attain full maturity (compared with 40 for American beech). Like most trees, its form depends on the location: in forest areas, F. sylvatica grows to over , with branches being high up on the trunk. In open locations, it will become much shorter (typically ) and more massive.
The leaves are alternate, simple, and entire or with a slightly crenate margin, long and broad, with 6–7 veins on each side of the leaf (as opposed to 7–10 veins in F. orientalis). When crenate, there is one point at each vein tip, never any points between the veins. The buds are long and slender, long and thick, but thicker, up to , where the buds include flower buds.
The leaves of beech are often not abscissed (dropped) in the autumn and instead remain on the tree until the spring. This process is called marcescence. This particularly occurs when trees are saplings or when plants are clipped as a hedge (making beech hedges attractive screens, even in winter), but it also often continues to occur on the lower branches when the tree is mature.
Small quantities of seeds may be produced around 10 years of age, but not a heavy crop until the tree is at least 30 years old. F. sylvatica male flowers are borne in the small catkins which are a hallmark of the Fagales order (beeches, chestnuts, oaks, walnuts, hickories, birches, and hornbeams). The female flowers produce beechnuts, small triangular nuts long and wide at the base; there are two nuts in each cupule, maturing in the autumn 5–6 months after pollination. Flower and seed production is particularly abundant in years following a hot, sunny and dry summer, though rarely for two years in a row.
Distribution and habitat
The European beech is the most abundant hardwood species in Austrian, German and Swiss forests. The native range extends from the north, in Sweden, Denmark, Norway, Germany, Poland, Switzerland, Bulgaria, eastern parts of Russia, Romania, through Europe to France, southern England, Spain (on the Cantabrian, Iberian and Central mountain ranges), and east to northwest Turkey, where it exhibits an interspecific cline with the oriental beech (Fagus orientalis), which replaces it further east. In the Balkans, it shows some hybridisation with oriental beech; these hybrid trees are named Fagus × taurica Popl. [Fagus moesiaca (Domin, Maly) Czecz.]. In the southern part of its range around the Mediterranean, and Sicily, it grows only in mountain forests, at altitude.
Although often regarded as native in southern England, recent evidence suggests that F. sylvatica did not arrive in England until about 4000 BC, some 2,000 years after the English Channel formed following the ice ages; it could have been an early introduction by Stone Age humans, who used the nuts for food. The beech is classified as native in the south of England and as non-native in the north, where it is often removed from 'native' woods. Localised pollen records from the Iron Age have been recorded in the North of England by Sir Harry Godwin. Changing climatic conditions may put beech populations in southern England under increased stress, and while it may not be possible to maintain the current levels of beech in some sites, it is thought that conditions for beech in north-west England will remain favourable or even improve. It is often planted in Britain. Similarly, the nature of Norwegian beech populations is subject to debate. If native, they would represent the northern range limit of the species. However, molecular genetic analyses support the hypothesis that these populations represent intentional introductions from Denmark before and during the Viking Age. The beech in Vestfold and at Seim north of Bergen in Norway is now spreading naturally and regarded as native.
Though not demanding of its soil type, the European beech has several significant requirements: a humid atmosphere (precipitation well distributed throughout the year and frequent fogs) and well-drained soil (it is intolerant of excessive stagnant water). It prefers moderately fertile ground, calcified or lightly acidic, and is therefore found more often on the side of a hill than at the bottom of a clayey basin. It tolerates rigorous winter cold but is sensitive to spring frost. In Norway's oceanic climate, planted trees grow well as far north as Bodø Municipality, and produce seedlings and can spread naturally in Trondheim. In Sweden, beech trees do not grow as far north as in Norway.
A beech forest is very dark and few species of plant are able to survive there, where the sun barely reaches the ground. Young beeches prefer some shade and may grow poorly in full sunlight. In a clear-cut forest a European beech will germinate and then die of excessive dryness. Under oaks with sparse leaf cover it will quickly surpass them in height and, due to the beech's dense foliage, the oaks will die from lack of sunlight.
Ecology
The root system is shallow, even superficial, with large roots spreading out in all directions. European beech forms ectomycorrhizas with a range of fungi including many Russula species, as well as Laccaria amethystina, and with the species Ramaria flavosaponaria. Tomentella Pat. species and Cenococcum geophilum have been found in Danish and Spanish beech forests. These fungi are important in enhancing uptake of water and nutrients from the soil.
In the woodlands of southern Britain, beech is dominant over oak and elm south of a line from about north Suffolk across to Cardigan; oak is the dominant forest tree north of this line. One of the most beautiful European beech forests, the Sonian Forest (Forêt de Soignes/Zoniënwoud), is found southeast of Brussels, Belgium. Beech is a dominant tree species in France and constitutes about 10% of French forests. The largest virgin forests made of beech trees are Uholka-Shyrokyi Luh () in Ukraine and Izvoarele Nerei ( in one forest body) in Semenic-Cheile Carașului National Park, Romania. These habitats are the home of Europe's largest predators (the brown bear, the grey wolf and the lynx). Many trees are older than 350 years in Izvoarele Nerei and even 500 years in Uholka-Shyrokyi Luh.
Spring leaf budding by the European beech is triggered by a combination of day length and temperature. Bud break each year is from the middle of April to the beginning of May, often with remarkable precision (within a few days). It is more precise in the north of its range than the south, and at than at sea level.
The European beech invests significantly in summer and autumn for the following spring. Conditions in summer, particularly good rainfall, determine the number of leaves included in the buds. In autumn, the tree builds the reserves that will sustain it into spring. Given good conditions, a bud can produce a shoot with ten or more leaves. The terminal bud emits a hormonal substance in the spring that halts the development of additional buds. This tendency, though very strong at the beginning of their existence, becomes weaker in older trees.
It is only after the budding that root growth of the year begins. The first roots to appear are very thin (with a diameter of less than 0.5 mm). Later, after a wave of above ground growth, thicker roots grow in a steady fashion.
Diseases and pathogens
Fagus sylvatica and other beeches are prone to false heartwood ('red heart'), a condition in which drought, nutrient-deficient soil, branch breakage, pathogen infestation or another stressor induces the formation of protection wood. False heartwood often manifests in the areas of the trunk associated with symplastless branches. As the branch symplast dies, the trunk wood becomes depleted of nitrogen-containing molecules essential for life; this increases the risk of catastrophic trunk failure.
As the European beech exhibits deterministic leaf and shoot development and has a larger leaf area than other European hardwood trees, it is relatively more sensitive to drought and may respond to a dry summer with pre-senescent leafdrop.
Biscogniauxia nummularia (beech tarcrust) is an ascomycete primary pathogen of beech trees, causing strip-canker and wood rot. It can be found at all times of year and is not edible.
Cultivation
European beech is a very popular ornamental tree in parks and large gardens in temperate regions of the world. In North America, it is preferred for this purpose over the native F. grandifolia, which, despite its tolerance of warmer climates, is slower growing, taking an average of 10 years longer to attain maturity. The town of Brookline, Massachusetts has one of the largest, if not the largest, groves of European beech trees in the United States. The public park, called 'The Longwood Mall', was planted sometime before 1850, qualifying it as the oldest stand of European beeches in the United States.
It is frequently kept clipped to make attractive hedges.
Since the early 19th century there have been numerous cultivars of European beech made by horticultural selection, often repeatedly; they include:
copper beech or purple beech (Fagus sylvatica purpurea) – a mutation of the European beech which was first noted in 1690 in the "Possenwald" forest near the town of Sondershausen in Thuringia, Germany. It is assumed that about 99% of all copper beeches in the world are descendants of this copper beech. Its leaves are purple, in many selections turning deep spinach green by mid-summer. In the United States Charles Sprague Sargent noted the earliest appearance in a nurseryman's catalogue in 1820, but in 1859 "the finest copper beech in America... more than fifty feet high" was noted in the grounds of Thomas Ash, Esq., Throggs Neck, New York; it must have been more than forty years old at the time.
fern-leaf beech (Fagus sylvatica Heterophylla Group) – leaves deeply serrated to thread-like
dwarf beech (Fagus sylvatica Tortuosa Group) – distinctive twisted trunk and branches
weeping beech (Fagus sylvatica Pendula Group) – branches pendulous
Dawyck beech (Fagus sylvatica 'Dawyck') – fastigiate (columnar) growth – occurs in green, gold and purple forms; named after Dawyck Botanic Garden in the Scottish Borders
golden beech (Fagus sylvatica 'Zlatia') – leaves golden in spring
Cultivars
The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:-
F. sylvatica
'Dawyck'
'Dawyck Gold'
'Dawyck Purple'
'Pendula' (weeping beech)
'Riversii'
F. sylvatica var. heterophylla 'Aspleniifolia'
Uses
The nuts are eaten by humans and animals. Slightly toxic to humans if eaten in large quantities due to the tannins and alkaloids they contain, the nuts were nonetheless pressed to obtain an oil in 19th-century England that was used for cooking and in lamps. They were also ground to make flour, which could be eaten after the tannins were leached out by soaking. Additionally, Primary Product AM 01, a smoke flavouring, is produced from Fagus sylvatica.
Timber
The wood of the European beech is used in the manufacture of numerous objects and implements. Its fine, short grain makes it an easy wood to work with, easy to soak, dye, varnish and glue. Steaming makes the wood even easier to machine. It has an excellent finish, is resistant to compression and splitting, and is stiff when flexed. Milling is sometimes difficult due to cracking. The density of the wood is per cubic metre. It is particularly well suited for minor carpentry, particularly furniture. From chairs to parquetry (flooring) and staircases, the European beech can do almost anything other than heavy structural support, so long as it is not left outdoors. Its hardness makes it ideal for making wooden mallets and workbench tops. The wood rots easily if it is not protected by a tar based on a distillate of its own bark (as used in railway sleepers). It is better for paper pulp than many other broadleaved trees, though it is only sometimes used for this; the high cellulose content can also be spun into modal, which is used as a textile akin to cotton. The code for its use in Europe is (from FAgus SYlvatica). Common beech is also considered one of the best firewoods for fireplaces.
Gallery
| Biology and health sciences | Fagales | Plants |
437771 | https://en.wikipedia.org/wiki/Cryptomeria | Cryptomeria | Cryptomeria (literally "hidden parts") is a monotypic genus of conifer in the cypress family Cupressaceae. It includes only one species, Cryptomeria japonica (syn. Cupressus japonica L.f.). It is considered to be endemic to Japan, where it is known as . The tree is called Japanese cedar or Japanese redwood in English. It has been extensively introduced and cultivated for wood production on the Azores.
Description
Cryptomeria is a very large evergreen tree, reaching up to tall and trunk diameter, with red-brown bark which peels in vertical strips. The leaves are arranged spirally, needle-like, long; and the seed cones globular, diameter with about 20–40 scales. It is superficially similar to the related giant sequoia (Sequoiadendron giganteum), from which it can be differentiated by the longer leaves (under in the giant sequoia) and smaller cones ( in the giant sequoia), and the harder bark on the trunk (thick, soft and spongy in giant sequoia).
Endemism
Sugi has been cultivated in China for so long that it is frequently thought to be native there. Forms selected for ornament and timber production long ago in China have been described as a distinct variety Cryptomeria japonica var. sinensis (or even a distinct species, Cryptomeria fortunei), but they do not differ from the full range of variation found in the wild in Japan, and there is no definite evidence the species ever occurred wild in China. Genetic analysis of the most famous Chinese population, on Tianmu Mountain, containing trees estimated to be nearly 1000 years old, supports the hypothesis that the population originates from an introduction.
Outside of its native range, Cryptomeria was also introduced to the Azores in the mid 19th century for wood production. It is currently the most cultivated species in the archipelago, occupying over 12,698 hectares, 60% of the production forest and about 1/5 of the region's total land area.
Biology
Cryptomeria grows in forests on deep, well-drained soils subject to warm, moist conditions, and it is fast-growing under these conditions. It is intolerant of poor soils and cold, drier climates.
It is used as a food plant by the larvae of some moths of the genus Endoclita including E. auratus, E. punctimargo and E. undulifer. Sugi (and hinoki) pollen is a major cause of hay fever in Japan.
Fossil record
The earliest fossil records of Cryptomeria are descriptions based on vegetative organs of †Cryptomeria kamtschatica from the Late Eocene of Kamchatka, Russia, and of †Cryptomeria protojaponica and †Cryptomeria sichotensis from the Oligocene of Primorye, Russia. Several fossil leafy shoots of †Cryptomeria yunnanensis have been described from Rupelian-stage strata of the Lühe Basin in Yunnan, China.
From the Neogene, Cryptomeria is well represented as seed cones, leafy shoots and wood in the fossil records of Europe and Japan. †Cryptomeria rhenana was described from the early Late Miocene to the Late Miocene of Rhein in Morsbach, Germany, from the Early and Middle Pliocene of Northern Italy, to the Middle Pliocene of Dunarobba, Italy and to the Early Pleistocene of Umbria, Italy. †Cryptomeria anglica was described from the Late Miocene of La Cerdana, Spain, to the Late Middle Miocene of Brjánslækur, Iceland and from the Late Miocene to the early Pliocene Brassington Formation of Derbyshire, England. †Cryptomeria miyataensis was described from the Late Miocene of Akita, Japan. Cryptomeria japonica was described from the Late Miocene of Georgia and from the Pliocene of Duab, Abkhazia. It has also been described from the Pliocene of Honshu, Japan, Late Pliocene of Osaka, Japan and from the Pleistocene of Kyushu, Japan.
Cultivation
Timber
Cryptomeria japonica timber is extremely fragrant, weather- and insect-resistant, soft, and of low density. The timber is used for making staves, tubs, casks, furniture and other indoor applications. Easy to saw and season, it is favoured for light construction, boxes, veneers and plywood. Wood that has been buried turns dark green and is much valued. Resin from the tree contains cryptopimaric and phenolic acid.
The wood is pleasantly scented, reddish-pink in colour, lightweight but strong, waterproof and resistant to decay. It is favoured in Japan for all types of construction work as well as interior panelling, etc. In Darjeeling district and Sikkim in India, where it is one of the most widely growing trees, C. japonica is called Dhuppi and is favoured for its light wood, extensively used in house building.
In Japan, the coppicing method of daisugi (台杉) is sometimes used to harvest logs.
Mechanical properties
In dry air conditions, the initial density of Japanese cedar timber has been determined to be about 300–420 kg/m³.
It displays a Young's modulus of 8017 MPa, 753 MPa and 275 MPa in the longitudinal, radial and tangential directions relative to the wood fibres.
Ornamental
Cryptomeria japonica is extensively used in forestry plantations in Japan, China and the Azores islands, and is widely cultivated as an ornamental tree in other temperate areas, including Britain, Europe, North America and eastern Himalaya regions of Nepal and India.
The cultivar 'Elegans' is notable for retaining juvenile foliage throughout its life, instead of developing normal adult foliage when one year old (see the picture with different shoots). It makes a small, shrubby tree tall. There are numerous dwarf cultivars that are widely used in rock gardens and for bonsai, including 'Tansu', 'Koshyi', 'Little Diamond', 'Yokohama' and 'Kilmacurragh.'
The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017):
C. japonica 'Bandai-sugi'
C. japonica 'Elegans Compacta'
C. japonica 'Elegans Viridis'
C. japonica 'Globosa Nana'
C. japonica 'Golden Promise'
C. japonica 'Sekkan-sugi'
Cryptomeria japonica 'Spiralis'
C. japonica 'Vilmoriniana'
Symbolism
Sugi is commonly planted around temples and shrines, with many hugely impressive trees planted centuries ago. Sargent (1894; The Forest Flora of Japan) recorded the instance of a daimyō (feudal lord) who was too poor to donate a stone lantern at the funeral of the shōgun Tokugawa Ieyasu (1543–1616) at Nikkō Tōshō-gū, but requested instead to be allowed to plant an avenue of sugi, so that "future visitors might be protected from the heat of the sun". The offer was accepted; the Cedar Avenue of Nikkō, which still exists, is over long, and "has not its equal in stately grandeur".
Jōmon Sugi is a large cryptomeria tree located on Yakushima, a UNESCO World Heritage Site, in Japan. It is the oldest and largest among the old-growth cryptomeria trees on the island, and is estimated to be between 2,170 and 7,200 years old.
Cryptomeria are often described and referred to in Japanese literature. For instance, cryptomeria forests and their workers, located on the mountains north of Kyoto, are featured in Yasunari Kawabata's famous book The Old Capital.
Gallery
Group 4 element
[Periodic-table sidebar: group 4 elements by period (periods 4–7); legend: primordial element, synthetic element.]
Group 4 is the second group of transition metals in the periodic table. It contains only the four elements titanium (Ti), zirconium (Zr), hafnium (Hf), and rutherfordium (Rf). The group is also called the titanium group or titanium family after its lightest member.
As is typical for early transition metals, zirconium and hafnium have only the group oxidation state of +4 as a major one, and are quite electropositive and have a less rich coordination chemistry. Due to the effects of the lanthanide contraction, they are very similar in properties. Titanium is somewhat distinct due to its smaller size: it has a well-defined +3 state as well (although +4 is more stable).
All the group 4 elements are hard. Their inherent reactivity is completely masked due to the formation of a dense oxide layer that protects them from corrosion, as well as attack by many acids and alkalis. The first three of them occur naturally. Rutherfordium is strongly radioactive: it does not occur naturally and must be produced by artificial synthesis, but its observed and theoretically predicted properties are consistent with it being a heavier homologue of hafnium. None of them have any biological role.
History
Zircon was known as a gemstone from ancient times, but it was not known to contain a new element until the work of German chemist Martin Heinrich Klaproth in 1789. He analysed the zircon-containing mineral jargoon and found a new earth (oxide), but was unable to isolate the element from its oxide. Cornish chemist Humphry Davy also attempted to isolate this new element in 1808 through electrolysis, but failed: he gave it the name zirconium. In 1824, Swedish chemist Jöns Jakob Berzelius isolated an impure form of zirconium, obtained by heating a mixture of potassium and potassium zirconium fluoride in an iron tube.
Cornish mineralogist William Gregor first identified titanium in ilmenite sand beside a stream in Cornwall, Great Britain in the year 1791. After analyzing the sand, he determined the weakly magnetic sand to contain iron oxide and a metal oxide that he could not identify. During that same year, mineralogist Franz Joseph Muller produced the same metal oxide and could not identify it. In 1795, chemist Martin Heinrich Klaproth independently rediscovered the metal oxide in rutile from the Hungarian village Boinik. He identified the oxide containing a new element and named it for the Titans of Greek mythology. Berzelius was also the first to prepare titanium metal (albeit impurely), doing so in 1825.
The X-ray spectroscopy done by Henry Moseley in 1914 showed a direct dependency between an element's spectral lines and its effective nuclear charge. This led to the nuclear charge, or atomic number, of an element being used to ascertain its place within the periodic table. With this method, Moseley determined the number of lanthanides and showed that there was a missing element with atomic number 72. This spurred chemists to look for it. Georges Urbain asserted that he found element 72 in the rare earth elements in 1907 and published his results on celtium in 1911. Neither the spectra nor the chemical behavior he claimed matched the element found later, and therefore his claim was turned down after a long-standing controversy.
By early 1923, several physicists and chemists such as Niels Bohr and Charles Rugeley Bury suggested that element 72 should resemble zirconium and therefore was not part of the rare earth elements group. These suggestions were based on Bohr's theories of the atom, the X-ray spectroscopy of Moseley, and the chemical arguments of Friedrich Paneth. Encouraged by this, and by the reappearance in 1922 of Urbain's claims that element 72 was a rare earth element discovered in 1911, Dirk Coster and Georg von Hevesy were motivated to search for the new element in zirconium ores. Hafnium was discovered by the two in 1923 in Copenhagen, Denmark. The place where the discovery took place led to the element being named for the Latin name for "Copenhagen", Hafnia, the home town of Niels Bohr.
Hafnium was separated from zirconium through repeated recrystallization of the double ammonium or potassium fluorides by Valdemar Thal Jantzen and von Hevesy. Anton Eduard van Arkel and Jan Hendrik de Boer were the first to prepare metallic hafnium by passing hafnium tetraiodide vapor over a heated tungsten filament in 1924. The long delay between the discovery of the lightest two group 4 elements and that of hafnium was partly due to the rarity of hafnium, and partly due to the extreme similarity of zirconium and hafnium, so that all previous samples of zirconium had in reality been contaminated with hafnium without anyone knowing.
The last element of the group, rutherfordium, does not occur naturally and had to be made by synthesis. The first reported detection was by a team at the Joint Institute for Nuclear Research (JINR), which in 1964 claimed to have produced the new element by bombarding a plutonium-242 target with neon-22 ions, although this was later put into question. More conclusive evidence was obtained by researchers at the University of California, Berkeley, who synthesised element 104 in 1969 by bombarding a californium-249 target with carbon-12 ions. A controversy erupted over who had discovered the element, with each group suggesting its own name: the Dubna group named the element kurchatovium after Igor Kurchatov, while the Berkeley group named it rutherfordium after Ernest Rutherford. Eventually a joint working party of IUPAC and IUPAP, the Transfermium Working Group, decided that credit for the discovery should be shared. After various compromises were attempted, in 1997 IUPAC officially named the element rutherfordium following the American proposal.
Characteristics
Chemical
Like other groups, the members of this family show patterns in their electron configurations, especially the outermost shells, resulting in trends in chemical behavior. Most of the chemistry has been observed only for the first three members of the group; chemical properties of rutherfordium are not well-characterized, but what is known and predicted matches its position as a heavier homolog of hafnium.
Titanium, zirconium, and hafnium are reactive metals, but this is masked in the bulk form because they form a dense oxide layer that sticks to the metal and reforms even if removed. As such, the bulk metals are very resistant to chemical attack; most aqueous acids have no effect unless heated, and aqueous alkalis have no effect even when hot. Oxidizing acids such as nitric acid indeed tend to reduce reactivity, as they induce the formation of this oxide layer. The exception is hydrofluoric acid, as it forms soluble fluoro complexes of the metals. When finely divided, their reactivity shows as they become pyrophoric, directly reacting with oxygen and hydrogen, and even nitrogen in the case of titanium. All three are fairly electropositive, although less so than their predecessors in group 3. The oxides TiO2, ZrO2 and HfO2 are white solids with high melting points and unreactive against most acids.
The chemistry of group 4 elements is dominated by the group oxidation state. Zirconium and hafnium are in particular extremely similar, with the most salient differences being physical rather than chemical (melting and boiling points of compounds and their solubility in solvents). This is an effect of the lanthanide contraction: the expected increase of atomic radius from the 4d to the 5d elements is wiped out by the insertion of the 4f elements before. Titanium, being smaller, is distinct from these two: its oxide is less basic than those of zirconium and hafnium, and its aqueous chemistry is more hydrolyzed. Rutherfordium should have a still more basic oxide than zirconium and hafnium.
The chemistry of all three is dominated by the +4 oxidation state, though this is too high to be well-described as totally ionic. Low oxidation states are not well-represented for zirconium and hafnium (and should be even less well-represented for rutherfordium); the +3 oxidation state of zirconium and hafnium reduces water. For titanium, the +3 state is more accessible, though still easily oxidised, forming a violet Ti3+ aqua cation in solution. The elements have a significant coordination chemistry: zirconium and hafnium are large enough to readily support the coordination number of 8. All three metals, however, form weak sigma bonds to carbon, and because they have few d electrons, pi backbonding is not very effective either.
Physical
The trends in group 4 follow those of the other early d-block groups and reflect the addition of a filled f-shell into the core in passing from the fifth to the sixth period. All the stable members of the group are silvery refractory metals, though impurities of carbon, nitrogen, and oxygen make them brittle. They all crystallize in the hexagonal close-packed structure at room temperature, and rutherfordium is expected to do the same. At high temperatures, titanium, zirconium, and hafnium transform to a body-centered cubic structure. While they are better conductors of heat and electricity than their group 3 predecessors, they are still poor compared to most metals. This, along with the higher melting and boiling points, and enthalpies of fusion, vaporization, and atomization, reflects the extra d electron available for metallic bonding.
The table below is a summary of the key physical properties of the group 4 elements. The four question-marked values are extrapolated.
Titanium
Zirconium
Hafnium
Rutherfordium
Production
The production of the metals themselves is difficult due to their reactivity. The formation of oxides, nitrides, and carbides must be avoided to yield workable metals; this is normally achieved by the Kroll process. The oxides (MO2) are reacted with coal and chlorine to form the chlorides (MCl4). The chlorides of the metals are then reacted with magnesium, yielding magnesium chloride and the metals.
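In outline, with M standing for the group 4 metal (a simplified scheme; the exact stoichiometry and conditions vary by metal and plant):
MO2 + 2 C + 2 Cl2 → MCl4 + 2 CO
MCl4 + 2 Mg → M + 2 MgCl2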
Further purification is done by a chemical transport reaction developed by Anton Eduard van Arkel and Jan Hendrik de Boer. In a closed vessel, the metal reacts with iodine at temperatures above 500 °C forming metal(IV) iodide; at a tungsten filament of nearly 2000 °C the reverse reaction happens and the iodine and metal are set free. The metal forms a solid coating on the tungsten filament and the iodine can react with additional metal resulting in a steady turnover.
M + 2 I2 (low temp.) → MI4
MI4 (high temp.) → M + 2 I2
Occurrence
The abundance of the group 4 metals decreases with increasing atomic mass. Titanium is the seventh most abundant metal in Earth's crust, with an abundance of 6320 ppm, while zirconium has an abundance of 162 ppm and hafnium only 3 ppm.
All three stable elements occur in heavy mineral sands ore deposits: placer deposits, formed most commonly in beach environments, where the high specific gravity of mineral grains eroded from mafic and ultramafic rock concentrates them. The titanium minerals are mostly anatase and rutile, and zirconium occurs in the mineral zircon. Because of their chemical similarity, up to 5% of the zirconium in zircon is replaced by hafnium. The largest producers of the group 4 elements are Australia, South Africa and Canada.
Applications
Titanium metal and its alloys have a wide range of applications, where the corrosion resistance, the heat stability and the low density (light weight) are of benefit. The foremost use of corrosion-resistant hafnium and zirconium has been in nuclear reactors. Zirconium has a very low and hafnium has a high thermal neutron-capture cross-section. Therefore, zirconium (mostly as zircaloy) is used as cladding of fuel rods in nuclear reactors, while hafnium is used in control rods for nuclear reactors, because each hafnium atom can absorb multiple neutrons.
Smaller amounts of hafnium and zirconium are used in super alloys to improve the properties of those alloys.
Biological occurrences
The group 4 elements are hard refractory metals with low aqueous solubility and low availability to the biosphere. Titanium and zirconium are relatively abundant, whereas hafnium is rare in the environment, and rutherfordium non-existent.
Titanium has no known role in any organism's biology. However, many studies suggest that titanium could be biologically active. Most titanium on Earth is stored within insoluble minerals, so it is unlikely to be a part of any biological system in spite of being potentially biologically active.
Zirconium plays no known role in any biological system, though it is commonly present in them. Certain antiperspirant products use aluminium zirconium tetrachlorohydrex gly to block sweat pores in the skin.
Hafnium plays no known role in any biological system, and has low toxicity.
Rutherfordium is synthetic, expensive, and radioactive: the most stable isotopes have half-lives under an hour. Few chemical properties and no biological functions are known.
Precautions
Titanium is non-toxic even in large doses and does not play any natural role inside the human body. An estimated quantity of 0.8 milligrams of titanium is ingested by humans each day, but most passes through without being absorbed in the tissues. It does, however, sometimes bio-accumulate in tissues that contain silica. One study indicates a possible connection between titanium and yellow nail syndrome.
Zirconium powder can cause irritation, but only contact with the eyes requires medical attention. OSHA recommendations for zirconium are 5 mg/m3 time weighted average limit and a 10 mg/m3 short-term exposure limit.
Only limited data exists on the toxicology of hafnium. Care needs to be taken when machining hafnium because it is pyrophoric—fine particles can spontaneously combust when exposed to air. Compounds that contain this metal are rarely encountered by most people. The pure metal is not considered toxic, but hafnium compounds should be handled as if they were toxic because the ionic forms of metals are normally at greatest risk for toxicity, and limited animal testing has been done for hafnium compounds.
Atomic spectroscopy
In physics, atomic spectroscopy is the study of the electromagnetic radiation absorbed and emitted by atoms. Since unique elements have unique emission spectra, atomic spectroscopy is applied for the determination of elemental compositions. It can be divided by atomization source or by the type of spectroscopy used. In the latter case, the main division is between optical and mass spectrometry. Mass spectrometry generally gives significantly better analytical performance, but is also significantly more complex. This complexity translates into higher purchase costs, higher operational costs, more operator training, and a greater number of components that can potentially fail. Because optical spectroscopy is often less expensive and has performance adequate for many tasks, it is far more common. Atomic absorption spectrometers are among the most commonly sold and used analytical devices.
Atomic spectroscopy
Electrons exist in energy levels (i.e. atomic orbitals) within an atom. Atomic orbitals are quantized, meaning their energies take discrete values rather than a continuous range (see: atomic orbitals). Electrons may move between orbitals, but in doing so they must absorb or emit energy equal to the difference between the two quantized energy levels. In optical spectroscopy, the energy absorbed to move an electron to a higher energy level (higher orbital), or emitted as the electron moves to a lower energy level, takes the form of photons (light particles). Because each element has a unique number of electrons, an atom absorbs and releases energy in a pattern unique to its elemental identity (e.g. Ca, Na, etc.) and thus absorbs and emits photons in a correspondingly unique pattern. The types of atoms present in a sample, and the number of atoms present, can be deduced by measuring these changes in light wavelength and intensity.
Atomic spectroscopy is further divided into atomic absorption spectroscopy and atomic emission spectroscopy. In atomic absorption spectroscopy, light of a predetermined wavelength is passed through a collection of atoms. If the wavelength of the source light has energy corresponding to the energy difference between two energy levels in the atoms, a portion of the light will be absorbed. The difference between the intensity of the light emitted from the source (e.g., lamp) and the light collected by the detector yields an absorbance value. This absorbance value can then be used to determine the concentration of a given element (or atoms) within the sample. The relationship between the concentration of atoms, the distance the light travels through the collection of atoms, and the portion of the light absorbed is given by the Beer–Lambert law. In atomic emission spectroscopy, the intensity of the emitted light is directly proportional to the concentration of atoms.
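The Beer–Lambert law referred to above is commonly written as
A = εlc
where A is the absorbance (A = log10(I0/I), with I0 the incident and I the transmitted light intensity), ε the molar absorptivity of the absorbing atoms at the chosen wavelength, l the path length of the light through the sample, and c the concentration of absorbing atoms.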
Ion and atom sources
Sources can be adapted in many ways, but the lists below give the general uses of a number of sources. Of these, flames are the most common due to their low cost and their simplicity. Although significantly less common, inductively-coupled plasmas, especially when used with mass spectrometers, are recognized for their outstanding analytical performance and their versatility.
For all atomic spectroscopy, a sample must be vaporized and atomized. For atomic mass spectrometry, a sample must also be ionized. Vaporization, atomization, and ionization are often, but not always, accomplished with a single source. Alternatively, one source may be used to vaporize a sample while another is used to atomize (and possibly ionize). An example of this is laser ablation inductively-coupled plasma atomic emission spectrometry, where a laser is used to vaporize a solid sample and an inductively-coupled plasma is used to atomize the vapor.
With the exception of flames and graphite furnaces, which are most commonly used for atomic absorption spectroscopy, most sources are used for atomic emission spectroscopy.
Liquid-sampling sources include flames and sparks (atom source), inductively-coupled plasma (atom and ion source), graphite furnace (atom source), microwave plasma (atom and ion source), and direct-current plasma (atom and ion source). Solid-sampling sources include lasers (atom and vapor source), glow discharge (atom and ion source), arc (atom and ion source), spark (atom and ion source), and graphite furnace (atom and vapor source). Gas-sampling sources include flame (atom source), inductively-coupled plasma (atom and ion source), microwave plasma (atom and ion source), direct-current plasma (atom and ion source), and glow discharge (atom and ion source).
Selection Rules
For any given atom, there are quantum numbers that specify the wavefunction of that atom. Using the hydrogen atom as an example, four quantum numbers are required to fully describe the state of the system. Quantum numbers that label the eigenvalues of operators commuting with the Hamiltonian describe conserved physical aspects of the system, and are called "good" quantum numbers for this reason. Once good quantum numbers have been found for a given atomic transition, the selection rules determine which changes in quantum numbers are allowed.
The electric dipole (E1) transition of a hydrogen atom can be described with the quantum numbers l (orbital angular momentum quantum number), ml (magnetic quantum number), ms (electron spin quantum number), and n (principal quantum number). When evaluating the effect of the electric dipole moment operator μ on the wavefunction of the system, the transition moment integral is zero except when the changes in the quantum numbers follow a specific pattern.
For example, in an E1 transition, unless Δl = ±1, Δml = 0 or ±1, and Δms = 0 (Δn may be any integer), the transition moment integral vanishes and the transition is known as a "forbidden transition". This occurs, for instance, when Δl = 2: such a transition is not allowed and is therefore much weaker than an allowed transition. These specific values for the changes in quantum numbers are known as the selection rules for the allowed transitions and are shown for common transitions in the table below:
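The E1 selection rules above amount to a simple set of checks on the changes in quantum numbers. The following sketch illustrates this (the function name and structure are illustrative, not from the source):

```python
def e1_allowed(delta_l, delta_ml, delta_ms):
    """Check the hydrogenic electric-dipole (E1) selection rules:
    delta_l must be +/-1, delta_ml must be 0 or +/-1, and delta_ms
    must be 0.  delta_n is unconstrained (any integer), so it is
    not checked here."""
    return (delta_l in (-1, 1)
            and delta_ml in (-1, 0, 1)
            and delta_ms == 0)

# 2p -> 1s (delta_l = -1): allowed
print(e1_allowed(-1, 0, 0))   # True
# 3d -> 1s (delta_l = -2): forbidden
print(e1_allowed(-2, 0, 0))   # False
```

A transition failing any one of these checks has a vanishing transition moment integral and is "forbidden" in the electric-dipole approximation, though it may still occur weakly through higher-order mechanisms.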
Acer pseudoplatanus
Acer pseudoplatanus, known as the sycamore in the British Isles and as the sycamore maple in the United States, is a species of maple native to Central Europe and Western Asia. It is a large deciduous, broad-leaved tree, tolerant of wind and coastal exposure.
Although native to an area ranging from France eastward to Ukraine, northern Turkey and the Caucasus, and southward to the mountains of Italy and northern Iberia, the sycamore establishes itself easily from seed and was introduced to the British Isles by 1500. It is now naturalised there and in other parts of Europe, North America, Australia and New Zealand, where it may become an invasive species.
The sycamore can grow to a height of about and the branches form a broad, rounded crown. The bark is grey, smooth when young and later flaking in irregular patches. The leaves grow on long leafstalks and are large and palmate, with five large radiating lobes. The flowers are greenish-yellow and hang in dangling flowerheads called panicles. They produce copious amounts of pollen and nectar that are attractive to insects. The winged seeds or samaras are borne in pairs and twirl to the ground when ripe. They germinate freely in the following spring.
In its native range, the sycamore is associated with a biodiverse range of invertebrates and fungi, but these are not always present in areas to which it has been introduced. It is sometimes planted in urban areas for its value as an ornamental. It produces a hard-wearing, creamy-white close-grained timber that is used for making musical instruments, furniture, joinery, wood flooring and kitchen utensils. It also makes good firewood. The rising sap in spring has been used to extract sugar and make alcoholic and non-alcoholic drinks, and can be processed into a syrup similar to that of the sugar maple. Bees often collect the nectar to make honey.
Taxonomy and etymology
Acer pseudoplatanus was first described by the Swedish naturalist Carl Linnaeus in his Species Plantarum in 1753. It is the type species in the maple genus Acer, which is in the soapberry family Sapindaceae. Many forms and varieties have been proposed, including natural varieties such as var. macrocarpum Spach, var. microcarpum Spach, and var. tomentosum Tausch, and forms such as f. erythrocarpum (Carrière) Pax, f. purpureum (Loudon) Rehder, and f. variegatum (Weston) Rehder. These are all now considered to be synonyms of Acer pseudoplatanus L.
The specific name pseudoplatanus refers to the superficial similarity of the leaves and bark of the sycamore to those of plane trees in the genus Platanus, the prefix pseudo- (from Ancient Greek) meaning "false". However, the two genera are in different families that are only distantly related. Acer and Platanus differ in the position in which leaves are attached to the stem (alternate in Platanus, paired or opposite in Acer) and in their fruit, which are spherical clusters in Platanus and paired samaras (winged fruit) in Acer.
The common name "sycamore" was originally applied to the fig species Ficus sycomorus, the sycamore or sycomore referred to in the Bible, that is native to Africa and Southwest Asia. Other common names for the tree include false plane-tree, great maple, Scottish maple, mount maple, mock-plane, or Celtic maple.
Description
The sycamore is a large, broad-leaved deciduous tree that reaches tall at maturity, the branches forming a broad, domed crown. The bark of young trees is smooth and grey but becomes rougher with age and breaks up into scales, exposing the pale-brown-to-pinkish inner bark.
The buds are produced in opposite pairs, ovoid (approximately oval in shape) and pointed, with the bud scales (the modified leaves that enclose and protect the bud) green, edged in dark brown and with dark brown tips. When the leaves are shed they leave horseshoe-shaped marks called leaf scars on the stem. The leaves are opposite, large, long and broad, palmate with 5 pointed lobes that are coarsely toothed or serrated. They have a leathery texture with thick veins protruding on the underside. They are dark green in colour with a paler underside. Some cultivars have purple-tinged or yellowish leaves. The leaf stalk or petiole is long, often tinged red, with no stipules or leaf-like structures at the base.
The functionally monoecious or dioecious yellow-green flowers are produced after the leaves in early summer, in May or June in the British Isles, on pendulous panicles long with about 60–100 flowers on each stalk. The fruits are paired winged seeds or samaras, the seeds in diameter, each with a wing long developed as an extension of the ovary wall. The wings are held at about right angles to each other, distinguishing them from those of A. platanoides and A. campestre, in which the wings are almost opposite, and from those of A. saccharum, in which they are almost parallel. When shed, the wing of the samara catches the wind and rotates the fruit as it falls, slowing its descent and enabling the wind to disperse it further from the parent tree. The seeds are mature in autumn about four months after pollination.
The sycamore is tetraploid (each cell having four sets of chromosomes, 2n=52), whereas A. campestre and A. platanoides are diploid (with 2 sets of chromosomes, 2n=26).
Botany
Sycamore trees produce their flowers in hanging branched clusters known as panicles that contain a variety of different flower types. Most are morphologically bisexual, with both male and female organs, but function as if they were unisexual. Some are both morphologically and functionally male, others morphologically bisexual but function as males, and still others are morphologically bisexual but function as females. All of the flower types can produce pollen, but the pollen from functionally female flowers does not germinate. All flowers produce nectar, the functionally female flowers producing it in greater volume and with a higher sugar content.
Sycamore trees are very variable across their wide range and have strategies to prevent self-pollination, which is undesirable because it limits the genetic variation of the progeny and may depress their vigour. Most inflorescences are formed of a mixture of functionally male and functionally female flowers. On any one tree, one or other of these flower types opens first and the other type opens later. Some trees may be male-starters in one year and female-starters in another. The change from one sex to the other may take place on different dates in different parts of the crown, and different trees in any one population may come into bloom over the course of several weeks, so that cross-pollination is encouraged, although self-pollination may not be completely prevented.
The sycamore may hybridise with other species in Acer section Acer, including with A. heldreichii where their natural ranges overlap and with A. velutinum. Intersectional hybrids with A. griseum (Acer section Trifoliata) are also known, in which the basal lobes of the leaf are reduced in size, making the leaves appear almost three-lobed (trifoliate).
Distribution
The sycamore is native to central and eastern Europe and western Asia. Its natural range includes Albania, Austria, Belgium, Bulgaria, Czech Republic, France, Georgia, Germany, Greece, Hungary, Italy, Lithuania, Netherlands, Poland, Portugal, Romania, southern Russia, Spain, Switzerland, East Thrace and the former Yugoslavia. Reports of it occurring in eastern Turkey have been found to refer to A. heldreichii subsp. trautvetteri.
It was probably introduced into Britain in the Tudor period by 1500 and was first recorded in the wild in 1632 in Kent. The date of its first introduction into Ireland is unclear, but the oldest specimen in Ireland is in County Cavan and dates from the seventeenth century. It was introduced into Sweden around 1770 with seeds obtained from Holland.
The lack of old native names for the tree has been cited as evidence of its absence from Britain before its introduction in around 1487, but this is challenged by the presence of an old Scottish Gaelic name for it, fior chrann, which suggests a longer presence in Scotland, at least as far back as the Gaelic settlement of Dál Riata in the late 6th and early 7th centuries. This would make it either an archaeophyte (a naturalised tree introduced by humans before 1500) or perhaps native, if it can be shown to have reached Scotland without human intervention. At present it is usually classified as a neophyte, a plant that is naturalised but arrived with humans in or after the year 1500. Today, the sycamore is present in 3,461 (89.7%) of hectads in Britain, more than any native tree species.
The sycamore has been introduced to suitable locations outside Europe as an attractive tree for parks, streets and gardens, including in the United States, Canada, Australia (Victoria and Tasmania), Chile, New Zealand, Patagonia and the laurel forests of Madeira and the Azores. At the time of its introduction it was probably not appreciated that its prolific production of seeds might one day cause a problem as it spread and out-competed native species. The tree is now considered an environmental weed in some parts of Australia, including the Yarra Ranges, Mount Macedon, the area near Daylesford and parts of the Dandenong Ranges in Victoria, where it is naturalised in the eucalypt forests. The sycamore is also scattered in north-eastern Tasmania and at Taroona, near the Derwent River, in southern Hobart. It is considered an invasive species in New Zealand, Norway, and environmentally sensitive locations in the United Kingdom.
In about 1870, the sycamore was introduced into the United States, where it was planted in New York state and New Jersey. It was later cultivated as a park or street tree in New England and the Mid-Atlantic states. By the early part of the 21st century it was naturalised in states including Connecticut, Delaware, Illinois, Kentucky, Maine, Michigan, North Carolina, New Jersey, New York, Pennsylvania and Rhode Island, as well as in Washington, D.C., and the Canadian provinces of British Columbia, New Brunswick, Nova Scotia and Ontario. The United States Department of Agriculture considers it an invasive species.
Ecology
In its native range, the sycamore is a natural component of birch (Betula sp.), beech (Fagus sp.) and fir (Abies sp.) forests. It readily invades disturbed habitats such as forest plantations, abandoned farmland and brownfield land, railway lines and roadside verges, hedgerows, and native and semi-natural woodland. In New Zealand, it invades the high-country tussock grassland. As an introduced, invasive species, it may degrade the laurel forest of Madeira and Portugal and is a potential threat to the rare endemic Madeiran orchid, Dactylorhiza foliosa.
It is tolerant of a wide range of soil types and pH, except heavy clay, and is at its best on nutrient-rich, slightly calcareous soils. The roots of the sycamore form highly specific beneficial mycorrhizal associations with the fungus Glomus hoi, which promotes phosphorus uptake from the soil. Sycamore mycorrhizas are of the internal arbuscular mycorrhizal type, in which the fungus grows within the tissues of the root and forms branched, tree-like structures within the cells of the root cortex.
The larvae of a number of species of moth use the leaves as a food source. These include the sycamore moth (Acronicta aceris), the maple prominent (Ptilodon cucullina) and the plumed prominent (Ptilophora plumigera). The horse-chestnut leaf miner (Cameraria ohridella) occasionally lays its eggs on the sycamore, although 70% of the larvae do not survive beyond the second instar. The leaves attract aphids, and also the ladybirds and hoverflies that feed on them. The flowers produce copious amounts of nectar and pollen and are attractive to bees and other insects, and the seeds are eaten by small mammals such as voles and birds. As an introduced plant, in Britain the sycamore has a relatively small associated insect fauna of about 15 species, but it does have a larger range of leafhoppers than does the native field maple.
The tree may also be attacked by the horse chestnut scale insect (Pulvinaria regalis), which sucks sap from the trunk and branches, but does not cause serious damage to the tree. Sometimes squirrels will strip the bark off branches, girdling the stem; as a result whole branches may die, leaving brown, wilted leaves.
The sycamore gall mite Eriophyes macrorhynchus produces small red galls, similar to those of the nail gall mite Eriophyes tiliae, on the leaves of sycamore and field maple (Acer campestre) from April onwards. Another mite, Aceria pseudoplatani, causes a 'sycamore felt gall' on the underside of the leaves of both sycamore and Norway maple (Acer platanoides). The sycamore aphid Drepanosiphum platanoidis sucks sap from buds and foliage, producing large quantities of sticky honeydew that contaminate foliage, cars and garden furniture beneath.
The sycamore is susceptible to sooty bark disease, caused by the fungus Cryptostroma corticale, which causes wilting of the crown and the death of branches. Rectangular patches of bark become detached, exposing thick layers of black fungal spores. The fungus may be present in the heartwood without symptoms for many years, working its way towards the bark after long, hot summers. The spores are highly allergenic and cause a hypersensitivity pneumonitis known as maple bark stripper's disease. Less serious is the fungus Rhytisma acerinum, which causes the disease known as tar spot, in which black spots with yellow margins form on the foliage; the leaves may fall prematurely but the vigour of the tree is little affected. Sycamore leaf spot, caused by the fungus Cristulariella depraedans, results in pale blotches on the leaves which later dry up and fall. This disease can cause moderate leaf loss, but trees are little affected in the long run.
The fungus Coniothyrium ferrarisianum has also been isolated from leaves of Acer pseudoplatanus, in Italy in 1958.
Toxicity
Horses eating the seeds or emergent seedlings of A. pseudoplatanus can develop atypical myopathy, an often fatal muscle condition.
Cultivation
Sycamore self-seeds very vigorously, the seeds germinating en masse in the spring so that there is little or no seed bank in the soil. It is readily propagated from seed in cultivation, but varieties cannot be relied on to breed true. Special cultivars such as A. pseudoplatanus 'Brilliantissimum' may be propagated by grafting. This variety is notable for the bright salmon-pink colour of the young foliage and is the only sycamore cultivar to have gained the Royal Horticultural Society's Award of Garden Merit. A rare weeping form with dangling branches, A. pseudoplatanus var. pendulum, was first sold by Knight & Perry's exotic nursery in Chelsea, England, before 1850, when the name was published by W.H. Baxter in the supplement to Loudon's Hortus Britannicus, but no specimens of this cultivar are known to survive.
The sycamore is noted for its tolerance of wind, urban pollution, salt spray and low summer temperatures, which makes it a popular tree for planting in cities, along roads treated with salt in winter, and in coastal localities. It is cultivated and widely naturalised north of its native range in Northern Europe, notably in the British Isles and Scandinavia north to Tromsø, Norway (seeds can ripen as far north as Vesterålen); Reykjavík, Iceland; and Tórshavn on the Faroe Islands. It now occurs throughout the British Isles, having probably been introduced in the 16th century.
Sycamores make new growth from the stump or roots if cut down and can therefore be coppiced to produce poles and other types of small timber. Its coppice stools grow comparatively rapidly, reaching as much as in length in the first year after initial harvesting.
It is grown as a species for medium-to-large bonsai in many areas of Europe, where some fine specimens can be found.
Uses
Sycamore is planted in parks for ornamental purposes and sometimes as a street tree, as its tolerance of air pollution makes it suitable for urban plantings. Owing to its tolerance of wind, it has often been planted in coastal and exposed areas as a windbreak.
It produces a hard-wearing, white or cream, close-grained timber that turns golden with age. The wood can be worked and sawn in any direction and is used for making musical instruments, furniture, joinery, wood flooring and parquetry. Because it is non-staining, it is used for kitchen utensils such as wooden spoons, bowls, rolling pins and chopping boards. In Scotland it has traditionally been used for making fine boxes, sometimes in association with contrasting, dark-coloured laburnum wood.
Occasionally, trees produce wood with a wavy grain, which greatly increases its value for decorative veneers. The wood is of medium weight for a hardwood, at 630 kg per cubic metre. It is a traditional wood for the backs, necks and scrolls of violins, and is often marketed as rippled sycamore. Whistles can be made from straight twigs when the rising sap allows the bark to be separated, and these, along with sycamore branches, are used in customs associated with early May in Cornwall. The wood is also used for fuel, being easy to saw and to split with an axe, and producing a hot flame and good embers when burnt.
In Scotland, sycamores were once a favoured tree for hangings, because their lower branches rarely broke under the strain. Both male and female flowers produce abundant nectar, which makes a fragrant, delicately flavoured and pale-coloured honey. The nectar and copious dull yellow ochre pollen are collected by honeybees as food sources. The sap rises vigorously in the spring and like that of sugar maple can be tapped to provide a refreshing drink, as a source of sugar and to make syrup or beer.
Notable specimens
Tolpuddle Martyrs' Tree
Under this sycamore tree at Tolpuddle in Dorset, England, six agricultural labourers, known as the Tolpuddle Martyrs, formed an early trades union in 1834. They were found to have breached the Unlawful Oaths Act 1797 and were transported to Australia; the subsequent public outcry led to their release and return. The tree now has a girth of 5.9 metres (19 feet 4 inches), and a 2005 study dated it to 1680. It is cared for by the National Trust, which pollarded it in 2002 and 2014.
Corstorphine Sycamore Tree
An ancient sycamore (sometimes described as a "plane") with distinctive yellow foliage formerly stood in the village of Corstorphine, now a suburb of Edinburgh, Scotland. The tree was reputedly planted in the 15th century and is named as the form Acer pseudoplatanus f. corstorphinense Schwer. It was claimed to be the largest sycamore in Scotland and was also the scene of the murder of James, Lord Forrester, in 1679. The tree was blown down in a storm on Boxing Day 1998, but a replacement, grown from a cutting, now stands in the churchyard of Corstorphine Kirk. The tree is commemorated in the badge of the Corstorphine Bowling Club of Edinburgh, designed in 1950 to feature the Corstorphine sycamore and a single horn, and redesigned in 1991 for the club's centenary.
Newbattle Abbey sycamore
The Newbattle Abbey sycamore near Dalkeith, planted in 1550, was the specimen with the earliest known planting date in Scotland. It had achieved a girth of and a height of by the time it was toppled by a gale in May 2006 at the age of 456 years.
Clonenagh Money Tree
Saint Fintan founded a monastery at Clonenagh in County Laois, Ireland, in the sixth century and it had a spring beside it. This was considered holy and was visited by pilgrims. In the nineteenth century, a Protestant land owner, annoyed at people visiting the site, filled the well in, whereupon the water started to flow into the hollow interior of a sycamore tree on the other side of the road. Filled with amazement, people hung rags on the tree and pressed coins into its trunk as votive offerings and it became known as the "Money Tree". Some years later, it fell down, but new shoots appeared from its base, and the water still welled up. It remains a place of veneration on St Fintan's day, February 17.
Sycamore Gap Tree
The Sycamore Gap Tree or Robin Hood Tree was a sycamore tree standing next to Hadrian's Wall near Crag Lough in Northumberland, England. It was located in a dramatic dip in the landscape and was a popular photographic subject, described as one of the most photographed trees in the country. It derived its alternative name from featuring in a prominent scene in the 1991 film Robin Hood: Prince of Thieves. The tree was a few hundred years old and once stood with others, but they had been removed over time, possibly to improve sightlines or for gamekeeping purposes. It was felled overnight on 28 September 2023; a police investigation was launched the following day.
Alnus glutinosa

Alnus glutinosa, the common alder, black alder, European alder, European black alder, or just alder, is a species of tree in the family Betulaceae, native to most of Europe, southwest Asia and northern Africa. It thrives in wet locations where its association with the bacterium Frankia alni enables it to grow in poor quality soils. It is a medium-sized, short-lived tree growing to a height of up to 30 metres (98 feet). It has short-stalked rounded leaves and separate male and female flowers in the form of catkins. The small, rounded fruits are cone-like and the seeds are dispersed by wind and water.
The common alder provides food and shelter for wildlife, with a number of insects, lichens and fungi being completely dependent on the tree. It is a pioneer species, colonising vacant land and forming mixed forests as other trees appear in its wake. Eventually common alder dies out of woodlands because the seedlings need more light than is available on the forest floor. Its more usual habitat is forest edges, swamps and riverside corridors. The timber has been used in underwater foundations and for manufacture of paper and fibreboard, for smoking foods, for joinery, turnery and carving. Products of the tree have been used in ethnobotany, providing folk remedies for various ailments, and research has shown that extracts of the seeds are active against pathogenic bacteria.
Description
A. glutinosa is a tree that thrives in moist soils, and grows under favourable circumstances to a height of and exceptionally up to . Young trees have an upright habit of growth with a main axial stem but older trees develop an arched crown with crooked branches. The base of the trunk produces adventitious roots which grow down to the soil and may appear to be propping the trunk up. The bark of young trees is smooth, glossy and greenish-brown while in older trees it is dark grey and fissured. The branches are smooth and somewhat sticky, being scattered with resinous warts. The buds are purplish-brown and have short stalks. Both male and female catkins form in the autumn and remain dormant during the winter.
The leaves of the common alder are short-stalked, rounded, up to long with a slightly wedge-shaped base and a wavy, serrated margin. They have a glossy dark green upper surface and paler green underside with rusty-brown hairs in the angles of the veins. As with some other trees growing near water, the common alder keeps its leaves longer than do trees in drier situations, and the leaves remain green late into the autumn. As the Latin name glutinosa implies, the buds and young leaves are sticky with a resinous gum.
The species is monoecious and the flowers are wind-pollinated; the slender cylindrical male catkins are pendulous, reddish in colour and long; the female flowers are upright, broad and green, with short stalks. During the autumn they become dark brown to black in colour, hard, somewhat woody, and superficially similar to small conifer cones. They last through the winter and the small winged seeds are mostly scattered the following spring. The seeds are flattened reddish-brown nuts edged with webbing filled with pockets of air. This enables them to float for about a month which allows the seed to disperse widely.
Unlike some other species of tree, common alders do not produce shade leaves. The respiration rate of shaded foliage is the same as that of well-lit leaves, but the rate of assimilation is lower. This means that as a tree in woodland grows taller, the lower branches die and soon decay, leaving a small crown and an unbranched trunk.
Taxonomy
A. glutinosa was first described by Carl Linnaeus in 1753, as one of two varieties of alder (the other being Alnus incana), which he regarded as a single species Betula alnus. In 1785, Jean-Baptiste Lamarck treated it as a full species under the name Betula glutinosa. Its present scientific name is due to Joseph Gaertner, who in 1791 accepted the separation of alders from birches, and transferred the species to Alnus. The epithet glutinosa means "sticky", referring particularly to the young shoots.
Within the genus Alnus, the common alder is placed in subgenus Alnus as part of a closely related group of species including the grey alder, A. incana, with which it hybridizes to form the hybrid A. × hybrida. Both species are diploid. In 2017, populations formerly considered to be A. glutinosa were found to be separate, polyploid species: A. lusitanica, which is native to the Iberian Peninsula and Morocco, and A. rohlenae, which is native to the western part of the Balkan Peninsula.
Distribution and habitat
The common alder is native to almost the whole of continental Europe (except for both the extreme north and south) as well as the United Kingdom and Ireland. In Asia its range includes Turkey, Iran and Kazakhstan, and in Africa it is found in Tunisia, Algeria and Morocco. It is naturalised in the Azores. It has been introduced, either by accident or by intent, to Canada, the United States, Chile, South Africa, Australia and New Zealand. Its natural habitat is in moist ground near rivers, ponds and lakes but it can also grow in drier locations and sometimes occurs in mixed woodland and on forest edges. It tolerates a range of soil types and grows best at a pH of between 5.5 and 7.2. Because of its association with the nitrogen-fixing bacterium Frankia alni, it can grow in nutrient-poor soils where few other trees thrive.
European alder does not usually grow in areas where the average daily temperature is above freezing for longer than six months; its range is mainly restricted in Scandinavia, but it also inhabits other regions.
At the south-eastern boundary of its Eurasian distribution, it requires about 500 millimetres of rain to fall annually. Although European alder can withstand winter temperatures as low as −54°C, winter damage causes 80% of young European alder plantings in North Carolina to die back. Given the overwinter minimum there of −18°C, early low temperatures in November and December may have caused more damage than the intense cold.
Ecology
The common alder is most noted for its symbiotic relationship with the bacterium Frankia alni, which forms nodules on the tree's roots. This bacterium absorbs nitrogen from the air and fixes it in a form available to the tree. In return, the bacterium receives carbon products produced by the tree through photosynthesis. This relationship, which improves the fertility of the soil, has established the common alder as an important pioneer species in ecological succession.
The common alder is susceptible to Phytophthora alni, a recently evolved species of oomycete plant pathogen probably of hybrid origin. This is the causal agent of phytophthora disease of alder which is causing extensive mortality of the trees in some parts of Europe. The symptoms of this infection include the death of roots and of patches of bark, dark spots near the base of the trunk, yellowing of leaves and in subsequent years, the death of branches and sometimes the whole tree. Taphrina alni is a fungal plant pathogen that causes alder tongue gall, a chemically induced distortion of female catkins. The gall develops on the maturing fruits and produces spores which are carried by the wind to other trees. This gall is believed to be harmless to the tree. Another, also harmless, gall is caused by a midge, Eriophyes inangulis, which sucks sap from the leaves forming pustules.
The common alder is important to wildlife all year round and the seeds are a useful winter food for birds. Deer, sheep, hares and rabbits feed on the tree and it provides shelter for livestock in winter. It shades the water of rivers and streams, moderating the water temperature, and this benefits fish which also find safety among its exposed roots in times of flood. The common alder is the foodplant of the larvae of a number of different butterflies and moths and is associated with over 140 species of plant-eating insect. The tree is also a host to a variety of mosses and lichens which particularly flourish in the humid moist environment of streamside trees. Some common lichens found growing on the trunk and branches include tree lungwort (Lobaria pulmonaria), Menneguzzia terebrata and Stenocybe pullatula, the last of which is restricted to alders. Some 47 species of mycorrhizal fungi have been found growing in symbiosis with the common alder, both partners benefiting from an exchange of nutrients. As well as several species of Naucoria, these symbionts include Russula alnetorum, the milkcaps Lactarius obscuratus and Lactarius cyathula, and the alder roll-rim Paxillus filamentosus, all of which grow nowhere else except in association with alders. In spring, the catkin cup Ciboria amentacea grows on fallen alder catkins.
As an introduced species, the common alder can affect the ecology of its new locality. It is a fast-growing tree and can quickly form dense woods where little light reaches the ground, and this may inhibit the growth of native plants. The presence of the nitrogen-fixing bacteria and the annual accumulation of leaf litter from the trees also alters the nutrient status of the soil. It also increases the availability of phosphorus in the ground, and the tree's dense network of roots can cause increased sedimentation in pools and waterways. It spreads easily by wind-borne seed, may be dispersed to a certain extent by birds and the woody fruits can float away from the parent tree. When the tree is felled, regrowth occurs from the stump, and logs and fallen branches can take root. In the Midwestern United States, Alnus glutinosa is a highly invasive terrestrial plant and is prohibited in Indiana. A. glutinosa is classed as an environmental weed in New Zealand.
Toxicity
Pollen from the common alder, along with that from birch and hazel, is one of the many sources of tree pollen allergy. As the pollen is often present in the atmosphere at the same time as that of birch, hazel, hornbeam and oak, and they have similar physicochemical properties, it is difficult to separate out their individual effects. In central Europe, these tree pollens are the second most common cause of allergic conditions after grass pollen.
Uses
The common alder is used as a pioneer species and to stabilise river banks, to assist in flood control, to purify water in waterlogged soils and to moderate the temperature and nutrient status of water bodies. It can be grown by itself or in mixed species plantations, and the nitrogen-rich leaves falling to the ground enrich the soil and increase the production of such trees as walnut, Douglas-fir and poplar on poor quality soils. Although the tree can live for up to 160 years, it is best felled for timber at 60 to 70 years before heart rot sets in.
On marshy ground it is important as coppice-wood, being cut near the base to encourage the production of straight poles. It is capable of enduring clipping as well as marine climatic conditions and may be cultivated as a fast-growing windbreak. In woodland natural regeneration is not possible as the seeds need sufficient nutrients, water and light to germinate. Such conditions are rarely found at the forest floor and as the forest matures, the alder trees in it die out. The species is cultivated as a specimen tree in parks and gardens, and the cultivar 'Imperialis' has gained the Royal Horticultural Society's Award of Garden Merit.
Timber
The wood is soft, white when first cut, turning to pale red; the knots are attractively mottled. The timber is not used where strength is required in the construction industry, but is used for paper-making, the manufacture of fibreboard and the production of energy. Under water the wood is very durable and is used for deep foundations of buildings. The piles beneath the Rialto in Venice, and the foundations of several medieval cathedrals are made of alder. The Roman architect Vitruvius mentioned that the timber was used in the construction of the causeways across the Ravenna marshes. The wood is used in joinery, both as solid timber and as veneer, where its grain and colour are appreciated, and it takes dye well. As the wood is soft, flexible and somewhat light, it can be easily worked as well as split. It is also valued in turnery, carving, furniture making, window frames, clogs, toys, blocks, pencils and bowls.
European (common) alder is a tonewood commonly used in the manufacture of electric guitars. It has a balanced, even tone with good midrange projection, making it suitable for a wide variety of musical applications. It is relatively lightweight, easy to work and sand, accepts glue, stain, paint and finish well, and is inexpensive, all of which has made it a favourite of large factories mass-producing instruments. Fender has used alder continuously in its electric guitars since 1956.
Tanning and dyeing
The bark of the common alder has long been used in tanning and dyeing. The bark and twigs contain 16 to 20% tannic acid but their usefulness in tanning is limited by the strong accompanying colour they produce. Depending on the mordant and the methods used, various shades of brown, fawn, and yellowish-orange hues can be imparted to wool, cotton and silk. Alder bark can also be used with iron sulphate to create a black dye which can substitute for the use of sumach or galls. The Laplanders are said to chew the bark and use their saliva to dye leather. The shoots of the common alder produce a yellowish or cinnamon-coloured dye if cut early in the year. Other parts of the tree are also used in dyeing; the catkins can yield a green colour and the fresh-cut wood a pinkish-fawn colour.
Medicine
The bark of common alder has traditionally been used as an astringent, a cathartic, a haemostatic, a febrifuge, a tonic and a restorative (a substance able to restore normal health). A decoction of the bark has been used to treat swelling, inflammation and rheumatism, as an emetic, and to treat pharyngitis and sore throat. Ground-up bark has been used as an ingredient in toothpaste, and the inner bark can be boiled in vinegar to provide a skin wash for treating dermatitis, lice and scabies. The leaves have been used to reduce breast discomfort in nursing mothers, and folk remedies advocate the use of the leaves against various forms of cancer. Alpine farmers are said to use the leaves to alleviate rheumatism by placing a heated bag full of them on the affected areas. Alder leaves are eaten by cows, sheep, goats and horses, though pigs refuse them; consumption of the leaves is said to blacken the tongue and to be harmful to horses.
In a research study, extracts from the seeds of the common alder have been found to be active against all the eight pathogenic bacteria against which they were tested, which included Escherichia coli and methicillin-resistant Staphylococcus aureus (MRSA). The only extract to have significant antioxidant activity was that extracted in methanol. All extracts were of low toxicity to brine shrimps. These results suggest that the seeds could be further investigated for use in the development of possible anti-MRSA drugs.
Other uses
It is also the traditional wood that is burnt to produce smoked fish and other smoked foods, though in some areas other woods are now more often used. It supplies high quality charcoal.
The leaves of this tree are sticky and if they are spread on the floor of a room, their adhesive surface is said to trap fleas.
Chemical constituents of Alnus glutinosa include hirsutanonol, oregonin, genkwanin, rhododendrin (3-(4-hydroxyphenyl)-1-methylpropyl β-D-glucopyranoside) and penta-2,3-dienedioic acid.
Alnus glutinosa is planted on semi-coke dumps as part of environmental restoration projects because it encourages other plants to grow.
Sweet chestnut

The sweet chestnut (Castanea sativa), also known as the Spanish chestnut or just chestnut, is a species of tree in the family Fagaceae, native to Southern Europe and Asia Minor, and widely cultivated throughout the temperate world. A substantial, long-lived deciduous tree, it produces an edible seed, the chestnut, which has been used in cooking since ancient times.
Description
Castanea sativa attains a height of with a trunk often in diameter. Around 20 trees are recorded with diameters over including one in diameter at breast height. A famous ancient tree known as the Hundred Horse Chestnut in Sicily was historically recorded at in diameter (although it has split into multiple trunks above ground). The bark often has a net-shaped (retiform) pattern with deep furrows or fissures running spirally in both directions up the trunk. The trunk is mostly straight with branching starting at low heights. The oblong-lanceolate, boldly toothed leaves are long and broad.
The flowers of both sexes are borne in long, upright catkins, with the male flowers in the upper part and the female flowers in the lower part. In the Northern Hemisphere they appear in late June to July, and by autumn the female flowers develop into spiny cupules containing 3–7 brownish nuts that are shed during October; the spiky cupule deters predators from the seed. The sweet chestnut is naturally self-incompatible, meaning that a plant cannot pollinate itself, making cross-pollination necessary. Some cultivars produce only one large seed per cupule, while others produce up to three. The nut itself has two skins: an external, shiny brown part, and an internal skin adhering to the fruit. Inside, there is an edible, creamy-white part developed from the cotyledons.
Sweet chestnut trees live to an age of 500 to 600 years. In cultivation they may even grow as old as 1,000 years or more.
Taxonomy
The tree is to be distinguished from the horse chestnut, Aesculus hippocastanum, to which it is only distantly related; the horse chestnut bears similar-looking seeds (conkers) in a similar seed case, but they are not palatable to humans. Other common names include "Spanish chestnut" and "marron" (French for "chestnut"). The generic name Castanea is the old Latin name for the plant, while the specific epithet sativa means "cultivated by humans". Some selected varieties, such as 'Marigoule', 'Marisol' and 'Maraval', are smaller and more compact in growth, yield earlier in life and differ in ripening time.
Distribution and habitat
The species is native to Southern Europe and Asia Minor. It is found across the Mediterranean region, from the Caspian Sea to the Atlantic Ocean. It is thought to have survived the last ice age in several refuges in Southern Europe, on the southern coast of the Black Sea with a main centre on the southern slope of the Caucasus and in the region of north-western Syria, possibly extending into Lebanon.
The species is widely distributed throughout Europe, where in 2004 Castanea sativa was grown on of forest, of which were mainly cultivated for wood and for fruit production. In some European countries, C. sativa has only been introduced recently, for example in Slovakia or the Netherlands.
The tree requires a mild climate and adequate moisture for good growth and a good nut harvest. Its year-growth (but not the rest of the tree) is sensitive to late spring and early autumn frosts; it is also intolerant of lime. Under forest conditions, it will tolerate moderate shade well. It can live to more than 2,000 years of age in natural conditions, such as the Hundred Horse Chestnut near Mount Etna in eastern Sicily.
Ecology
The leaves provide food for some animals, including Lepidoptera such as the case-bearer moth Coleophora anatipennella and North American rose chafer Macrodactylus subspinosus.
The two major fungal pathogens of the sweet chestnut are the chestnut blight (Cryphonectria parasitica) and the ink disease caused by Phytophthora cambivora and P. cinnamomi. In North America, as well as in Southern Europe, Cryphonectria parasitica destroyed most of the chestnut population in the 20th century. With biological control, the sweet chestnut population is no longer threatened by chestnut blight and is regenerating. Ink disease mostly infests trees on humid soils, the mycelium invading the roots and causing the leaves to wilt; absence of fruit formation leads to dieback of the plant. The ink disease is named after the black exudate at the base of the trunk. Cultivars resistant to ink disease are now available. Phytophthora cambivora caused serious damage in Asia and the US, and it continues to destroy new plantations in Europe.
Another serious pest, which is difficult to control, is the gall wasp (Dryocosmus kuriphilus), recently introduced into Southern Europe from its native Asia.
Cultivation
History
Pollen data indicate that the first spreading of Castanea sativa due to human activity started around 2100–2050 B.C. in Anatolia, northeastern Greece and southeastern Bulgaria. Compared to other crops, the sweet chestnut was probably of relatively minor importance and was distributed very heterogeneously throughout these regions. The first charcoal remains of sweet chestnut date only from around 850–950 B.C., making it very difficult to infer a precise origin history. A later but more reliable source is the literary record of Ancient Greece, the richest work being Theophrastus's Historia plantarum, written in the third century B.C. Theophrastus focuses mainly on the use of sweet chestnut wood as timber and charcoal, mentioning the fruit only once, when commenting on the digestive difficulties it causes while praising its nourishing quality. Several Greek authors wrote about the medicinal properties of the sweet chestnut, specifically as a remedy against lacerations of the lips and of the oesophagus.
Similar to the introduction of grape vine and olive cultivation to the Latin world, C. sativa is thought to have been introduced during the colonisation of the Italian peninsula by the Greeks. Further clues pointing to this theory can be found in the work of Pliny the Elder, who mentions only Greek colonies in connection with sweet chestnut cultivation. Today's phylogenetic map of the sweet chestnut, while not fully understood, shows greater genetic similarity between Italian and western Anatolian C. sativa trees than with eastern Anatolian specimens, reinforcing these findings. Nonetheless, until the end of the pre-Christian era, the spread and use of the chestnut in Italy remained limited. Carbonised sweet chestnuts were found in a Roman villa at Torre Annunziata near Naples, destroyed by the eruption of Mount Vesuvius in A.D. 79.
Clues in art and literature indicate a dislike of the sweet chestnut by the Roman aristocracy. Like Theophrastus, Latin authors are sceptical of the sweet chestnut as a fruit, and Pliny the Elder even goes as far as admiring how well nature has hidden this fruit of apparently so little value. In the beginning of the Christian era, people probably started to realize the value and versatility of sweet chestnut wood, leading to a slow spread of the cultivation of C. sativa trees, a theory that is supported by pollen data and literary sources, as well as the increased use of sweet chestnut wood as poles and in supporting structures, wood works and pier building between A.D. 100 and 600.
Increasing sweet chestnut pollen appearances in Switzerland, France, Germany and the Iberian peninsula in the first century A.D. suggest the spreading of cultivated sweet chestnut trees by the Romans. Contrary to that notion, other scientists have found no indication of the Romans spreading C. sativa before the fifth century. Husks of sweet chestnuts, dated to the third or early fourth century, have been identified at the bottom of a Roman well at Great Holts Farm, Boreham, Essex, England, but this deposit includes remains of other exotic food plants and provides no evidence that any of them was grown locally. No other evidence of sweet chestnut in Roman Britain has been confirmed. Indeed, no centre of sweet chestnut cultivation outside the Italian peninsula in Roman times has been detected. Widespread use of chestnut in western Europe started in the early Middle Ages and flourished in the late Middle Ages. In the mid-seventh-century Lombard laws, a composition of one solidus is set for felling a chestnut tree (or, likewise, a hazel, pear or apple tree) belonging to another person (Edictum Rothari, No. 301, 643 AD). Since the beginning of the 20th century, owing to depopulation of the countryside and the abandonment of the sweet chestnut as a staple food, as well as the spread of chestnut blight and ink disease, C. sativa cultivation has declined dramatically. Today, sweet chestnut production may be at a turning point again, because the development of high-value sweet chestnut products, combined with the changing needs of an urban society, is leading to a revival in C. sativa cultivation.
Cultivation forms
Three different cultivation systems for the sweet chestnut can be distinguished:
Coppicing: Mainly for wood extraction. Under standard conditions the yield is 15 m³ of wood per hectare per year.
Selve: Fruit production from grafted trees. The trees have a short trunk and a broad crown; they are planted at low density, and the ground between them is often used as pasture.
High forest: Wood and fruit production. This less intensive cultivation form yields 4–12 dt/ha, with trees replaced every 50–80 years. The trees grow from seed and form a dense canopy.
The field management is dependent on the cultivation system. While cleaning the soil from the leaves and pruning is the norm, the use of fertilizer, irrigation and pesticides is less common and reserved for more intensive cultivation.
Requirements
The sweet chestnut tree grows well on limestone-free, deeply weathered soil. The optimal soil pH is between 4.5 and 6, and the tree cannot tolerate soil compaction. Its tolerance of wet ground and of clay-rich soils is very low. It is a heat-loving tree which needs a long vegetation period. The optimal average temperature is between and in January the temperature should preferably not be below but it may tolerate temperatures as low as . Low temperatures in autumn can damage the fruit. The maximum altitude depends strongly on the climate. In general, the climate should be similar to that required for viticulture. Optimal precipitation is between . Before planting, seeds must be stratified at so that germination can start 30–40 days later. After a year, the young trees are transplanted.
Harvest
A tree grown from seed may take 20 years or more before it bears fruits, but a grafted cultivar such as 'Marron de Lyon' or 'Paragon' may start production within five years of being planted. Both cultivars bear fruits with a single large kernel, rather than the usual two to four smaller kernels.
The fruit yield per tree is usually between , but can get as high as . Harvest time is between mid-September and mid-November. There are three harvesting techniques:
By hand: The sweet chestnuts are gathered with a rake or broom, at a harvest speed of every hour depending on the soil relief. The spiny cupule also makes the harvest more complicated, and even painful, for the worker.
By hand with nets: This technique is less time-consuming and protects the fruits from injuries. However, setting up the nets is labor intensive.
Mechanical: The fruits are collected with a machine that works much like a vacuum cleaner. This is time-saving and economical, but some fruits may be damaged, and the machinery requires investment. Furthermore, visual sorting is not possible.
Post-harvest treatment
The most widespread treatment before storage is water curing, a process in which the sweet chestnuts are immersed in water for nine days. The aim of this practice is to limit the main storage problems threatening the sweet chestnut: fungi development and the presence of insect worms. As an alternative to water curing, hot water treatment is also commercially used.
After water treatment, the sweet chestnuts are stored in a controlled environment with high carbon dioxide concentrations. In contrast to a cold storage system, where the fruits are stored at low temperatures in untreated air, the controlled environment method avoids flesh hardening which negatively impacts the processability of the product.
Cultivars
The ornamental cultivar Castanea sativa 'Albomarginata' has gained the Royal Horticultural Society's Award of Garden Merit.
French origin
Bouche de Betizac
Maraval
Marigoule
Marsol
Precoce Migoule
American origin
Colossal
Labor Day
Uses
The species is widely cultivated for its edible seeds (also called nuts) and for its wood.
Sweet chestnut has been listed as one of the 38 substances used to prepare Bach flower remedies, a kind of alternative medicine promoted for its supposed effect on health. However, according to Cancer Research UK, "there is no scientific evidence to prove that flower remedies can control, cure or prevent any type of disease, including cancer".
Food
The species' large genetic diversity and its many cultivars are exploited for a range of uses, including flour, boiling, roasting, drying, and sweets.
The raw nuts, though edible, have a skin which is astringent and unpleasant to eat when still moist; after drying for a time the thin skin loses its astringency but is still better removed to reach the white fruit underneath. Cooking dry in an oven or fire normally helps remove this skin. Chestnuts are traditionally roasted in their tough brown husks after removing the spiny cupules in which they grow on the tree, the husks being peeled off and discarded and the hot chestnuts dipped in salt before eating them. Roast chestnuts are traditionally sold in streets, markets and fairs by street vendors with mobile or static braziers.
The skin of raw peeled chestnuts can be removed relatively easily by quickly blanching the nuts after scoring them with a cross slit at the tufted end. Once cooked, chestnuts acquire a sweet flavor and a floury texture similar to the sweet potato. The cooked nuts can be used for stuffing poultry, as a vegetable or in nut roasts. They can also be used in confections, puddings, desserts and cakes. They are used for flour, bread making, a cereal substitute, a coffee substitute, a thickener in soups and other cookery uses, as well as for fattening stock. A sugar can be extracted from them. The Corsican variety of polenta (called pulenta) is made with sweet chestnut flour. A local variety of Corsican beer also uses chestnuts. The product is sold as a sweetened paste mixed with vanilla, sweetened or unsweetened as chestnut purée or purée de marron, and as candied chestnuts, marrons glacés. In Switzerland, it is often served as Vermicelles.
Roman soldiers were given chestnut porridge before going into battle.
Leaf infusions are used in respiratory diseases and are a popular remedy for whooping cough. A hair shampoo can be made from infusing leaves and fruit husks.
Nutritional constituents
The fat content is low and dominated by unsaturated fatty acids. Sweet chestnut is a good source of starch. The energy value per 100 g (3.5 oz) of C. sativa amounts to 891 kJ (213 kcal) (table). C. sativa is characterized by high moisture content which ranges from 41% to 59%. The chestnut provides a good source of copper, phosphorus, manganese and potassium (nutrition table). Its sugar content ranges from 14% to 20% dry weight, depending on the cultivar. Fructose is mostly responsible for the sweet taste.
Effect of processing
Sweet chestnut is well suited to human nutrition. Most sweet chestnut is consumed in processed form, which has an impact on the nutrient composition. Its naturally high concentration of organic acids is a key factor influencing the organoleptic characteristics of fruits and vegetables, namely flavor. Organic acids are also thought to play an important role against disease as antioxidants. Heat appears to be the most influential factor in decreasing the organic acid content. However, even after heating, the antioxidant activity of sweet chestnuts remains relatively high. On the other hand, the consumer must consider that roasting, boiling or frying has a big impact on the nutritional profile of the chestnut. Vitamin C decreases significantly, by 25–54% when boiled and by 2–77% when roasted. Nevertheless, roasted or boiled chestnuts may still be a solid vitamin C source, since 100 grams still represent about 20% of the recommended daily dietary intake.
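The retention claim above can be checked with simple arithmetic. In the sketch below, the raw vitamin C content (40 mg/100 g) and the daily reference intake (80 mg) are illustrative assumptions not given in the text; only the 25–54% boiling and 2–77% roasting loss ranges come from the passage:

```python
# Back-of-envelope check of vitamin C retention after cooking.
# ASSUMED inputs: 40 mg/100 g raw content and an 80 mg daily reference
# intake are hypothetical round numbers for illustration only.

def retained_mg(raw_mg: float, loss_pct: float) -> float:
    """Vitamin C remaining after a given percentage loss from cooking."""
    return raw_mg * (1 - loss_pct / 100)

RAW_MG_PER_100G = 40.0   # assumed raw vitamin C per 100 g
REFERENCE_MG = 80.0      # assumed recommended daily intake

worst_boiled = retained_mg(RAW_MG_PER_100G, 54)   # worst-case boiling loss, ~18.4 mg
share = worst_boiled / REFERENCE_MG               # fraction of the daily reference, ~0.23
print(round(worst_boiled, 1), round(share * 100))
```

Under these assumptions, even the worst-case boiling loss leaves roughly a fifth of the daily reference per 100 g, consistent with the ~20% figure quoted above.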
The sugar content is also affected by high temperatures. Four processes are decisive for the degradation of sugars during cooking: hydrolysis of starch to oligosaccharides and monosaccharides, decomposition of sucrose into glucose and fructose, caramelization of sugars, and degradation of sugars. Organic acids are also affected by high temperatures: their content decreases by about 50% after frying and by 15% after boiling. The aromatic characteristics of cooked chestnuts result from the degradation of saccharides, proteins and lipids, the caramelization of saccharides, and the Maillard reaction between reducing sugars and amino acids.
Wood
This tree responds very well to coppicing, which is still practised in Britain, and produces a good crop of tannin-rich wood every 12 to 30 years, depending on intended use and local growth rate. The tannin renders the young growing wood durable and weather resistant for outdoor use, thus suitable for posts, fencing or stakes. The wood is of light colour, hard and strong. It is used to make furniture, barrels (sometimes used to age balsamic vinegar), and roof beams notably in southern Europe (for example in houses of the Alpujarra, Spain, in southern France and elsewhere). The timber has a density of 560 kg per cubic meter, and due to its durability in ground contact is often used for external purposes such as fencing. It is also a good fuel, though not favoured for open fires as it tends to spit.
Tannin is found in the following proportions on a 10% moisture basis: bark (6.8%), wood (13.4%), seed husks (10–13%). The leaves also contain tannin.
Electric ray
The electric rays are a group of rays, flattened cartilaginous fish with enlarged pectoral fins, composing the order Torpediniformes. They are known for being capable of producing an electric discharge, ranging from 8 to 220 volts depending on the species, used to stun prey and for defense. There are 69 species in four families.
Perhaps the best known members are those of the genus Torpedo. The torpedo undersea weapon is named after it. The name comes from the Latin , 'to be stiffened or paralyzed', from the effect on someone who touches the fish.
Description
Electric rays have a rounded pectoral disc with two moderately large rounded-angular (not pointed or hooked) dorsal fins (reduced in some Narcinidae), and a stout muscular tail with a well-developed caudal fin. The body is thick and flabby, with soft loose skin with no dermal denticles or thorns. A pair of kidney-shaped electric organs are at the base of the pectoral fins. The snout is broad, large in the Narcinidae, but reduced in all other families. The mouth, nostrils, and five pairs of gill slits are underneath the disc.
Electric rays are found from shallow coastal waters down to at least deep. They are sluggish and slow-moving, propelling themselves with their tails, not by using their pectoral fins as other rays do. They feed on invertebrates and small fish. They lie in wait for prey below the sand or other substrate, using their electricity to stun and capture it.
Relationship to humans
History of research
The electrogenic properties of electric rays have been known since antiquity, although their nature was not understood. The ancient Greeks used electric rays to numb the pain of childbirth and operations. In his dialogue Meno, Plato has the character Meno accuse Socrates of "stunning" people with his puzzling questions, in a manner similar to the way the torpedo fish stuns with electricity. Scribonius Largus, a Roman physician, recorded the use of torpedo fish for treatment of headaches and gout in his Compositiones Medicae of 46 AD.
In the 1770s the electric organs of the torpedo ray were the subject of Royal Society papers by John Walsh and John Hunter. These appear to have influenced the thinking of Luigi Galvani and Alessandro Volta, the founders of electrophysiology and electrochemistry.
Henry Cavendish proposed that electric rays use electricity; in 1773 he built an artificial ray consisting of fish-shaped Leyden jars that successfully mimicked their behaviour.
In folklore
The torpedo fish, or electric ray, appears continuously in premodern natural histories as a magical creature, and its ability to numb fishermen without seeming to touch them was a significant source of evidence for the belief in occult qualities in nature during the ages before the discovery of electricity as an explanatory mode.
Bioelectricity
The electric rays have specialised electric organs. Many species of rays and skates outside the family have electric organs in the tail; however, the electric ray has two large kidney-shaped electric organs on each side of its head, where current passes from the lower to the upper surface of the body. The nerves that signal the organ to discharge branch repeatedly, then attach to the lower side of each plaque in the batteries. These are composed of hexagonal columns, closely packed in a honeycomb formation. Each column consists of 500 to more than 1000 plaques of modified striated muscle, adapted from the branchial (gill arch) muscles. In marine fish, these batteries are connected as a parallel circuit, whereas freshwater batteries are arranged in series. This allows freshwater rays to transmit discharges of higher voltage, as freshwater cannot conduct electricity as well as saltwater. With such a battery, an electric ray may electrocute larger prey with a voltage of between 8 volts in some narcinids to 220 volts in Torpedo nobiliana, the Atlantic torpedo.
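The series-versus-parallel contrast above follows directly from basic circuit rules: electromotive forces add along a series stack of plaques, while parallel columns add current capacity. A minimal sketch, using hypothetical round per-plaque values (the figures below are illustrative assumptions, not measurements from this article):

```python
# Idealized model of an electric organ as stacked electrochemical cells.
# Per-plaque EMF and per-column current are ASSUMED round values.

def organ_voltage(emf_per_plaque: float, plaques_in_series: int) -> float:
    """EMFs add along a series column of plaques."""
    return emf_per_plaque * plaques_in_series

def organ_current(current_per_column: float, columns_in_parallel: int) -> float:
    """Current capacities add across parallel columns."""
    return current_per_column * columns_in_parallel

# Marine-ray-like organ: shorter series stacks but many parallel columns,
# giving modest voltage and high current, suited to conductive seawater.
marine_v = organ_voltage(0.1, 500)     # ~50 V
marine_i = organ_current(0.05, 1000)   # ~50 A of current capacity

# Freshwater-fish-like organ: long series stacks give high voltage,
# needed because fresh water conducts poorly.
fresh_v = organ_voltage(0.1, 5000)     # ~500 V

print(marine_v, marine_i, fresh_v)
```

The model is deliberately crude (ideal sources, no internal resistance), but it captures why series arrangement favors voltage in fresh water while parallel arrangement favors current in salt water.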
Systematics
The 60 or so species of electric rays are grouped into about 12 genera and several families. The Narkinae are sometimes elevated to a family, the Narkidae. The torpedinids feed on large prey, which they stun with their electric organs and swallow whole, while the narcinids specialize on small prey on or in the bottom substrate. Both groups use electricity for defense, but it is unclear whether the narcinids use electricity in feeding.
Eschmeyer's Catalog of Fishes classifies the following families in the Torpediniformes:
Family Platyrhinidae D. S. Jordan, 1923 (thornbacks or fanrays)
Family Narkidae Fowler, 1934 (sleeper rays)
Family Narcinidae Gill, 1862 (electric rays)
Family Hypnidae Gill, 1862 (coffin rays)
Family Torpedinidae Henle, 1834 (torpedo electric rays or torpedo rays)
Ribes
Ribes is a genus of about 200 known species of flowering plants, most of them native to the temperate regions of the Northern Hemisphere. The species may be known as various kinds of currants, such as redcurrants, blackcurrants, and whitecurrants, or as gooseberries, and some are cultivated for their edible fruit or as ornamental plants. Ribes is the only genus in the family Grossulariaceae.
Description
Ribes species are medium-sized, shrub-like plants with strikingly diverse flowers and fruit. They have either palmately lobed or compound leaves, and some have thorns. The sepals of the flowers are larger than the petals, and fuse into a tube or saucer shape. The ovary is inferior, maturing into a berry with many seeds.
Taxonomy
Ribes is the single genus in the Saxifragales family Grossulariaceae. Although once included in the broader circumscription of Saxifragaceae sensu lato, it is now positioned as a sister group to Saxifragaceae sensu stricto.
Subdivision
First treated on a worldwide basis in 1907, the infrageneric classification has undergone many revisions, and even in the era of molecular phylogenetics there has been contradictory evidence. Although sometimes treated as two separate genera, Ribes and Grossularia (Berger 1924), the consensus has been to consider it a single genus, divided into a number of subgenera, the main ones of which are subgenus Ribes (currants) and subgenus Grossularia (gooseberries), further subdivided into sections. Janczewski (1907) considered six subgenera and eleven sections. Berger's twelve subgenera, based on two distinct genera (see Table 1), have subsequently been demoted to sections. Weigend (2007) elevated a number of sections to produce a taxonomy of seven subgenera: Ribes (sections Ribes, Heritiera, Berisia), Coreosma, Calobotrya (sections Calobotrya, Cerophyllum), Symphocalyx, Grossularioides, Grossularia, and Parilla.
Taxonomy, according to Berger, modified by Sinnott (1985):
Subgenus Ribes L. (currants) 8 sections
Section Berisia Spach (alpine currants)
Section Calobotrya (Spach) Jancz. (ornamental currants)
Section Coreosma (Spach) Jancz. (black currants)
Section Grossularioides (Jancz.) Rehd. (spiny or gooseberry-stemmed currants)
Section Heritiera Jancz. (dwarf or skunk currants)
Section Parilla Jancz. (Andine or South American currants)
Section Ribes L. (red currants)
Section Symphocalyx Berland. (golden currants)
Subgenus Grossularia (Mill.) Pers. (Gooseberries) 4 sections
Section Grossularia (Mill.) Nutt.
Section Robsonia Berland.
Section Hesperia A.Berger
Section Lobbia A. Berger
Some authors continued to treat Hesperia and Lobbia as subgenera. Early molecular studies suggested that subgenus Grossularia was actually embedded within subgenus Ribes. Analysis of combined molecular datasets confirms subgenus Grossularia as a monophyletic group with two main lineages: sect. Grossularia and another clade consisting of the glabrous gooseberries, including Hesperia, Lobbia and Robsonia. Other monophyletic groups identified were Calobotrya, Parilla, Symphocalyx and Berisia. However, sections Ribes, Coreosma and Heritiera were not well supported. Consequently, there is insufficient resolution to justify further taxonomic revision.
Species
There are around 200 species of Ribes. Selected species include:
Ribes alpinum
Ribes aureum
Ribes cereum
Ribes divaricatum
Ribes glandulosum
Ribes hirtellum
Ribes hudsonianum
Ribes inerme
Ribes lacustre
Ribes laurifolium
Ribes lobbii
Ribes montigenum
Ribes maximowiczii
Ribes nevadense
Ribes nigrum
Ribes oxyacanthoides
Ribes rubrum
Ribes sanguineum
Ribes speciosum
Ribes triste
Ribes uva-crispa
Distribution and habitat
Ribes is widely distributed through the Northern Hemisphere, and also extending south in the mountainous areas of South America. Species can be found in meadows or near streams.
Ecology
Currants are used as food plants by the larvae of some Lepidoptera species.
Cultivation
The genus Ribes includes the edible currants: blackcurrant, redcurrant, and white currant, as well as the European gooseberry, Ribes uva-crispa, and several hybrid varieties. It should not be confused with the dried currants used in cakes and puddings, which are from the Zante currant, a small-fruited cultivar of the grape Vitis vinifera. Ribes gives its name to the popular blackcurrant cordial Ribena.
The genus also includes the group of ornamental plants collectively known as the flowering currants, for instance, R. sanguineum.
United States
There are restrictions on growing some Ribes species in some U.S. states, as they are the main alternate host for white pine blister rust.
Uses
A number of species produce edible berries, some of which are categorized as currants and gooseberries.
Blackfoot people used blackcurrant root (Ribes hudsonianum) for the treatment of kidney diseases and menstrual and menopausal problems. The Cree used the fruit of Ribes glandulosum as a fertility enhancer to assist women in becoming pregnant.
European immigrants who settled in North America in the 18th century typically made wine from both red and white currants.
Ulmus glabra
Ulmus glabra, the wych elm or Scots elm, has the widest range of the European elm species, from Ireland eastwards to the Ural Mountains, and from the Arctic Circle south to the mountains of the Peloponnese and Sicily, where the species reaches its southern limit in Europe; it is also found in Iran. A large deciduous tree, it is essentially a montane species, growing at elevations up to , preferring sites with moist soils and high humidity. The tree can form pure forests in Scandinavia and occurs as far north as latitude 67°N at Beiarn Municipality in Norway. It has been successfully introduced as far north as Tromsø and Alta in northern Norway (70°N). It has also been successfully introduced to Narsarsuaq, near the southern tip of Greenland (61°N).
The tree was by far the most common elm in the north and west of the British Isles and is now acknowledged as the only indisputably British native elm species. Owing to its former abundance in Scotland, the tree is occasionally known as the Scotch or Scots elm; Loch Lomond is said to be a corruption of the Gaelic Lac Leaman interpreted by some as 'Lake of the Elms', 'leaman' being the plural form of leam or lem, 'elm'.
Closely related species, such as Bergmann's elm U. bergmanniana and Manchurian elm U. laciniata, native to northeast Asia, were once sometimes included in U. glabra; another close relative is the Himalayan or Kashmir elm U. wallichiana. Conversely, Ulmus elliptica from the Caucasus, considered a species by some authorities, is often listed as a regional form of Ulmus glabra.
Etymology
The word "wych" (also spelled "witch") comes from the Old English , meaning pliant or supple, which also gives definition to wicker and weak. Jacob George Strutt's 1822 book, Sylva Britannica attests that the Wych Elm was sometimes referred to as the "Wych Hazel", a name now applied to the unrelated species Hamamelis, commonly called "wych hazels".
Classification
Subspecies
Some botanists, notably Lindquist (1931), have proposed two subspecies:
U. glabra subsp. glabra in the south of the species' range: broad leaves with short tapering base and acute lobes; trees often with a short, forked trunk and a low, broad crown;
U. glabra subsp. montana (Stokes) Lindqvist in the north of the species' range (northern Britain, Scandinavia): leaves narrower, with a long tapering base and without acute lobes; trees commonly with a long single trunk and a tall, narrow crown.
Much overlap is seen between populations with these characters, and the distinction may reflect environmental influence rather than genetic variation; the subspecies are not accepted by Flora Europaea.
Description
The type sometimes reaches heights of , typically with a broad crown where open-grown, supported by a short bole up to diameter at breast height (DBH). Normally, root suckers are not seen; natural reproduction is by seed alone. The tree is notable for its very tough, supple young shoots, which are always without the corky ridges or 'wings' characteristic of many elms. The alternate leaves are deciduous, 6–17 cm long by 3–12 cm broad, usually obovate with an asymmetric base, the lobe often completely covering the short (<5 mm) petiole; the upper surface is rough. Leaves on juvenile or shade-grown shoots sometimes have three or more lobes near the apex. The perfect hermaphrodite flowers appear before the leaves in early spring, produced in clusters of 10–20; they are 4 mm across on 10 mm long stems, and being wind-pollinated, are apetalous. The fruit is a winged samara 20 mm long and 15 mm broad, with a single, round, 6 mm seed in the centre, maturing in late spring.
Pests and diseases
While the species is highly susceptible to Dutch elm disease, it is less favoured as a host by the elm bark beetles, which act as vectors. Research in Spain has indicated the presence of a triterpene, alnulin, rendering the tree bark less attractive to the beetle than the field elm, though at 87 μg/g dried bark, its concentration is not as effective as in Ulmus laevis (200 μg/g). Moreover, once the tree is dying, its bark is quickly colonized by the fungus Phoma, which radically reduces the amount of bark available for the beetle to breed on. In European trials, clones of apparently resistant trees were inoculated with the pathogen, causing 85–100% wilting and resulting in 68% mortality by the following year. DNA analysis by Cemagref (now Irstea) in France has determined that the genetic diversity within the species is very limited, making the chances of a resistant tree evolving rather remote.
A 300-year-old example growing in Grenzhammer, Ilmenau has allegedly been scientifically proven to be resistant to Dutch elm disease.
The Swedish Forest Tree Breeding Association at Källstorp produced triploid and tetraploid forms of the tree, but these proved no more resistant to Dutch elm disease than the normal diploid form.
In trials conducted in Italy, the tree was found to have a slight to moderate susceptibility to elm yellows, and a high susceptibility to the elm leaf beetle Xanthogaleruca luteola.
Cultivation
The wych elm is moderately shade-tolerant, but requires deep, rich soils as typically found along river valleys. The species is intolerant of acid soils and flooding, as it is of prolonged drought. Although rarely used as a street tree owing to its shape, it can be surprisingly tolerant of urban air pollution, constricted growing conditions, and severe pollarding.
As wych elm does not sucker from the roots, and any seedlings are often consumed by uncontrolled deer populations, regeneration is very restricted, limited to sprouts from the stumps of young trees. The resultant decline has been extreme, and the wych elm is now uncommon over much of its former range. It is best propagated from seed or by layering stooled stock plants, although softwood cuttings taken in early June will root fairly reliably under mist.
Wych elm was widely planted in Edinburgh in the 19th century as a park and avenue tree, and despite losses, it remains abundant there, regenerating through seedlings. It was introduced to New England in the 18th century, to Canada (as U. montana at the Dominion Arboretum, Ottawa) and Australia in the 19th century.
Uses
Lumber
Wych elm wood is prized by craftsmen for its colouring, its striking grain, its 'partridge-breast' or 'catspaw' markings, and when worked, its occasional iridescent greenish sheen or 'bloom'. The bosses on old trees produce the characteristic fissures and markings of 'burr elm' wood. Bosses fringed with shoots are burrs, whereas unfringed bosses are burls.
Medicine
In 18th century France, the inner bark of Ulmus glabra, orme pyramidale, had a brief reputation as a panacea;
"it was taken as a powder, as an extract, as an elixir, even in baths. It was good for the nerves, the chest, the stomach — what can I say? — it was a true panacea." It was this so-called "pyramidal elm bark" about which Michel-Philippe Bouvart famously quipped "Take it, Madame... and hurry up while it [still] cures." It still appeared in a pharmacopeia of 1893.
Notable trees
Possibly the oldest wych elm in Europe grew at Beauly Priory in Inverness-shire, Scotland; the tree succumbed to Dutch elm disease in 2022 and collapsed the following year. The priory was founded circa 1230, with the tree already in existence.
The UK Champion listed in the Tree Register of the British Isles was at Brahan in the Scottish Highlands (died 2021); it had a girth of 703 cm (2.23 m DBH) and a height of 24 m. Possibly the oldest specimen in England was found in 2018 in a field north of Hopton Castle in Shropshire; coppiced long ago, the tree had a bole girth of 6.3 m in 2018. The oldest specimen in Edinburgh is believed to be the tree (girth 5.2 m) in the former grounds of Duddingston House, now Duddingston Golf Course. Other notable specimens in Edinburgh are to be found in Learmonth Gardens and The Meadows.
In Europe, a large tree planted in 1620 grows at Bergemolo, 5 km south of Demonte in Piedmont, Italy (bole-girth 6.2 m, 2.0 m DBH, height 26 m, 2008). Other ancient specimens grow at Styria, in Austria, and at Grenzhammer, Germany (see Gallery). In 1998, over 700 healthy, mature trees were discovered on the upper slopes of Mount Šimonka in Slovakia, but they are believed to have survived courtesy of their isolation from disease-carrying beetles rather than any innate resistance; 50 clones of these trees were presented to the Prince of Wales for planting at his Highgrove Estate, and at Clapham, Yorkshire.
In literature
E. M. Forster refers 16 times in his novel Howards End to a particular wych elm that grew at his childhood home of Rooks Nest, Stevenage, Hertfordshire. This tree overhangs the house of the title and is said to have a "...girth that a dozen men could not have spanned..." Forster describes the tree as "...a comrade, bending over the house, strength and adventure in its roots." The wych elm of the novel had pigs' teeth embedded in the trunk by country people long ago, and it was said that chewing some of the bark could cure toothache. In keeping with the novel's epigraph, "Only connect...", the wych elm may be seen by some as a symbol of the connection of humans to the earth. Margaret Schlegel, the novel's protagonist, fears that any "....westerly gale might blow the wych elm down and bring the end of all things..." The tree is changed to a chestnut in the 1991 film adaptation of Howards End.
Cultivars
About 40 cultivars have been raised, although at least 30 are now probably lost to cultivation as a consequence of Dutch elm disease and/or other factors:
NB: 'Exoniensis', Exeter Elm, has traditionally been classified as a form of U. glabra, but its identity is now a matter of contention.
Hybrids and hybrid cultivars
U. glabra hybridises naturally with U. minor, producing elms of the Ulmus × hollandica group, from which have arisen a number of cultivars:
However, hybrids of U. glabra and U. pumila, the Siberian elm, have not been observed in the field and have been achieved only in the laboratory, though the ranges of the two species, the latter introduced by man, overlap in parts of Southern Europe, notably Spain. A crossing in Russia of U. glabra and U. pumila produced the hybrid named Ulmus × arbuscula; a similar crossing was cloned ('FL025') by the Istituto per la Protezione delle Piante (IPP), Florence, as part of the Italian elm breeding programme circa 2000.
Hybrids with U. glabra in their ancestry have featured strongly in recent artificial hybridization experiments in Europe, notably at Wageningen in the Netherlands, and a number of hybrid cultivars have been commercially released since 1960. The earlier trees were raised in response to the initial Dutch elm disease pandemic that afflicted Europe after the First World War, and were to prove vulnerable to the much more virulent strain of the disease that arrived in the late 1960s. However, further research eventually produced several trees effectively immune to disease, which were released after 1989.
Arno, Clusius, Columella, Commelin, Den Haag, Dodoens, Groeneveld, Homestead, Lobel, Nanguen = , Pioneer, Plinio, Regal, San Zanobi, Urban, Wanoux = .
Accessions
North America
Arnold Arboretum, US. Acc. no. 391–2001, wild collected in Georgia
Bartlett Tree Experts, US. Acc. nos. 1505, 5103, origin undisclosed
Dawes Arboretum, US. 6 trees, no acc. details available
Missouri Botanical Garden, US. Acc. nos. 1969–6164, 1986–0160
Morton Arboretum, US. Acc. nos. 591–54, 255–81, and by its synonym U. sukaczevii, acc. nos. 949–73, 181–76
Europe
[Held in nearly all arboreta]
Australasia
Eastwoodhill Arboretum, Gisborne, New Zealand. 8 trees, details not known.
In art
Ulmaceae

The Ulmaceae are a family of flowering plants that includes the elms (genus Ulmus) and the zelkovas (genus Zelkova). Members of the family are widely distributed throughout the north temperate zone, and have a scattered distribution elsewhere except for Australasia.
The family was formerly sometimes treated as including the hackberries (Celtis and allies), but an analysis by the Angiosperm Phylogeny Group suggests that these genera are better placed in the related family Cannabaceae. The family is generally considered to include about 7 genera and 45 species. Some classifications also include the genus Ampelocera.
Description
The family is a group of evergreen or deciduous trees and shrubs with mucilaginous substances in leaf and bark tissue. Leaves are usually alternate on the stems. The leaf blades are simple (not compound), with entire (smooth) or variously toothed margins, and often have an asymmetrical base. The flowers are small and either bisexual or unisexual. The fruit is an indehiscent samara, nut, or drupe.
Uses
Ulmus provides important timber trees mostly for furniture.
Phylogeny
Modern molecular phylogenetics suggest the following relationships:
Path integral formulation

The path integral formulation is a description in quantum mechanics that generalizes the stationary action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude.
This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization. Unlike previous methods, the path integral allows one to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals (for interactions of a certain type, these are coordinate space or Feynman path integrals), than the Hamiltonian. Possible downsides of the approach include that unitarity (this is related to conservation of probability; the probabilities of all physically possible outcomes must add up to one) of the S-matrix is obscure in the formulation. The path-integral approach has proven to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away.
The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s, which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks.
The path integral has impacted a wide array of sciences, including polymer physics, quantum field theory, string theory and cosmology. In physics, it is a foundation for lattice gauge theory and quantum chromodynamics. It has been called the "most powerful formula in physics", with Stephen Wolfram also declaring it to be the "fundamental mathematical construct of modern quantum mechanics and quantum field theory".
The basic idea of the path integral formulation can be traced back to Norbert Wiener, who introduced the Wiener integral for solving problems in diffusion and Brownian motion. This idea was extended to the use of the Lagrangian in quantum mechanics by Paul Dirac, whose 1933 paper gave birth to path integral formulation. The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier in his doctoral work under the supervision of John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian) as a starting point.
Quantum action principle
In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator (multiplied by the negative imaginary unit, −i/ħ). For states with a definite energy, this is a statement of the de Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle.
The Hamiltonian in classical mechanics is derived from a Lagrangian, which is a more fundamental quantity in the context of special relativity. The Hamiltonian indicates how to march forward in time, but the time is different in different reference frames. The Lagrangian is a Lorentz scalar, while the Hamiltonian is the time component of a four-vector. So the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics.
The Hamiltonian is a function of the position and momentum at one time, and it determines the position and momentum a little later. The Lagrangian is a function of the position now and the position a little later (or, equivalently for infinitesimal time separations, it is a function of the position and velocity). The relation between the two is by a Legendre transformation, and the condition that determines the classical equations of motion (the Euler–Lagrange equations) is that the action has an extremum.
In quantum mechanics, the Legendre transform is hard to interpret, because the motion is not over a definite trajectory. In classical mechanics, with discretization in time, the Legendre transform becomes

 ε H = p(t) (q(t + ε) − q(t)) − ε L

and

 p = ∂L/∂q̇,

where the partial derivative with respect to q̇ holds q(t + ε) fixed. The inverse Legendre transform is

 ε L = ε p q̇ − ε H,

where

 q̇ = ∂H/∂p,

and the partial derivative now is with respect to p at fixed q.
In quantum mechanics, the state is a superposition of different states with different values of q, or different values of p, and the quantities p and q can be interpreted as noncommuting operators. The operator p is only definite on states that are indefinite with respect to q. So consider two states separated in time and act with the operator corresponding to the Lagrangian:

 e^{iεL}

If the multiplications implicit in this formula are reinterpreted as matrix multiplications, the first factor is

 e^{−i p q(t)}

and if this is also interpreted as a matrix multiplication, the sum over all states integrates over all q(t), and so it takes the Fourier transform in q(t) to change basis to p(t). That is the action on the Hilbert space – change basis to p at time t.

Next comes

 e^{−iε H(q, p)}

or evolve an infinitesimal time into the future.

Finally, the last factor in this interpretation is

 e^{i p q(t + ε)}

which means change basis back to q at a later time.

This is not very different from just ordinary time evolution: the H factor contains all the dynamical information – it pushes the state forward in time. The first part and the last part are just Fourier transforms to change to a pure q basis from an intermediate p basis.
Another way of saying this is that since the Hamiltonian is naturally a function of p and q, exponentiating this quantity and changing basis from p to q at each step allows the matrix element of H to be expressed as a simple function along each path. This function is the quantum analog of the classical action. This observation is due to Paul Dirac.
Dirac further noted that one could square the time-evolution operator in the S representation:

 e^{2iεS}

and this gives the time-evolution operator between time t and time t + 2ε. While in the H representation the quantity that is being summed over the intermediate states is an obscure matrix element, in the S representation it is reinterpreted as a quantity associated to the path. In the limit that one takes a large power of this operator, one reconstructs the full quantum evolution between two states, the early one with a fixed value of q(0) and the later one with a fixed value of q(t). The result is a sum over paths with a phase, which is the quantum action.
Classical limit
Crucially, Dirac identified the effect of the classical limit on the quantum form of the action principle:
That is, in the limit of action that is large compared to the Planck constant – the classical limit – the path integral is dominated by solutions that are in the neighborhood of stationary points of the action. The classical path arises naturally in the classical limit.
Feynman's interpretation
Dirac's work did not provide a precise prescription to calculate the sum over paths, and he did not show that one could recover the Schrödinger equation or the canonical commutation relations from this rule. This was done by Feynman.
Feynman showed that Dirac's quantum action was, for most cases of interest, simply equal to the classical action, appropriately discretized. This means that the classical action is the phase acquired by quantum evolution between two fixed endpoints. He proposed to recover all of quantum mechanics from the following postulates:
The probability for an event is given by the squared modulus of a complex number called the "probability amplitude".
The probability amplitude is given by adding together the contributions of all paths in configuration space.
The contribution of a path is proportional to e^{iS/ħ}, where S is the action given by the time integral of the Lagrangian along the path.
In order to find the overall probability amplitude for a given process, then, one adds up, or integrates, the amplitude of the 3rd postulate over the space of all possible paths of the system in between the initial and final states, including those that are absurd by classical standards. In calculating the probability amplitude for a single particle to go from one space-time coordinate to another, it is correct to include paths in which the particle describes elaborate curlicues, curves in which the particle shoots off into outer space and flies back again, and so forth. The path integral assigns to all these amplitudes equal weight but varying phase, or argument of the complex number. Contributions from paths wildly different from the classical trajectory may be suppressed by interference (see below).
Feynman showed that this formulation of quantum mechanics is equivalent to the canonical approach to quantum mechanics when the Hamiltonian is at most quadratic in the momentum. An amplitude computed according to Feynman's principles will also obey the Schrödinger equation for the Hamiltonian corresponding to the given action.
The path integral formulation of quantum field theory represents the transition amplitude (corresponding to the classical correlation function) as a weighted sum of all possible histories of the system from the initial to the final state. A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude.
Path integral in quantum mechanics
Time-slicing derivation
One common approach to deriving the path integral formula is to divide the time interval into small pieces. Once this is done, the Trotter product formula tells us that the noncommutativity of the kinetic and potential energy operators can be ignored.
For a particle in a smooth potential, the path integral is approximated by zigzag paths, which in one dimension is a product of ordinary integrals. For the motion of the particle from position x_a at time t_a to x_b at time t_b, the time sequence

 t_a = t_0 < t_1 < ⋯ < t_{n−1} < t_n = t_b

can be divided up into n smaller segments t_j − t_{j−1}, where j = 1, …, n, of fixed duration

 ε = Δt = (t_b − t_a)/n.

This process is called time-slicing.

An approximation for the path integral can be computed as proportional to

 ∫ dx_1 ⋯ dx_{n−1} exp( (i/ħ) Σ_{j=1}^{n} ε L(x̃_j, (x_j − x_{j−1})/ε) )

where L is the Lagrangian of the one-dimensional system with position variable x(t) and velocity v = ẋ(t) considered (see below), and dx_j corresponds to the position at the jth time step, if the time integral is approximated by a sum of n terms.

In the limit n → ∞, this becomes a functional integral, which, apart from a nonessential factor, is directly the product of the probability amplitudes (more precisely, since one must work with a continuous spectrum, the respective densities) to find the quantum mechanical particle at x_a in the initial state and at x_b in the final state.

Actually L is the classical Lagrangian of the one-dimensional system considered,

 L(x, ẋ) = (m/2) ẋ² − V(x),

and the abovementioned "zigzagging" corresponds to the appearance of the terms

 ε L(x̃_j, (x_j − x_{j−1})/ε)

in the Riemann sum approximating the time integral, which are finally integrated over x_1 to x_{n−1} with the integration measure dx_1 ⋯ dx_{n−1}; x̃_j is an arbitrary value of the interval corresponding to j, e.g. its center, (x_j + x_{j−1})/2.
Thus, in contrast to classical mechanics, not only does the stationary path contribute, but actually all virtual paths between the initial and the final point also contribute.
Path integral
In terms of the wave function in the position representation, the path integral formula reads as follows:

 ψ(x, t) = (1/Z) ∫_{x(0) = x} Dx e^{iS[x, ẋ]/ħ} ψ₀(x(t))

where Dx denotes integration over all paths x(·) with x(0) = x, and where Z is a normalization factor. Here S is the action, given by

 S[x, ẋ] = ∫ dt L(x(t), ẋ(t))
Free particle
The path integral representation gives the quantum amplitude to go from point x to point y as an integral over all paths. For a free-particle action (for simplicity let m = 1, ħ = 1)

 S = ∫ (ẋ²/2) dt,

the integral can be evaluated explicitly.

To do this, it is convenient to start without the factor i in the exponential, so that large deviations are suppressed by small numbers, not by cancelling oscillatory contributions. The amplitude (or kernel) reads:

 K(x − y; T) = ∫_{x(0) = x}^{x(T) = y} exp(−∫₀ᵀ (ẋ²/2) dt) Dx
Splitting the integral into time slices:

 K(x − y; T) = ∫ ∏_t exp(−(1/2) ((x(t + ε) − x(t))/ε)² ε) Dx

where the Dx is interpreted as a finite collection of integrations at each integer multiple of ε. Each factor in the product is a Gaussian as a function of x(t + ε) centered at x(t) with variance ε. The multiple integrals are a repeated convolution of this Gaussian G_ε with copies of itself at adjacent times:

 K(x − y; T) = G_ε ∗ G_ε ∗ ⋯ ∗ G_ε

where the number of convolutions is T/ε. The result is easy to evaluate by taking the Fourier transform of both sides, so that the convolutions become multiplications:

 K̃(p; T) = G̃_ε(p)^{T/ε}

The Fourier transform of the Gaussian G_ε is another Gaussian of reciprocal variance:

 G̃_ε(p) = e^{−ε p²/2}

and the result is

 K̃(p; T) = e^{−T p²/2}

The Fourier transform gives back K, and it is a Gaussian again with reciprocal variance:

 K(x − y; T) ∝ e^{−(x − y)²/(2T)}
The proportionality constant is not really determined by the time-slicing approach, only the ratio of values for different endpoint choices is determined. The proportionality constant should be chosen to ensure that between each two time slices the time evolution is quantum-mechanically unitary, but a more illuminating way to fix the normalization is to consider the path integral as a description of a stochastic process.
The result has a probability interpretation. The sum over all paths of the exponential factor can be seen as the sum over each path of the probability of selecting that path. The probability is the product over each segment of the probability of selecting that segment, so that each segment is probabilistically independently chosen. The fact that the answer is a Gaussian spreading linearly in time is the central limit theorem, which can be interpreted as the first historical evaluation of a statistical path integral.
The probability interpretation gives a natural normalization choice. The path integral should be defined so that

 ∫ K(x − y; T) dy = 1.

This condition normalizes the Gaussian and produces a kernel that obeys the diffusion equation:

 ∂K/∂t = (1/2) ∇² K
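Both properties can be verified directly. The following sketch (illustrative, assuming NumPy; grid spacing and step sizes are arbitrary choices) checks the normalization and the diffusion equation with finite differences:

```python
import numpy as np

# Check that K(x; T) = exp(-x^2/(2T)) / sqrt(2 pi T) integrates to 1 and
# satisfies dK/dT = (1/2) d^2 K / dx^2, using finite differences.
# Grid spacing and step sizes are illustrative choices.

def K(x, T):
    return np.exp(-x**2 / (2*T)) / np.sqrt(2*np.pi*T)

x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]
T, dT = 1.0, 1e-5

norm = np.sum(K(x, T)) * dx                # should be 1

i = np.searchsorted(x, 0.5)                # test point near x = 0.5
dK_dT = (K(x[i], T + dT) - K(x[i], T - dT)) / (2*dT)
d2K_dx2 = (K(x[i+1], T) - 2*K(x[i], T) + K(x[i-1], T)) / dx**2
print(norm, dK_dT, 0.5 * d2K_dx2)          # the last two agree
```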
For oscillatory path integrals, ones with an i in the numerator, the time slicing produces convolved Gaussians, just as before. Now, however, the convolution product is marginally singular, since it requires careful limits to evaluate the oscillating integrals. To make the factors well defined, the easiest way is to add a small imaginary part to the time increment ε. This is closely related to Wick rotation. Then the same convolution argument as before gives the propagation kernel:

 K(x − y; T) ∝ e^{i(x − y)²/(2T)}

which, with the same normalization as before (not the sum-squares normalization – this function has a divergent norm), obeys a free Schrödinger equation:

 ∂K/∂t = (i/2) ∇² K

This means that any superposition of Ks will also obey the same equation, by linearity. Defining

 ψ(y; T) = ∫ ψ₀(x) K(x − y; T) dx,

then ψ obeys the free Schrödinger equation just as K does:

 ∂ψ/∂t = (i/2) ∇² ψ
Simple harmonic oscillator
The Lagrangian for the simple harmonic oscillator is

 L = (m/2) ẋ² − (mω²/2) x²

Write its trajectory x(t) as the classical trajectory plus some perturbation, x(t) = x_c(t) + δx(t), and the action as S = S_c + δS. The classical trajectory can be written as

 x_c(t) = x_i (sin ω(t_f − t) / sin ω(t_f − t_i)) + x_f (sin ω(t − t_i) / sin ω(t_f − t_i))

This trajectory yields the classical action

 S_c = (mω / (2 sin ω(t_f − t_i))) ((x_i² + x_f²) cos ω(t_f − t_i) − 2 x_i x_f)

Next, expand the deviation from the classical path as a Fourier series, δx(t) = Σ_n a_n sin(nπ(t − t_i)/(t_f − t_i)), and calculate the contribution to the action δS, which gives

 δS = Σ_n (m/2) a_n² ((nπ/(t_f − t_i))² − ω²) ((t_f − t_i)/2)

This means that the propagator is

 K(x_f, t_f; x_i, t_i) = Q(t_f − t_i) e^{iS_c/ħ}

for some normalization

 Q(t) ∝ ∏_{n=1}^∞ (1 − (ωt/(nπ))²)^{−1/2}

Using the infinite-product representation of the sinc function,

 ∏_{n=1}^∞ (1 − (x/(nπ))²) = (sin x)/x,

the propagator can be written as

 K(x_f, t_f; x_i, t_i) = √(mω / (2πiħ sin ω(t_f − t_i))) e^{iS_c/ħ}

Let T = t_f − t_i. One may write this propagator in terms of energy eigenstates as

 K(x_f, t_f; x_i, t_i) = Σ_n ψ_n(x_f) ψ_n*(x_i) e^{−iE_n T/ħ}
Using the identities and , this amounts to
One may absorb all terms after the first into , thereby obtaining
One may finally expand in powers of : All terms in this expansion get multiplied by the factor in the front, yielding terms of the form
Comparison to the above eigenstate expansion yields the standard energy spectrum for the simple harmonic oscillator,

 E_n = (n + 1/2) ħω
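This spectrum can be cross-checked independently of the path integral, for instance by diagonalizing a finite-difference Hamiltonian (an illustrative NumPy sketch with ħ = m = 1; grid parameters are arbitrary choices):

```python
import numpy as np

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + (1/2) omega^2 x^2
# (hbar = m = 1) diagonalized on a grid; the low-lying eigenvalues should
# approach (n + 1/2) omega.  Grid parameters are illustrative choices.

omega = 1.0
n_pts, L_box = 2000, 20.0
x = np.linspace(-L_box/2, L_box/2, n_pts)
dx = x[1] - x[0]

diag = 1.0/dx**2 + 0.5 * omega**2 * x**2      # kinetic + potential diagonal
off = np.full(n_pts - 1, -0.5/dx**2)          # second-difference couplings
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:5]
print(E)                                      # close to 0.5, 1.5, 2.5, 3.5, 4.5
```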
Coulomb potential
Feynman's time-sliced approximation does not, however, exist for the most important quantum-mechanical path integrals of atoms, due to the singularity of the Coulomb potential at the origin. Only after replacing the time t by another path-dependent pseudo-time parameter

 s = ∫ dt / r(t)

is the singularity removed and a time-sliced approximation obtained that is exactly integrable, since it can be made harmonic by a simple coordinate transformation, as discovered in 1979 by İsmail Hakkı Duru and Hagen Kleinert. The combination of a path-dependent time transformation and a coordinate transformation is an important tool to solve many path integrals and is called generically the Duru–Kleinert transformation.
The Schrödinger equation
The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path-integral over infinitesimally separated times.
Since the time separation is infinitesimal and the cancelling oscillations become severe for large values of ẋ, the path integral has most weight for y close to x. In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. (This separation of the kinetic and potential energy terms in the exponent is essentially the Trotter product formula.) The exponential of the action is

 e^{−iε V(x)} e^{iε ẋ²/2}

The first term rotates the phase of ψ(x) locally by an amount proportional to the potential energy. The second term is the free particle propagator, corresponding to i times a diffusion process. To lowest order in ε they are additive; in any case one has with (1):

 ψ(y; t + ε) ≈ ∫ ψ(x; t) e^{−iε V(x)} e^{i(x − y)²/(2ε)} dx

As mentioned, the spread in ψ is diffusive from the free particle propagation, with an extra infinitesimal rotation in phase that slowly varies from point to point from the potential:

 ∂ψ/∂t = i ( (1/2) ∇² ψ − V(x) ψ )
and this is the Schrödinger equation. The normalization of the path integral needs to be fixed in exactly the same way as in the free particle case. An arbitrary continuous potential does not affect the normalization, although singular potentials require careful treatment.
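The short-time factorization into a potential phase and a free kinetic step is, as noted, the Trotter product formula. The following sketch (illustrative, assuming NumPy; it uses the imaginary-time version, in which repeated Trotter steps project any state onto the ground state) shows the idea for the harmonic oscillator:

```python
import numpy as np

# Trotter splitting exp(-eps H) ~ exp(-eps V/2) exp(-eps K) exp(-eps V/2)
# applied repeatedly in imaginary time: the state relaxes to the ground
# state, here of the harmonic oscillator V = x^2/2 (hbar = m = omega = 1),
# whose ground-state energy is 1/2.  All parameters are illustrative.

n, L_box = 512, 20.0
x = np.linspace(-L_box/2, L_box/2, n, endpoint=False)
dx = L_box / n
k = 2*np.pi * np.fft.fftfreq(n, d=dx)        # momentum grid for the FFT
V = 0.5 * x**2
eps, steps = 0.01, 2000

psi = np.exp(-(x - 1.0)**2)                  # arbitrary starting state
half_V = np.exp(-0.5 * eps * V)              # exp(-eps V / 2)
kin = np.exp(-0.5 * eps * k**2)              # exp(-eps k^2 / 2)

for _ in range(steps):
    psi = half_V * np.fft.ifft(kin * np.fft.fft(half_V * psi)).real
    psi /= np.sqrt(np.sum(psi**2) * dx)      # renormalize each step

# One more (unnormalized) step: the norm decays as exp(-eps E0).
phi = half_V * np.fft.ifft(kin * np.fft.fft(half_V * psi)).real
E0 = -np.log(np.sqrt(np.sum(phi**2) * dx)) / eps
print(E0)                                    # approaches 0.5
```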
Equations of motion
Since the states obey the Schrödinger equation, the path integral must reproduce the Heisenberg equations of motion for the averages of and variables, but it is instructive to see this directly. The direct approach shows that the expectation values calculated from the path integral reproduce the usual ones of quantum mechanics.
Start by considering the path integral with some fixed initial state
Now x(t) at each separate time is a separate integration variable. So it is legitimate to change variables in the integral by shifting: x(t) → x(t) + ε(t) where ε(t) is a different shift at each time but ε(0) = ε(T) = 0, since the endpoints are not integrated:
The change in the integral from the shift is, to first infinitesimal order in ε:

 ∫ ( ∂L/∂x ε + ∂L/∂ẋ ε̇ ) dt

which, integrating by parts in t, gives:

 ∫ ( ∂L/∂x − d/dt (∂L/∂ẋ) ) ε(t) dt

But this was just a shift of integration variables, which doesn't change the value of the integral for any choice of ε(t). The conclusion is that this first order variation is zero for an arbitrary initial state and at any arbitrary point in time:

 ⟨ ∂L/∂x − d/dt (∂L/∂ẋ) ⟩ = 0
this is the Heisenberg equation of motion.
If the action contains terms that multiply ẋ and x at the same moment in time, the manipulations above are only heuristic, because the multiplication rules for these quantities are just as noncommuting in the path integral as they are in the operator formalism.
Stationary-phase approximation
If the variation in the action exceeds ħ by many orders of magnitude, we typically have destructive interference other than in the vicinity of those trajectories satisfying the Euler–Lagrange equation, which is now reinterpreted as the condition for constructive interference. This can be shown using the method of stationary phase applied to the propagator. As ħ decreases, the exponential in the integral oscillates rapidly in the complex domain for any change in the action. Thus, in the limit that ħ goes to zero, only points where the classical action does not vary contribute to the propagator.
Canonical commutation relations
The formulation of the path integral does not make it clear at first sight that the quantities x and p do not commute. In the path integral, these are just integration variables and they have no obvious ordering. Feynman discovered that the non-commutativity is still present.
To see this, consider the simplest path integral, the Brownian walk. This is not yet quantum mechanics, so in the path-integral the action is not multiplied by i:

 S = ∫ (ẋ²/2) dt

The quantity x(t) is fluctuating, and the derivative is defined as the limit of a discrete difference.

 ẋ = (x(t + ε) − x(t))/ε

The distance that a random walk moves is proportional to √t, so that:

 x(t + ε) − x(t) ≈ √ε
This shows that the random walk is not differentiable, since the ratio that defines the derivative diverges with probability one.
The quantity x ẋ is ambiguous, with two possible meanings:

 [1] x ẋ = x(t) (x(t + ε) − x(t))/ε
 [2] x ẋ = x(t + ε) (x(t + ε) − x(t))/ε

In elementary calculus, the two are only different by an amount that goes to 0 as ε goes to 0. But in this case, the difference between the two is not 0:

 [2] − [1] = (x(t + ε) − x(t))²/ε

Let

 u = (x(t + ε) − x(t))²/ε

Then u is a rapidly fluctuating statistical quantity, whose average value is 1, i.e. a normalized "Gaussian process". The fluctuations of such a quantity can be described by a statistical Lagrangian, and the equations of motion for u derived from extremizing the corresponding action just set it equal to 1. In physics, such a quantity is "equal to 1 as an operator identity". In mathematics, it "weakly converges to 1". In either case, it is 1 in any expectation value, or when averaged over any interval, or for all practical purpose.

Defining the time order to be the operator order:

 [x, ẋ] = x ẋ − ẋ x = 1
This is called the Itō lemma in stochastic calculus, and the (euclideanized) canonical commutation relations in physics.
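The Itō correction can be seen directly in a simulation: the two discretizations of x ẋ differ by the quadratic variation Σ(Δx)², which converges to the elapsed time T for a standard Brownian path (illustrative NumPy sketch; step count and seed are arbitrary choices):

```python
import numpy as np

# The two discretizations of x * xdot differ by sum((dx)^2), the quadratic
# variation, which converges to the elapsed time T for a Brownian path.
# Step count and seed are illustrative choices.

rng = np.random.default_rng(0)
T, n = 1.0, 200000
eps = T / n
dxs = rng.normal(0.0, np.sqrt(eps), n)       # increments of variance eps
xpath = np.concatenate(([0.0], np.cumsum(dxs)))

ito = np.sum(xpath[:-1] * dxs)               # x(t)     * (x(t+eps) - x(t))
anti = np.sum(xpath[1:] * dxs)               # x(t+eps) * (x(t+eps) - x(t))
qv = np.sum(dxs**2)
print(anti - ito, qv)                        # identical; both close to T = 1
```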
For a general statistical action, a similar argument shows that
and in quantum mechanics, the extra imaginary unit in the action converts this to the canonical commutation relation,

 [x, p] = iħ
Particle in curved space
For a particle in curved space the kinetic term depends on the position, and the above time slicing cannot be applied, this being a manifestation of the notorious operator ordering problem in Schrödinger quantum mechanics. One may, however, solve this problem by transforming the time-sliced flat-space path integral to curved space using a multivalued coordinate transformation (a nonholonomic mapping).
Measure-theoretic factors
Sometimes (e.g. a particle moving in curved space) we also have measure-theoretic factors in the functional integral:

 ∫ μ[x] e^{iS[x]} Dx
This factor is needed to restore unitarity.
For instance, if

 μ[x] = ∏_t √g(x(t)),

then it means that each spatial slice is multiplied by the measure √g. This measure cannot be expressed as a functional multiplying the Dx measure because they belong to entirely different classes.
Expectation values and matrix elements
Matrix elements of the kind ⟨x_f | x̂(t₁) | x_i⟩ take the form

 ⟨x_f | x̂(t₁) | x_i⟩ = ∫_{x(0) = x_i}^{x(t) = x_f} Dx x(t₁) e^{iS[x]}.

This generalizes to multiple operators, for example

 ⟨x_f | x̂(t₁) x̂(t₂) | x_i⟩ = ∫_{x(0) = x_i}^{x(t) = x_f} Dx x(t₁) x(t₂) e^{iS[x]},

and to the general vacuum expectation value (in the large time limit)

 ⟨0 | x̂(t₁) ⋯ x̂(t_n) | 0⟩ = (∫ Dx x(t₁) ⋯ x(t_n) e^{iS[x]}) / (∫ Dx e^{iS[x]}).
Euclidean path integrals
It is very common in path integrals to perform a Wick rotation from real to imaginary times. In the setting of quantum field theory, the Wick rotation changes the geometry of space-time from Lorentzian to Euclidean; as a result, Wick-rotated path integrals are often called Euclidean path integrals.
Wick rotation and the Feynman–Kac formula
If we replace t by −it, the time-evolution operator e^{−itĤ/ħ} is replaced by e^{−tĤ/ħ}. (This change is known as a Wick rotation.) If we repeat the derivation of the path-integral formula in this setting, we obtain

 ψ(x, t) = (1/Z) ∫_{x(0) = x} Dx e^{−S_E[x, ẋ]/ħ} ψ₀(x(t)),

where S_E is the Euclidean action, given by

 S_E[x, ẋ] = ∫ dt ( (m/2) ẋ(t)² + V(x(t)) ).

Note the sign change between this and the normal action, where the potential energy term is negative. (The term Euclidean is from the context of quantum field theory, where the change from real to imaginary time changes the space-time geometry from Lorentzian to Euclidean.)

Now, the contribution of the kinetic energy to the path integral is as follows:

 (1/Z) ∫ Dx e^{−(m/2ħ) ∫ ẋ(t)² dt} F[x]

where F[x] includes all the remaining dependence of the integrand on the path. This integral has a rigorous mathematical interpretation as integration against the Wiener measure, denoted μ_x. The Wiener measure, constructed by Norbert Wiener, gives a rigorous foundation to Einstein's mathematical model of Brownian motion. The subscript x indicates that the measure is supported on paths x(·) with x(0) = x.

We then have a rigorous version of the Feynman path integral, known as the Feynman–Kac formula:

 ψ(x, t) = ∫ e^{−(1/ħ) ∫₀ᵗ V(x(s)) ds} ψ₀(x(t)) dμ_x(x),

where now ψ(x, t) satisfies the Wick-rotated version of the Schrödinger equation,

 ħ ∂ψ/∂t = ( (ħ²/2m) ∇² − V(x) ) ψ(x, t).
Although the Wick-rotated Schrödinger equation does not have a direct physical meaning, interesting properties of the Schrödinger operator can be extracted by studying it.
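The Feynman–Kac average can be sampled directly. For Brownian paths started at the origin and V(x) = x²/2 (with ħ = m = 1), the expectation of exp(−∫₀ᵀ V(B_s) ds) has the closed form (cosh T)^{−1/2} (a case of the Cameron–Martin formula); the following Monte Carlo sketch (illustrative, assuming NumPy; path counts and step sizes are arbitrary choices) reproduces it:

```python
import numpy as np

# Monte Carlo version of the Feynman-Kac average for V(x) = x^2/2 with
# Brownian paths started at 0 (hbar = m = 1):
#   E[ exp(-int_0^T B_s^2/2 ds) ] = 1 / sqrt(cosh T)   (Cameron-Martin).
# Path counts and step sizes are illustrative choices.

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 200, 20000
eps = T / n_steps

incs = rng.normal(0.0, np.sqrt(eps), size=(n_paths, n_steps))
B = np.cumsum(incs, axis=1)                  # Brownian paths, B_0 = 0

# Riemann-sum approximation of int_0^T B_s^2/2 ds along each path.
action = 0.5 * np.sum(B**2, axis=1) * eps
estimate = np.mean(np.exp(-action))

exact = 1.0 / np.sqrt(np.cosh(T))
print(estimate, exact)                       # agree to Monte Carlo accuracy
```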
Much of the study of quantum field theories from the path-integral perspective, in both the mathematics and physics literatures, is done in the Euclidean setting, that is, after a Wick rotation. In particular, there are various results showing that if a Euclidean field theory with suitable properties can be constructed, one can then undo the Wick rotation to recover the physical, Lorentzian theory. On the other hand, it is much more difficult to give a meaning to path integrals (even Euclidean path integrals) in quantum field theory than in quantum mechanics.
Path integral and the partition function
The path integral is just the generalization of the integral above to all quantum mechanical problems—

$$Z = \int e^{i S[\mathbf{x}]/\hbar}\, \mathcal{D}\mathbf{x},$$

where $S[\mathbf{x}]$ is the action of the classical problem in which one investigates the path starting at time $t = 0$ and ending at time $t = T$, and $\mathcal{D}\mathbf{x}$ denotes the integration measure over all paths. In the classical limit, $S \gg \hbar$, the path of minimum action dominates the integral, because the phase of any path away from this fluctuates rapidly and different contributions cancel.
The connection with statistical mechanics follows. Considering only paths that begin and end in the same configuration, perform the Wick rotation $it \to \tau$, i.e., make time imaginary, and integrate over all possible beginning-ending configurations. The Wick-rotated path integral—described in the previous subsection, with the ordinary action replaced by its "Euclidean" counterpart—now resembles the partition function of statistical mechanics defined in a canonical ensemble with inverse temperature proportional to imaginary time. Strictly speaking, though, this is the partition function for a statistical field theory.
Clearly, such a deep analogy between quantum mechanics and statistical mechanics cannot be dependent on the formulation. In the canonical formulation, one sees that the unitary evolution operator of a state is given by

$$|\alpha; t\rangle = e^{-iHt/\hbar}\, |\alpha; 0\rangle,$$

where the state $|\alpha\rangle$ is evolved from time $t = 0$. If one makes a Wick rotation here, and finds the amplitude to go from any state, back to the same state, in (imaginary) time $\beta\hbar$, summed over all states, the result is

$$Z = \operatorname{Tr}\left[ e^{-\beta H} \right],$$
which is precisely the partition function of statistical mechanics for the same system at the temperature quoted earlier. One aspect of this equivalence was also known to Erwin Schrödinger, who remarked that the equation named after him looked like the diffusion equation after Wick rotation. Note, however, that the Euclidean path integral is actually in the form of a classical statistical mechanics model.
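Schrödinger's remark can be made explicit for the free equation: under the substitution $t \to -i\tau$,

```latex
i\hbar\, \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2}
\quad \xrightarrow{\; t \,\to\, -i\tau \;} \quad
\frac{\partial \psi}{\partial \tau} = \frac{\hbar}{2m} \frac{\partial^2 \psi}{\partial x^2},
```

which is a diffusion (heat) equation with diffusion constant $D = \hbar/2m$.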
Quantum field theory
Both the Schrödinger and Heisenberg approaches to quantum mechanics single out time and are not in the spirit of relativity. For example, the Heisenberg approach requires that scalar field operators obey the commutation relation

$$[\phi(x), \partial_t \phi(y)] = i \delta^3(x - y)$$

for two simultaneous spatial positions $x$ and $y$, and this is not a relativistically invariant concept. The results of a calculation are covariant, but the symmetry is not apparent in intermediate stages. If naive field-theory calculations did not produce infinite answers in the continuum limit, this would not have been such a big problem – it would just have been a bad choice of coordinates. But the lack of symmetry means that the infinite quantities must be cut off, and the bad coordinates make it nearly impossible to cut off the theory without spoiling the symmetry. This makes it difficult to extract the physical predictions, which require a careful limiting procedure.
The problem of lost symmetry also appears in classical mechanics, where the Hamiltonian formulation also superficially singles out time. The Lagrangian formulation makes the relativistic invariance apparent. In the same way, the path integral is manifestly relativistic. It reproduces the Schrödinger equation, the Heisenberg equations of motion, and the canonical commutation relations and shows that they are compatible with relativity. It extends the Heisenberg-type operator algebra to operator product rules, which are new relations difficult to see in the old formalism.
Further, different choices of canonical variables lead to very different-seeming formulations of the same theory. The transformations between the variables can be very complicated, but the path integral makes them into reasonably straightforward changes of integration variables. For these reasons, the Feynman path integral has made earlier formalisms largely obsolete.
The price of a path integral representation is that the unitarity of a theory is no longer self-evident, but it can be proven by changing variables to some canonical representation. The path integral itself also deals with larger mathematical spaces than is usual, which requires more careful mathematics, not all of which has been fully worked out. The path integral historically was not immediately accepted, partly because it took many years to incorporate fermions properly. This required physicists to invent an entirely new mathematical object – the Grassmann variable – which also allowed changes of variables to be done naturally, as well as allowing constrained quantization.
The integration variables in the path integral are subtly non-commuting. The value of the product of two field operators at what looks like the same point depends on how the two points are ordered in space and time. This makes some naive identities fail.
Propagator
In relativistic theories, there is both a particle and field representation for every theory. The field representation is a sum over all field configurations, and the particle representation is a sum over different particle paths.
The nonrelativistic formulation is traditionally given in terms of particle paths, not fields. There, the path integral in the usual variables, with fixed boundary conditions, gives the probability amplitude for a particle to go from point $x$ to point $y$ in time $T$:

$$K(x, y; T) = \langle y; T \mid x; 0 \rangle.$$

This is called the propagator. To obtain the final state at time $T$, we simply apply $K$ to the initial state $\psi_0(x)$ and integrate over $x$, resulting in:

$$\psi_T(y) = \int \psi_0(x)\, K(x, y; T)\, dx.$$
For a spatially homogeneous system, where $K$ is only a function of $(x - y)$, the integral is a convolution, the final state is the initial state convolved with the propagator:

$$\psi_T = \psi_0 * K(\cdot\,; T).$$

For a free particle of mass $m$, the propagator can be evaluated either explicitly from the path integral or by noting that the Schrödinger equation is a diffusion equation in imaginary time, and the solution must be a normalized Gaussian:

$$K(x - y; T) \propto e^{\frac{i m (x - y)^2}{2 \hbar T}}.$$
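The claim that the free propagator is a spreading Gaussian can be checked numerically in imaginary time, where it becomes the heat kernel. Below is a small sketch (units with $m = \hbar = 1$; the grid sizes and variable names are invented for this example) verifying the semigroup property $K(\cdot, t_1) * K(\cdot, t_2) = K(\cdot, t_1 + t_2)$ by direct convolution:

```python
import math

def heat_kernel(x, t):
    """Free-particle propagator in imaginary time (m = hbar = 1):
    a normalized Gaussian of variance t."""
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

dx = 0.01
xs = [i * dx for i in range(-1000, 1001)]  # grid on [-10, 10]
t1, t2 = 0.3, 0.5
k2 = [heat_kernel(y, t2) for y in xs]

def convolve_at(x0):
    # (K_t1 * K_t2)(x0) = sum over y of K_t1(x0 - y) K_t2(y) dx
    return sum(heat_kernel(x0 - y, t1) * k2[i] for i, y in enumerate(xs)) * dx

# Compare the convolution against the kernel evaluated at the combined time.
results = [(convolve_at(x0), heat_kernel(x0, t1 + t2)) for x0 in (0.0, 0.5, 1.0)]
for lhs, rhs in results:
    print(lhs, rhs)  # the two columns agree to high accuracy
```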
Taking the Fourier transform in $(x - y)$ produces another Gaussian:

$$K(p; T) \propto e^{-\frac{i p^2 T}{2 m \hbar}},$$

and in $p$-space the proportionality factor here is constant in time, as will be verified in a moment. The Fourier transform in time, extending $K(p; T)$ to be zero for negative times, gives Green's function, or the frequency-space propagator:

$$G(p, \omega) = \frac{1}{\omega - \frac{p^2}{2m\hbar} + i\varepsilon},$$

which is the reciprocal of the operator that annihilates the wavefunction in the Schrödinger equation, which wouldn't have come out right if the proportionality factor weren't constant in the $p$-space representation.
The infinitesimal term in the denominator is a small positive number, which guarantees that the inverse Fourier transform in will be nonzero only for future times. For past times, the inverse Fourier transform contour closes toward values of where there is no singularity. This guarantees that propagates the particle into the future and is the reason for the subscript "F" on . The infinitesimal term can be interpreted as an infinitesimal rotation toward imaginary time.
It is also possible to reexpress the nonrelativistic time evolution in terms of propagators going toward the past, since the Schrödinger equation is time-reversible. The past propagator is the same as the future propagator except for the obvious difference that it vanishes in the future, and in the Gaussian is replaced by . In this case, the interpretation is that these are the quantities to convolve the final wavefunction so as to get the initial wavefunction:
Since the two propagators are nearly identical, the only change being the sign of the energy and of the infinitesimal term, the parameter $E$ in Green's function can either be the energy if the paths are going toward the future, or the negative of the energy if the paths are going toward the past.
For a nonrelativistic theory, the time as measured along the path of a moving particle and the time as measured by an outside observer are the same. In relativity, this is no longer true. For a relativistic theory the propagator should be defined as the sum over all paths that travel between two points in a fixed proper time, as measured along the path (these paths describe the trajectory of a particle in space and in time):
The integral above is not trivial to interpret because of the square root. Fortunately, there is a heuristic trick. The sum is over the relativistic arc length of the path of an oscillating quantity, and like the nonrelativistic path integral should be interpreted as slightly rotated into imaginary time. The function can be evaluated when the sum is over paths in Euclidean space:
This describes a sum over all paths of length of the exponential of minus the length. This can be given a probability interpretation. The sum over all paths is a probability average over a path constructed step by step. The total number of steps is proportional to , and each step is less likely the longer it is. By the central limit theorem, the result of many independent steps is a Gaussian of variance proportional to :
The usual definition of the relativistic propagator only asks for the amplitude to travel from $x$ to $y$, after summing over all the possible proper times it could take:

$$K(x - y) = \int_0^\infty K(x - y; \tau)\, W(\tau)\, d\tau,$$

where $W(\tau)$ is a weight factor, the relative importance of paths of different proper time. By the translation symmetry in proper time, this weight can only be an exponential factor and can be absorbed into the constant:
This is the Schwinger representation. Taking a Fourier transform over the variable can be done for each value of separately, and because each separate contribution is a Gaussian, gives whose Fourier transform is another Gaussian with reciprocal width. So in -space, the propagator can be reexpressed simply:
which is the Euclidean propagator for a scalar particle. Rotating to be imaginary gives the usual relativistic propagator, up to a factor of and an ambiguity, which will be clarified below:
This expression can be interpreted in the nonrelativistic limit, where it is convenient to split it by partial fractions:
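With $\omega_p = \sqrt{p^2 + m^2}$ (units $c = \hbar = 1$; this standard decomposition is supplied here for illustration), the partial-fraction split of the pole structure reads:

```latex
\frac{1}{p_0^2 - \omega_p^2} = \frac{1}{2\omega_p} \left( \frac{1}{p_0 - \omega_p} - \frac{1}{p_0 + \omega_p} \right),
\qquad \omega_p = \sqrt{p^2 + m^2},
```

with the first term dominant near $p_0 = +\omega_p$ (the particle pole) and the second near $p_0 = -\omega_p$ (the antiparticle pole).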
For states where one nonrelativistic particle is present, the initial wavefunction has a frequency distribution concentrated near . When convolving with the propagator, which in space just means multiplying by the propagator, the second term is suppressed and the first term is enhanced. For frequencies near , the dominant first term has the form
This is the expression for the nonrelativistic Green's function of a free Schrödinger particle.
The second term has a nonrelativistic limit also, but this limit is concentrated on frequencies that are negative. The second pole is dominated by contributions from paths where the proper time and the coordinate time are ticking in an opposite sense, which means that the second term is to be interpreted as the antiparticle. The nonrelativistic analysis shows that with this form the antiparticle still has positive energy.
The proper way to express this mathematically is that, adding a small suppression factor in proper time, the limit as $t \to -\infty$ of the first term must vanish, while the limit as $t \to +\infty$ of the second term must vanish. In the Fourier transform, this means shifting the pole in $p_0$ slightly, so that the inverse Fourier transform will pick up a small decay factor in one of the time directions:
Without these terms, the pole contribution could not be unambiguously evaluated when taking the inverse Fourier transform of . The terms can be recombined:
which when factored, produces opposite-sign infinitesimal terms in each factor. This is the mathematically precise form of the relativistic particle propagator, free of any ambiguities. The $\varepsilon$ term introduces a small imaginary part to the $m^2$, which in the Minkowski version is a small exponential suppression of long paths.
So in the relativistic case, the Feynman path-integral representation of the propagator includes paths going backwards in time, which describe antiparticles. The paths that contribute to the relativistic propagator go forward and backwards in time, and the interpretation of this is that the amplitude for a free particle to travel between two points includes amplitudes for the particle to fluctuate into an antiparticle, travel back in time, then forward again.
Unlike the nonrelativistic case, it is impossible to produce a relativistic theory of local particle propagation without including antiparticles. All local differential operators have inverses that are nonzero outside the light cone, meaning that it is impossible to keep a particle from travelling faster than light. Such a particle cannot have a Green's function that is only nonzero in the future in a relativistically invariant theory.
Functionals of fields
However, the path integral formulation is also extremely important in direct application to quantum field theory, in which the "paths" or histories being considered are not the motions of a single particle, but the possible time evolutions of a field over all space. The action is referred to technically as a functional of the field: $S[\phi]$, where the field $\phi(x^\mu)$ is itself a function of space and time, and the square brackets are a reminder that the action depends on all the field's values everywhere, not just some particular value. One such given function $\phi(x^\mu)$ of spacetime is called a field configuration. In principle, one integrates Feynman's amplitude over the class of all possible field configurations.
Much of the formal study of QFT is devoted to the properties of the resulting functional integral, and much effort (not yet entirely successful) has been made toward making these functional integrals mathematically precise.
Such a functional integral is extremely similar to the partition function in statistical mechanics. Indeed, it is sometimes called a partition function, and the two are essentially mathematically identical except for the factor of in the exponent in Feynman's postulate 3. Analytically continuing the integral to an imaginary time variable (called a Wick rotation) makes the functional integral even more like a statistical partition function and also tames some of the mathematical difficulties of working with these integrals.
Expectation values
In quantum field theory, if the action is given by the functional $S$ of field configurations (which only depends locally on the fields), then the time-ordered vacuum expectation value of a polynomially bounded functional $F$, $\langle F \rangle$, is given by

$$\langle F \rangle = \frac{\int F[\varphi]\, e^{i S[\varphi]/\hbar}\, \mathcal{D}\varphi}{\int e^{i S[\varphi]/\hbar}\, \mathcal{D}\varphi}.$$

The symbol $\mathcal{D}\varphi$ here is a concise way to represent the infinite-dimensional integral over all possible field configurations on all of space-time. As stated above, the unadorned path integral in the denominator ensures proper normalization.
As a probability
Strictly speaking, the only question that can be asked in physics is: What fraction of states satisfying condition $A$ also satisfy condition $B$? The answer to this is a number between 0 and 1, which can be interpreted as a conditional probability, written as $P(B \mid A)$. In terms of path integration, since $P(B \mid A) = P(A \cap B)/P(A)$, this means
where the functional is the superposition of all incoming states that could lead to the states we are interested in. In particular, this could be a state corresponding to the state of the Universe just after the Big Bang, although for actual calculation this can be simplified using heuristic methods. Since this expression is a quotient of path integrals, it is naturally normalised.
Schwinger–Dyson equations
Since this formulation of quantum mechanics is analogous to classical action principle, one might expect that identities concerning the action in classical mechanics would have quantum counterparts derivable from a functional integral. This is often the case.
In the language of functional analysis, we can write the Euler–Lagrange equations as

$$\frac{\delta S}{\delta \varphi} = 0$$

(the left-hand side is a functional derivative; the equation means that the action is stationary under small changes in the field configuration). The quantum analogues of these equations are called the Schwinger–Dyson equations.
If the functional measure turns out to be translationally invariant (we'll assume this for the rest of this article, although this does not hold for, let's say, nonlinear sigma models), and if we assume that after a Wick rotation
which now becomes
for some , it goes to zero faster than a reciprocal of any polynomial for large values of , then we can integrate by parts (after a Wick rotation, followed by a Wick rotation back) to get the following Schwinger–Dyson equations for the expectation:
for any polynomially-bounded functional . In the deWitt notation this looks like
These equations are the analog of the on-shell Euler–Lagrange equations. The time ordering is taken before the time derivatives inside the action terms.
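Written out in functional-derivative form (a standard presentation, with conventions and factors of $\hbar$ as assumed here), the Schwinger–Dyson equation for a polynomially bounded functional $F$ reads:

```latex
\left\langle \frac{\delta S}{\delta \varphi(x)}\, F[\varphi] \right\rangle
= i\hbar \left\langle \frac{\delta F}{\delta \varphi(x)} \right\rangle,
```

For $F = 1$ this reduces to $\langle \delta S / \delta \varphi(x) \rangle = 0$, the quantum counterpart of the classical equation of motion.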
If (called the source field) is an element of the dual space of the field configurations (which has at least an affine structure because of the assumption of the translational invariance for the functional measure), then the generating functional of the source fields is defined to be
Note that
or
where
Basically, if is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT, unlike its Wick-rotated statistical mechanics analogue, because we have time ordering complications here!), then are its moments, and is its Fourier transform.
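In standard notation (the normalization and factors of $\hbar$ here are conventions supplied for illustration), the generating functional and the moments obtained from it are:

```latex
Z[J] = \int e^{\, i\left(S[\varphi] + \int J \varphi \, d^d x\right)/\hbar}\, \mathcal{D}\varphi,
\qquad
\langle \varphi(x_1) \cdots \varphi(x_n) \rangle
= \frac{1}{Z[0]} \left(\frac{\hbar}{i}\right)^{\! n}
\left. \frac{\delta^n Z[J]}{\delta J(x_1) \cdots \delta J(x_n)} \right|_{J=0}.
```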
If is a functional of , then for an operator , is defined to be the operator that substitutes for . For example, if
and is a functional of , then
Then, from the properties of the functional integrals
we get the "master" Schwinger–Dyson equation:
or
If the functional measure is not translationally invariant, it might be possible to express it as the product $M[\varphi]\, \mathcal{D}\varphi$, where $M$ is a functional and $\mathcal{D}\varphi$ is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to $\mathbb{R}^n$. However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense.
In that case, we would have to replace the in this equation by another functional
If we expand this equation as a Taylor series about $J = 0$, we get the entire set of Schwinger–Dyson equations.
Localization
The path integrals are usually thought of as being the sum of all paths through an infinite space–time. However, in local quantum field theory we would restrict everything to lie within a finite causally complete region, for example inside a double light-cone. This gives a more mathematically precise and physically rigorous definition of quantum field theory.
Ward–Takahashi identities
Now how about the on-shell Noether's theorem for the classical case? Does it have a quantum analog as well? Yes, but with a caveat. The functional measure would have to be invariant under the one-parameter group of symmetry transformations as well.
Let's just assume for simplicity here that the symmetry in question is local (not local in the sense of a gauge symmetry, but in the sense that the transformed value of the field at any given point under an infinitesimal transformation would only depend on the field configuration over an arbitrarily small neighborhood of the point in question). Let's also assume that the action is local in the sense that it is the integral over spacetime of a Lagrangian, and that
for some function where only depends locally on (and possibly the spacetime position).
If we don't assume any special boundary conditions, this would not be a "true" symmetry in the true sense of the term in general unless or something. Here, is a derivation that generates the one parameter group in question. We could have antiderivations as well, such as BRST and supersymmetry.
Let's also assume
for any polynomially-bounded functional . This property is called the invariance of the measure, and this does not hold in general. (See anomaly (physics) for more details.)
Then,
which implies
where the integral is over the boundary. This is the quantum analog of Noether's theorem.
Now, let's assume even further that is a local integral
where
so that
where
(this is assuming the Lagrangian only depends on and its first partial derivatives! More general Lagrangians would require a modification to this definition!). We're not insisting that is the generator of a symmetry (i.e. we are not insisting upon the gauge principle), but just that is. And we also assume the even stronger assumption that the functional measure is locally invariant:
Then, we would have
Alternatively,
The above two equations are the Ward–Takahashi identities.
Now for the case where , we can forget about all the boundary conditions and locality assumptions. We'd simply have
Alternatively,
Caveats
Need for regulators and renormalization
Path integrals as they are defined here require the introduction of regulators. Changing the scale of the regulator leads to the renormalization group. In fact, renormalization is the major obstruction to making path integrals well-defined.
Ordering prescription
Regardless of whether one works in configuration space or phase space, when equating the operator formalism and the path integral formulation, an ordering prescription is required to resolve the ambiguity in the correspondence between non-commutative operators and the commutative functions that appear in path integrands. For example, the operator can be translated back as either , , or depending on whether one chooses the , , or Weyl ordering prescription; conversely, can be translated to either , , or for the same respective choice of ordering prescription.
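As a concrete illustration (using $\hat{x}\hat{p}$ as the example operator and $[\hat{x}, \hat{p}] = i\hbar$; this example is supplied here rather than taken from a specific source), the prescriptions differ by terms of order $\hbar$:

```latex
\hat{x}\hat{p} = \hat{p}\hat{x} + i\hbar,
\qquad
\tfrac{1}{2}\left(\hat{x}\hat{p} + \hat{p}\hat{x}\right) = \hat{x}\hat{p} - \tfrac{i\hbar}{2},
```

so the classical symbol $xp$ in a path integrand can correspond to operators differing by multiples of $i\hbar/2$, depending on the ordering prescription chosen.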
Path integral in quantum-mechanical interpretation
In one interpretation of quantum mechanics, the "sum over histories" interpretation, the path integral is taken to be fundamental, and reality is viewed as a single indistinguishable "class" of paths that all share the same events. For this interpretation, it is crucial to understand what exactly an event is. The sum-over-histories method gives identical results to canonical quantum mechanics, and Sinha and Sorkin claim the interpretation explains the Einstein–Podolsky–Rosen paradox without resorting to nonlocality.
Some advocates of interpretations of quantum mechanics emphasizing decoherence have attempted to make more rigorous the notion of extracting a classical-like "coarse-grained" history from the space of all possible histories.
Quantum gravity
Whereas in quantum mechanics the path integral formulation is fully equivalent to other formulations, it may be that it can be extended to quantum gravity, which would make it different from the Hilbert space model. Feynman had some success in this direction, and his work has been extended by Hawking and others. Approaches that use this method include causal dynamical triangulations and spinfoam models.
Quantum tunneling
Quantum tunnelling can be modeled by using the path integral formulation to determine the action of the trajectory through a potential barrier. Using the WKB approximation, the tunneling rate ($\Gamma$) can be determined to be of the form

$$\Gamma = A\, e^{-S_{\mathrm{eff}}/\hbar},$$

with the effective action $S_{\mathrm{eff}}$ and pre-exponential factor $A$. This form is specifically useful in a dissipative system, in which the system and its surroundings must be modeled together. Using the Langevin equation to model Brownian motion, the path integral formulation can be used to determine an effective action and pre-exponential model to see the effect of dissipation on tunnelling. From this model, tunneling rates of macroscopic systems (at finite temperatures) can be predicted.
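In the WKB approximation the suppression exponent is $\frac{2}{\hbar}\int \sqrt{2m(V(x) - E)}\, dx$ across the classically forbidden region. A minimal numerical sketch (a hypothetical example, not from the text: a square barrier of height $V_0$ and width $L$, with $\hbar = m = 1$), comparing the numerical integral to the closed form:

```python
import math

hbar = 1.0
m = 1.0

def wkb_exponent(V, E, a, b, n=10000):
    """Numerically integrate the WKB tunneling exponent
    (2/hbar) * integral_a^b sqrt(2m(V(x) - E)) dx, via the midpoint rule."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        total += math.sqrt(2.0 * m * (V(x) - E)) * dx
    return 2.0 * total / hbar

# Square barrier: V(x) = V0 on [0, L]; particle energy E < V0.
V0, E, L = 2.0, 1.0, 1.5
exponent = wkb_exponent(lambda x: V0, E, 0.0, L)
rate_suppression = math.exp(-exponent)

# Closed form for the square barrier: 2 L sqrt(2m(V0 - E)) / hbar
closed_form = 2.0 * L * math.sqrt(2.0 * m * (V0 - E)) / hbar
print(exponent, closed_form)
```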
| Physical sciences | Quantum mechanics | Physics |
438703 | https://en.wikipedia.org/wiki/Dormouse | Dormouse | A dormouse is a rodent of the family Gliridae (this family is also variously called Myoxidae or Muscardinidae by different taxonomists). Dormice are nocturnal animals found in Africa, Asia, and Europe. They are named for their long, dormant hibernation period of six months or longer. There are 9 genera and 28 living species of dormice, with half of living species belonging to the African genus Graphiurus.
Etymology
The word dormouse comes from Middle English , of uncertain origin, possibly from a dialectal element *dor-, from Old Norse and Middle English .
The word is sometimes conjectured to come from an Anglo-Norman derivative of , with the second element mistaken for mouse, but no such Anglo-Norman term is known to have existed.
The Latin noun , which is the origin of the scientific name, descends from the Proto-Indo-European noun *gl̥h₁éys , and is related to Sanskrit () and Ancient Greek () .
Characteristics
Dormice are small rodents, with body lengths between , and weight between . They are generally mouse-like in appearance, but with furred tails. They are largely arboreal, agile, and well adapted to climbing. Most species are nocturnal. Dormice have an excellent sense of hearing and signal each other with a variety of vocalisations.
Dormice are omnivorous, and typically feed on berries, flowers, fruits, insects, and nuts. They are unique among rodents in that they lack a cecum, a part of the gut used in other species to ferment vegetable matter. Their dental formula is similar to that of squirrels, although they often lack premolars.
Dormice breed once (or, occasionally, twice) each year, producing litters with an average of four young after a gestation period of 22–24 days. They can live for as long as five years. The young are born hairless and helpless, and their eyes do not open until about 18 days after birth. They typically become sexually mature after the end of their first hibernation. Dormice live in small family groups, with home ranges that vary widely between species and depend on the availability of food.
Hibernation
One of the most notable characteristics of those dormice that live in temperate zones is hibernation. They can hibernate six months out of the year, or even longer if the weather does not become warm enough, sometimes waking for brief periods to eat food they had previously stored nearby. During the summer, they accumulate fat in their bodies to nourish them through the hibernation period.
Relationship with humans
The edible dormouse (Glis glis) was considered a delicacy in ancient Rome, either as a savoury appetizer or as a dessert (dipped in honey and poppy seeds). The Romans used a special kind of enclosure, a glirarium, to raise and fatten dormice for the table. It is still considered a delicacy in Slovenia and in several places in Croatia, namely Lika, and the islands of Hvar and Brač. Dormouse fat was believed by the Elizabethans to induce sleep since the animal put on fat before hibernating.
In more recent years, dormice have begun to enter the pet trade; however, they are uncommon as pets and are considered an exotic pet. The woodland dormouse (Graphiurus murinus) is the most commonly seen species in the pet trade. Asian garden dormice (Eliomys melanurus) are also occasionally kept as pets.
Evolution
Dormice likely originated in Europe, with the earliest dormouse genus Eogliravus being known from the Early Eocene (around 48-41 million years ago) of France. Dormice were relatively undiverse in the Eocene, but considerably diversified during the Oligocene (34-23 million years ago). Their ability to hibernate may have emerged during this period. They reached an apex of diversity during the late Early Miocene (around 17 million years ago), when there were 18 genera and 36 species of dormice in Europe alone. During this timespan, dormice represented the dominant group of rodents in Europe. The earliest Asian dormice are known from the early Miocene, and the Miocene saw the emergence of several of the modern genera of living dormice. The diversity of dormice saw continual decline until the middle Pliocene, when there was again a period of speciation, mostly driven by the diversification of the African Graphiurus, which first appeared during the Pliocene, while the diversity of European dormice remained relatively low compared to their Miocene peak. Several dormouse lineages experienced insular gigantism after being isolated on islands in the Mediterranean during the Pliocene and Pleistocene, the largest being the rabbit-sized Leithia of Sicily and Malta, the largest dormouse ever.
Classification
The family consists of 29 extant species, in three subfamilies and (arguably) nine genera:
Cladogram of most living and recently extinct dormice genera based on mitochondrial DNA after Petrova et al. 2024:
Family Gliridae – Dormice
Subfamily Glirinae
Genus Glirulus
Japanese dormouse, Glirulus japonicus
Genus Glis
European edible dormouse, Glis glis
Iranian edible dormouse, Glis persicus
Subfamily Graphiurinae
Genus Graphiurus, African dormice
Angolan African dormouse, Graphiurus angolensis
Christy's dormouse, Graphiurus christyi
Walter Verheyen's African dormouse, Graphiurus walterverheyeni
Jentink's dormouse, Graphiurus crassicaudatus
Johnston's African dormouse, Graphiurus johnstoni
Kellen's dormouse, Graphiurus kelleni
Lorrain dormouse, Graphiurus lorraineus
Monard's dormouse, Graphiurus monardi
Nagtglas's African dormouse, Graphiurus nagtglasii
Rock dormouse, Graphiurus platyops
Silent dormouse, Graphiurus surdus
Small-eared dormouse, Graphiurus microtis
Spectacled dormouse, Graphiurus ocularis
Stone dormouse, Graphiurus rupicola
Woodland dormouse, Graphiurus murinus
Subfamily Leithiinae
Genus Chaetocauda
Chinese dormouse, Chaetocauda sichuanensis
Genus Dryomys
Balochistan forest dormouse, Dryomys niethammeri
Forest dormouse, Dryomys nitedula
Woolly dormouse, Dryomys laniger
Genus Eliomys, garden dormice
Asian garden dormouse, Eliomys melanurus
Garden dormouse, Eliomys quercinus
Maghreb garden dormouse, Eliomys munbyanus
Genus Hypnomys† (Balearic dormouse)
Majorcan giant dormouse, Hypnomys morphaeus†
Minorcan giant dormouse, Hypnomys mahonensis†
Genus Leithia†
Leithia cartei†
Maltese giant dormouse, Leithia melitensis†
Genus Muscardinus
Hazel dormouse, Muscardinus avellanarius
Genus Myomimus, mouse-tailed dormice
Masked mouse-tailed dormouse, Myomimus personatus
Roach's mouse-tailed dormouse, Myomimus roachi
Setzer's mouse-tailed dormouse, Myomimus setzeri
Genus Selevinia
Desert dormouse, Selevinia betpakdalaensis
† indicates an extinct species.
Fossil genera and species
Genus Bransatoglis
Bransatoglis adroveri Majorca, Early Oligocene
Bransatoglis planus Eurasia, Early Oligocene
Glamys Vianey-Liaud, 1989
Oligodyromys Bahlo, 1975
Vasseuromys Baudelot & de Bonis, 1966
Butseloglis Vianey-Liaud, 2003
Microdyromys de Bruijn, 1966
Glirudinus de Bruijn, 1966
Graphiurops Bachmayer & Wilson, 1980
Eogliravus Hartenberger, 1971
Armantomys de Bruijn, 1966
Miodyromys Kretzoi, 1943
Praearmantomys de Bruijn, 1966
Pseudodryomys de Bruijn, 1966
Simplomys García-Paredes et al. 2009
Tempestia van de Weerd, 1976
Ramys García-Moreno & López-Martínez, 1986
Moissenetia Hugueney & Adrover, 1995
Paraglis Baudelot, 1970
Seorsumuscardinus de Bruijn 1998
Peridyromys Stehlin & Schaub, 1951
Carbomys Mein & Adrover, 1982
Prodryomys Mayr, 1979
| Biology and health sciences | Rodents | null |
438758 | https://en.wikipedia.org/wiki/Mammalogy | Mammalogy | In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as mastology, theriology, and therology. The recorded number of mammal species on Earth is constantly growing, and currently stands at 6,495 species, including those recently extinct. There are 5,416 living mammals identified on Earth, and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals.
Mammalogy branches off into other taxonomically oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things.
Research purposes
Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute to or thrive in their ecosystems gives insight into the ecology behind it. Mammals are often used in industry and agriculture, and kept as pets. Studying mammals' habitats and sources of energy has aided in their survival. The domestication of some small mammals has also helped in the discovery of several different diseases, viruses, and their cures.
Mammalogist
A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and their anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average makes roughly $58,000 a year, depending on employer and state.
History
The first people recorded to have researched mammals were the ancient Greeks, with records on mammals both native and not native to Greece. Aristotle was among the first to recognize whales and dolphins as mammals; up until the 18th century, the study of mammals consisted mostly of taxonomy.
Journals
This is a list of scientific journals broadly serving mammalogists. In addition, many other more general zoology, ecology and evolution, or conservation journals also deal with mammals, and several journals are specific to only certain taxonomic groups of mammals.
| Biology and health sciences | Basics_2 | Biology |
438944 | https://en.wikipedia.org/wiki/Actin | Actin | Actin is a family of globular multi-functional proteins that form microfilaments in the cytoskeleton, and the thin filaments in muscle fibrils. It is found in essentially all eukaryotic cells, where it may be present at a concentration of over 100 μM; its mass is roughly 42 kDa, with a diameter of 4 to 7 nm.
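As a quick sanity check (an illustrative calculation, not a figure from the article), the molar concentration and molar mass quoted above can be combined to give the mass concentration of actin in a cell:

```python
# Convert the cellular actin concentration quoted above (~100 uM) into a
# mass concentration, using the ~42 kDa molar mass also given in the text.
molar_conc = 100e-6   # mol/L (100 micromolar)
molar_mass = 42_000   # g/mol (~42 kDa)

mass_conc_g_per_l = molar_conc * molar_mass
print(f"actin at 100 uM is about {mass_conc_g_per_l:.1f} g/L")  # ~4.2 g/L
```

At 100 μM, actin alone therefore accounts for roughly 4 grams per litre, consistent with its description as one of the most abundant proteins in eukaryotic cells.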
An actin protein is the monomeric subunit of two types of filaments in cells: microfilaments, one of the three major components of the cytoskeleton, and thin filaments, part of the contractile apparatus in muscle cells. It can be present as either a free monomer called G-actin (globular) or as part of a linear polymer microfilament called F-actin (filamentous), both of which are essential for such important cellular functions as the mobility and contraction of cells during cell division.
Actin participates in many important cellular processes, including muscle contraction, cell motility, cell division and cytokinesis, vesicle and organelle movement, cell signaling, and the establishment and maintenance of cell junctions and cell shape. Many of these processes are mediated by extensive and intimate interactions of actin with cellular membranes. In vertebrates, three main groups of actin isoforms (alpha, beta, and gamma) have been identified. The alpha actins, found in muscle tissues, are a major constituent of the contractile apparatus. The beta and gamma actins coexist in most cell types as components of the cytoskeleton, and as mediators of internal cell motility. It is believed that the diverse range of structures formed by actin, enabling it to fulfill such a large range of functions, is regulated through the binding of tropomyosin along the filaments.
A cell's ability to dynamically form microfilaments provides the scaffolding that allows it to rapidly remodel itself in response to its environment or to the organism's internal signals, for example, to increase cell membrane absorption or increase cell adhesion in order to form cell tissue. Other enzymes or organelles such as cilia can be anchored to this scaffolding in order to control the deformation of the external cell membrane, which allows endocytosis and cytokinesis. It can also produce movement either by itself or with the help of molecular motors. Actin therefore contributes to processes such as the intracellular transport of vesicles and organelles as well as muscular contraction and cellular migration. It thus plays an important role in embryogenesis, the healing of wounds, and the invasiveness of cancer cells. The evolutionary origin of actin can be traced to prokaryotic cells, which have equivalent proteins. Actin homologs from prokaryotes and archaea polymerize into different helical or linear filaments consisting of one or multiple strands. However, the in-strand contacts and nucleotide binding sites are preserved in prokaryotes and in archaea. Lastly, actin plays an important role in the control of gene expression.
A large number of illnesses and diseases are caused by mutations in alleles of the genes that regulate the production of actin or of its associated proteins. The production of actin is also key to the process of infection by some pathogenic microorganisms. Mutations in the different genes that regulate actin production in humans can cause muscular diseases, variations in the size and function of the heart as well as deafness. The make-up of the cytoskeleton is also related to the pathogenicity of intracellular bacteria and viruses, particularly in the processes related to evading the actions of the immune system.
Function
Actin's primary role in the cell is to form linear polymers called microfilaments that serve various functions in the cell's structure, trafficking networks, migration, and replication. The multifaceted role of actin relies on a few of the microfilaments' properties: First, the formation of actin filaments is reversible, and their function often involves undergoing rapid polymerization and depolymerization. Second, microfilaments are polarized – i.e. the two ends of a filament are distinct from one another. Third, actin filaments can bind to many other proteins, which together help modify and organize microfilaments for their diverse functions.
In most cells actin filaments form larger-scale networks which are essential for many key functions:
Actin networks give mechanical support to cells and provide trafficking routes through the cytoplasm to aid signal transduction.
Rapid assembly and disassembly of actin network enables cells to migrate (Cell migration).
Actin is extremely abundant in most cells, comprising 1–5% of the total protein mass of most cells, and 10% of muscle cells.
The actin protein is found in both the cytoplasm and the cell nucleus. Its location is regulated by cell membrane signal transduction pathways that integrate the stimuli that a cell receives, stimulating the restructuring of the actin networks in response.
Cytoskeleton
There are a number of different types of actin with slightly different structures and functions. α-actin is found exclusively in muscle fibres, while β- and γ-actin are found in other cells. As the latter types have a high turnover rate, the majority of them are found outside permanent structures. Microfilaments found in cells other than muscle cells are present in three forms:
Microfilament networks - Animal cells commonly have a cell cortex under the cell membrane that contains a large number of microfilaments, which excludes organelles from that region. This network is connected with numerous receptors that relay signals from the outside of the cell.
Periodic actin rings - A periodic structure constructed of evenly spaced actin rings is found in axons. In this structure, the actin rings, together with spectrin tetramers that link the neighboring actin rings, form a cohesive cytoskeleton that supports the axon membrane. The structure periodicity may also regulate the sodium ion channels in axons.
Yeasts
Actin's cytoskeleton is key to the processes of endocytosis, cytokinesis, determination of cell polarity and morphogenesis in yeasts. In addition to relying on actin, these processes involve 20 or 30 associated proteins, which all have a high degree of evolutionary conservation, along with many signalling molecules. Together these elements allow a spatially and temporally modulated assembly that defines a cell's response to both internal and external stimuli.
Yeasts contain three main elements that are associated with actin: patches, cables, and rings. Although these structures are not long-lived, they are subject to a dynamic equilibrium due to continual polymerization and depolymerization. They possess a number of accessory proteins including ADF/cofilin, which has a molecular weight of 16 kDa and is coded for by a single gene, called COF1; Aip1, a cofilin cofactor that promotes the disassembly of microfilaments; Srv2/CAP, a process regulator related to adenylate cyclase proteins; a profilin with a molecular weight of approximately 14 kDa that associates with actin monomers; and twinfilin, a 40 kDa protein involved in the organization of patches.
Plants
Plant genome studies have revealed the existence of protein isovariants within the actin family of genes. Within Arabidopsis thaliana, a model organism, there are ten types of actin, six profilins, and dozens of myosins. This diversity is explained by the evolutionary necessity of possessing variants that slightly differ in their temporal and spatial expression. The majority of these proteins were jointly expressed in the tissue analysed. Actin networks are distributed throughout the cytoplasm of cells that have been cultivated in vitro. There is a concentration of the network around the nucleus that is connected via spokes to the cellular cortex; this network is highly dynamic, with continuous polymerization and depolymerization.
Even though the majority of plant cells have a cell wall that defines their morphology, their microfilaments can generate sufficient force to achieve a number of cellular activities, such as the cytoplasmic currents generated by the microfilaments and myosin. Actin is also involved in the movement of organelles and in cellular morphogenesis, which involve cell division as well as the elongation and differentiation of the cell.
The most notable proteins associated with the actin cytoskeleton in plants include: villin, which belongs to the same family as gelsolin/severin and is able to cut microfilaments and bind actin monomers in the presence of calcium cations; fimbrin, which is able to recognize and unite actin monomers and which is involved in the formation of networks (by a different regulation process from that of animals and yeasts); formins, which are able to act as an F-actin polymerization nucleating agent; myosin, a typical molecular motor that is specific to eukaryotes and which in Arabidopsis thaliana is coded for by 17 genes in two distinct classes; CHUP1, which can bind actin and is implicated in the spatial distribution of chloroplasts in the cell; KAM1/MUR3, which define the morphology of the Golgi apparatus as well as the composition of xyloglucans in the cell wall; NtWLIM1, which facilitates the emergence of actin cell structures; and ERD10, which is involved in the association of organelles with membranes and microfilaments and which seems to play a role in an organism's reaction to stress.
Nuclear actin
Nuclear actin was first noticed and described in 1977 by Clark and Merriam, who described a protein present in the nuclear fraction obtained from Xenopus laevis oocytes that shows the same features as skeletal muscle actin. Since that time there have been many scientific reports about the structure and functions of actin in the nucleus (for a review see Hofmann 2009). The controlled level of actin in the nucleus, its interaction with actin-binding proteins (ABPs) and the presence of different isoforms allow actin to play an important role in many key nuclear processes.
Transport through the nuclear membrane
The actin sequence does not contain a nuclear localization signal. The small size of actin (about 43 kDa) allows it to enter the nucleus by passive diffusion. The import of actin into the nucleus (probably in a complex with cofilin) is facilitated by the import protein importin 9.
Low levels of actin in the nucleus seem to be important, because actin has two nuclear export signals (NES) in its sequence. Microinjected actin is quickly removed from the nucleus to the cytoplasm. Actin is exported in at least two ways, through exportin 1 and exportin 6. Specific modifications, such as SUMOylation, allow for nuclear actin retention. A mutation preventing SUMOylation causes rapid export of beta-actin from the nucleus.
Organization
Nuclear actin exists mainly as a monomer, but can also form dynamic oligomers and short polymers. Nuclear actin organization varies in different cell types. For example, in Xenopus oocytes (with higher nuclear actin level in comparison to somatic cells) actin forms filaments, which stabilize nucleus architecture. These filaments can be observed under the microscope thanks to fluorophore-conjugated phalloidin staining.
In somatic cell nuclei, however, actin filaments cannot be observed using this technique. The DNase I inhibition assay, the only test which allows the quantification of the polymerized actin directly in biological samples, has revealed that endogenous nuclear actin indeed occurs mainly in a monomeric form.
The precisely controlled level of actin in the cell nucleus, lower than in the cytoplasm, prevents the formation of filaments. Polymerization is also reduced by the limited access to actin monomers, which are bound in complexes with ABPs, mainly cofilin.
Actin isoforms
Different isoforms of actin are present in the cell nucleus. The level of actin isoforms may change in response to stimulation of cell growth or arrest of proliferation and transcriptional activity. Research on nuclear actin is focused on the beta isoform. However, the use of antibodies directed against different actin isoforms allows the identification of not only cytoplasmic beta-actin in the cell nucleus, but also alpha- and gamma-actin in certain cell types. The presence of different isoforms of actin may have a significant effect on its function in nuclear processes, as the level of individual isoforms can be controlled independently.
Functions
Functions of actin in the nucleus are associated with its ability to polymerize and interact with various ABPs and with structural elements of the nucleus. Nuclear actin is involved in:
Architecture of the nucleus - The interaction of actin with alpha II-spectrin and other proteins is important for maintaining the proper shape of the nucleus.
Transcription – Actin is involved in chromatin reorganization, transcription initiation and interaction with the transcription complex. Actin takes part in the regulation of chromatin structure, interacting with RNA polymerase I, II and III. In Pol I transcription, actin and myosin (MYO1C, which binds DNA) act as a molecular motor. For Pol II transcription, β-actin is needed for the formation of the preinitiation complex. Pol III contains β-actin as a subunit. Actin can also be a component of chromatin remodelling complexes as well as pre-mRNP particles (that is, precursor messenger RNA bundled in proteins), and is involved in nuclear export of RNAs and proteins.
Regulation of gene activity – Actin binds to the regulatory regions of different kinds of genes. Actin's ability to regulate gene activity is used in the molecular reprogramming method, which allows differentiated cells to return to their embryonic state.
Translocation of the activated chromosome fragment from the region beneath the nuclear membrane to euchromatin, where transcription starts. This movement requires the interaction of actin and myosin.
Integration of different cellular compartments. Actin is a molecule that integrates cytoplasmic and nuclear signal transduction pathways. An example is the activation of transcription in response to serum stimulation of cells in vitro.
Immune response - Nuclear actin polymerizes upon T-cell receptor stimulation and is required for cytokine expression and antibody production in vivo.
DNA repair - Nuclear actin mediates the repair of DNA double-strand breaks. In the cell nucleus, a filamentous polymer of actin (F-actin) acts both in the DNA repair pathway of non-homologous end joining and in the pathway of homologous recombinational repair.
Due to its ability to undergo conformational changes and interact with many proteins, actin acts as a regulator of the formation and activity of protein complexes such as the transcription complex.
Cell movement
Actin is also involved in cell movement. A meshwork of actin filaments marks the forward edge of a moving cell, and the polymerization of new actin filaments pushes the cell membrane forward in protrusions called lamellipodia. These membrane protrusions then attach to the substrate, forming structures known as focal adhesions that connect to the actin network. Once attached, the rear of the cell body contracts, squeezing its contents forward past the adhesion point. Once the adhesion point has moved to the rear of the cell, the cell disassembles it, allowing the rear of the cell to move forward.
Actin/myosin movement
In addition to the physical force generated by actin polymerization, microfilaments facilitate the movement of various intracellular components by serving as the roadway along which a family of motor proteins called myosins travel.
Muscle contraction
Actin plays a particularly prominent role in muscle cells, which consist largely of repeated bundles of actin and myosin II. Each repeated unit – called a sarcomere – consists of two sets of oppositely oriented F-actin strands ("thin filaments"), interlaced with bundles of myosin ("thick filaments"). The two sets of actin strands are oriented with their (+) ends embedded in either end of the sarcomere in delimiting structures called Z-disks. The myosin fibrils are in the middle between the sets of actin filaments, with strands facing in both directions. When the muscle contracts, the myosin threads move along the actin filaments towards the (+) end, pulling the ends of the sarcomere together and shortening it by around 70% of its length. In order to move along the actin thread, myosin must hydrolyze ATP; thus ATP serves as the energy source for muscle contraction.
At times of rest, the proteins tropomyosin and troponin bind to the actin filaments, preventing the attachment of myosin. When an activation signal (i.e. an action potential) arrives at the muscle fiber, it triggers the release of Ca2+ from the sarcoplasmic reticulum into the cytosol. The resulting spike in cytosolic calcium rapidly releases tropomyosin and troponin from the actin thread, allowing myosin to bind, and muscle contraction to begin.
Cell division
In the final stages of cell division, many cells form a ring of actin at the cell's midpoint. This ring, aptly called the "contractile ring", uses a similar mechanism as muscle fibers where myosin II pulls along the actin ring, causing it to contract. This contraction cleaves the parent cell into two, completing cytokinesis. The contractile ring is composed of actin, myosin, anillin, and α-actinin. In the fission yeast Schizosaccharomyces pombe, actin is actively formed in the constricting ring with the participation of Arp3, the formin Cdc12, profilin, and WASp, along with preformed microfilaments. Once the ring has been constructed the structure is maintained by a continual assembly and disassembly that, aided by the Arp2/3 complex and formins, is key to one of the central processes of cytokinesis.
Intracellular trafficking
Actin-myosin pairs can also participate in the trafficking of various membrane vesicles and organelles within the cell. Myosin V is activated by binding to various cargo receptors on organelles, and then moves along an actin filament towards the (+) end, pulling its cargo along with it.
These nonconventional myosins use ATP hydrolysis to transport cargo, such as vesicles and organelles, in a directed fashion much faster than diffusion. Myosin V walks towards the barbed end of actin filaments, while myosin VI walks toward the pointed end. Most actin filaments are arranged with the barbed end toward the cellular membrane and the pointed end toward the cellular interior. This arrangement allows myosin V to be an effective motor for the export of cargos, and myosin VI to be an effective motor for import.
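The directional logic described above can be sketched as a small lookup (a toy illustration; the motor names come from the text, everything else is scaffolding):

```python
# Toy model of the transport logic above: filaments have their barbed (+)
# end toward the membrane and pointed (-) end toward the interior, so a
# barbed-end-directed motor exports cargo and a pointed-end-directed
# motor imports it.
MOTOR_DIRECTION = {"myosin V": "barbed", "myosin VI": "pointed"}
END_LOCATION = {"barbed": "cell membrane", "pointed": "cell interior"}

def transport_role(motor: str) -> str:
    """Return 'export' or 'import' for a motor, given the arrangement above."""
    destination = END_LOCATION[MOTOR_DIRECTION[motor]]
    return "export" if destination == "cell membrane" else "import"

print(transport_role("myosin V"))   # export
print(transport_role("myosin VI"))  # import
```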
Other biological processes
The traditional image of actin's function relates it to the maintenance of the cytoskeleton and, therefore, the organization and movement of organelles, as well as the determination of a cell's shape. However, actin has a wider role in eukaryotic cell physiology, in addition to similar functions in prokaryotes.
Apoptosis. During programmed cell death the ICE/ced-3 family of proteases (one of the interleukin-1β-converter proteases) degrades actin into two fragments in vivo; one of the fragments is 15 kDa and the other 31 kDa. This represents one of the mechanisms involved in destroying cell viability that form the basis of apoptosis. The protease calpain has also been shown to be involved in this type of cell destruction, and the use of calpain inhibitors has been shown to decrease actin proteolysis and the degradation of DNA (another of the characteristic elements of apoptosis). On the other hand, the stress-induced triggering of apoptosis causes the reorganization of the actin cytoskeleton (which also involves its polymerization), giving rise to structures called stress fibers; this is activated by the MAP kinase pathway.
Cellular adhesion and development. The adhesion between cells is a characteristic of multicellular organisms that enables tissue specialization and therefore increases cell complexity. Adhesion of cell epithelia involves the actin cytoskeleton in each of the joined cells as well as cadherins acting as extracellular elements, with the connection between the two mediated by catenins. Interfering with actin dynamics has repercussions for an organism's development; in fact, actin is such a crucial element that systems of redundant genes are available. For example, if the α-actinin or gelation factor gene is removed in Dictyostelium, individuals do not show an anomalous phenotype, possibly because each of the proteins can perform the function of the other. However, the development of double mutants that lack both gene types is affected.
Gene expression modulation. Actin's state of polymerization affects the pattern of gene expression. In 1997, it was discovered that cytochalasin D-mediated depolymerization in Schwann cells causes a specific pattern of expression for the genes involved in the myelination of this type of nerve cell. F-actin has been shown to modify the transcriptome in some of the life stages of unicellular organisms, such as the fungus Candida albicans. In addition, proteins that are similar to actin play a regulatory role during spermatogenesis in mice and, in yeasts, actin-like proteins are thought to play a role in the regulation of gene expression. In fact, actin is capable of acting as a transcription initiator when it reacts with a type of nuclear myosin that interacts with RNA polymerases and other enzymes involved in the transcription process.
Stereocilia dynamics. Some cells develop fine filiform outgrowths on their surface that have a mechanosensory function. For example, this type of organelle is present in the Organ of Corti, which is located in the ear. The main characteristic of these structures is that their length can be modified. The molecular architecture of the stereocilia includes a paracrystalline actin core in dynamic equilibrium with the monomers present in the adjacent cytosol. Type VI and VIIa myosins are present throughout this core, while myosin XVa is present in its extremities in quantities that are proportional to the length of the stereocilia.
Intrinsic chirality. Actomyosin networks have been implicated in generating an intrinsic chirality in individual cells. Cells grown out on chiral surfaces can show a directional left/right bias that is actomyosin dependent.
Structure
Monomeric actin, or G-actin, has a globular structure consisting of two lobes separated by a deep cleft. The bottom of the cleft represents the "ATPase fold", a structure conserved among ATP and GTP-binding proteins that binds to a magnesium ion and a molecule of ATP. Binding of ATP or ADP is required to stabilize each actin monomer; without one of these molecules bound, actin quickly becomes denatured.
The X-ray crystallography model of actin that was produced by Kabsch from the striated muscle tissue of rabbits is the most commonly used in structural studies, as it was the first to be purified. The G-actin crystallized by Kabsch is approximately 67 × 40 × 37 Å in size, has a molecular mass of 41,785 Da and an estimated isoelectric point of 4.8. Its net charge at pH 7 is −7.
Primary structure
Elzinga and co-workers first determined the complete peptide sequence for this type of actin in 1973, with later work by the same author adding further detail to the model. It contains 374 amino acid residues. Its N-terminus is highly acidic and starts with an acetylated aspartate in its amino group, while its C-terminus is alkaline and is formed by a phenylalanine preceded by a cysteine, which has a degree of functional importance. Both extremes are in close proximity within the I-subdomain. An anomalous Nτ-methylhistidine is located at position 73.
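The sequence length given here can be checked against the 41,785 Da molecular mass quoted earlier (a back-of-the-envelope consistency check, not a measurement from the article):

```python
# Divide the quoted molecular mass by the number of residues; typical
# proteins average roughly 110 Da per amino-acid residue, so the quoted
# figures are mutually consistent.
mass_da = 41_785
n_residues = 374

mean_residue_mass = mass_da / n_residues
print(f"mean residue mass: {mean_residue_mass:.1f} Da")  # ~111.7 Da
```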
Tertiary structure — domains
The tertiary structure is formed by two domains known as the large and the small, which are separated by a cleft centred around the location of the bond with ATP-ADP+Pi. Below this there is a deeper notch called a "groove". In the native state, despite their names, both have a comparable depth.
The normal convention in topological studies means that a protein is shown with the biggest domain on the left-hand side and the smallest domain on the right-hand side. In this position the smaller domain is in turn divided into two: subdomain I (lower position, residues 1–32, 70–144, and 338–374) and subdomain II (upper position, residues 33–69). The larger domain is also divided in two: subdomain III (lower, residues 145–180 and 270–337) and subdomain IV (higher, residues 181–269). The exposed areas of subdomains I and III are referred to as the "barbed" ends, while the exposed areas of subdomains II and IV are termed the "pointed" ends. This nomenclature refers to the fact that, due to the small mass of subdomain II, actin is polar; the importance of this is discussed below in the discussion on assembly dynamics. Some authors call the subdomains Ia, Ib, IIa, and IIb, respectively.
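The residue ranges listed above can be checked programmatically; the sketch below (illustrative, using only the numbers from this paragraph) confirms that the four subdomains tile the full 374-residue chain with no gaps or overlaps:

```python
# Inclusive residue ranges for the four actin subdomains, as given above.
SUBDOMAINS = {
    "I":   [(1, 32), (70, 144), (338, 374)],
    "II":  [(33, 69)],
    "III": [(145, 180), (270, 337)],
    "IV":  [(181, 269)],
}

# Expand every range and verify the union is exactly residues 1..374.
covered = sorted(r for ranges in SUBDOMAINS.values()
                 for lo, hi in ranges
                 for r in range(lo, hi + 1))
assert covered == list(range(1, 375)), "subdomains should tile residues 1-374"

for name, ranges in SUBDOMAINS.items():
    print(f"subdomain {name}: {sum(hi - lo + 1 for lo, hi in ranges)} residues")
```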
Other important structures
The most notable supersecondary structure is a five-chain β-sheet that is composed of a β-meander and a β-α-β clockwise unit. It is present in both domains, suggesting that the protein arose from gene duplication.
The adenosine nucleotide binding site is located between two beta hairpin-shaped structures pertaining to the I and III domains. The residues that are involved are Asp11-Lys18 and Asp154-His161 respectively.
The divalent cation binding site is located just below that for the adenosine nucleotide. In vivo it is most often formed by Mg2+ or Ca2+, while in vitro it is formed by a chelating structure made up of Lys18 and two oxygens from the nucleotide's α- and β-phosphates. This calcium is coordinated with six water molecules that are retained by the amino acids Asp11, Asp154, and Gln137. They form a complex with the nucleotide that restricts the movements of the so-called "hinge" region, located between residues 137 and 144. This maintains the native form of the protein until its withdrawal denatures the actin monomer. This region is also important because it determines whether the protein's cleft is in the "open" or "closed" conformation.
It is highly likely that there are at least three other centres with a lesser affinity (intermediate) and still others with a low affinity for divalent cations. It has been suggested that these centres may play a role in the polymerization of actin by acting during the activation stage.
There is a structure in subdomain 2 that is called the "D-loop" because it binds with DNase I; it is located between the His40 and Gly48 residues. It has the appearance of a disordered element in the majority of crystals, but it looks like a β-sheet when it is complexed with DNase I. It has been proposed that the key event in polymerization is probably the propagation of a conformational change from the centre of the bond with the nucleotide to this domain, which changes from a loop to a spiral. However, this hypothesis has been refuted by other studies.
F-actin
Under various conditions, G-actin molecules polymerize into longer threads called "filamentous-" or "F-actin". These F-actin threads are typically composed of two helical strands of actin wound around each other, forming a 7 to 9 nanometer wide helix that repeats every 72 nanometers (or every 14 G-actin subunits). In F-actin threads, G-actin molecules are all oriented in the same direction. The two ends of the F-actin thread are distinct from one another. At one end – designated the (−) end – the ATP-binding cleft of the terminal actin molecule is facing outward. At the opposite end – designated (+) – the ATP-binding cleft is buried in the filament, contacting the neighboring actin molecule. As F-actin threads grow, new molecules tend to join at the (+) end of an existing F-actin strand. Conversely, threads tend to shrink by shedding actin monomers from the strand's (−) end.
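The repeat figures above imply a simple per-subunit geometry (an illustrative calculation from the numbers in this paragraph, not an independently measured value):

```python
# From the text: the F-actin helix repeats every 72 nm, spanning 14
# G-actin subunits, which fixes the axial rise contributed per subunit.
repeat_nm = 72.0
subunits_per_repeat = 14

rise_per_subunit_nm = repeat_nm / subunits_per_repeat
subunits_per_micron = 1000.0 / rise_per_subunit_nm

print(f"axial rise per subunit: {rise_per_subunit_nm:.2f} nm")     # ~5.14 nm
print(f"subunits in 1 um of filament: {subunits_per_micron:.0f}")  # ~194
```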
Some proteins, such as cofilin, appear to increase the angle of turn, but again this could be interpreted as the establishment of different structural states. These could be important in the polymerization process.
There is less agreement regarding measurements of the turn radius and filament thickness: while the first models assigned a length of 25 Å, current X-ray diffraction data, backed up by cryo-electron microscopy, suggest a length of 23.7 Å. These studies have shown the precise contact points between monomers. Some are formed with units of the same chain, between the "barbed" end of one monomer and the "pointed" end of the next, while the monomers in adjacent chains make lateral contact through projections from subdomain IV, with the most important projections being those formed by the C-terminus and the hydrophobic link formed by three bodies involving residues 39–42, 201–203, and 286. This model suggests that a filament is formed by monomers in a "sheet" formation, in which the subdomains turn about themselves; this form is also found in the bacterial actin homologue MreB.
The terms "pointed" and "barbed" referring to the two ends of the microfilaments derive from their appearance under transmission electron microscopy when samples are examined following a preparation technique called "decoration". This method consists of the addition of myosin S1 fragments to tissue that has been fixed with tannic acid. This myosin forms polar bonds with actin monomers, giving rise to a configuration that looks like an arrow with feather fletchings along its shaft, where the shaft is the actin and the fletchings are the myosin. Following this logic, the end of the microfilament that does not have any protruding myosin is called the point of the arrow (− end) and the other end is called the barbed end (+ end).
An S1 fragment is composed of the head and neck domains of myosin II. Under physiological conditions, G-actin (the monomer form) is transformed into F-actin (the polymer form) by ATP, where the role of ATP is essential.
The helical F-actin filament found in muscles also contains a tropomyosin molecule, a 40 nanometre long protein that is wrapped around the F-actin helix. During the resting phase the tropomyosin covers the actin's active sites so that the actin-myosin interaction cannot take place and produce muscular contraction. There are other protein molecules bound to the tropomyosin thread; these are the troponins, a complex of three polypeptides: troponin I, troponin T, and troponin C.
F-actin is both strong and dynamic. Unlike other polymers, such as DNA, whose constituent elements are bound together with covalent bonds, the monomers of actin filaments are assembled by weaker bonds. The lateral bonds with neighbouring monomers resolve this anomaly, which in theory should weaken the structure as they can be broken by thermal agitation. In addition, the weak bonds give the advantage that the filament ends can easily release or incorporate monomers. This means that the filaments can be rapidly remodelled and can change cellular structure in response to an environmental stimulus. This property, along with the biochemical mechanism by which it is brought about, is known as the "assembly dynamic".
Folding
Actin can spontaneously acquire a large part of its tertiary structure. However, the way it acquires its fully functional form from its newly synthesized native form is special and almost unique in protein chemistry. The reason for this special route could be the need to avoid the presence of incorrectly folded actin monomers, which could be toxic as they can act as inefficient polymerization terminators. Nevertheless, correct folding is key to the stability of the cytoskeleton and is also essential for coordinating the cell cycle.
CCT is required in order to ensure that folding takes place correctly. CCT is a group II chaperonin, a large protein complex that assists in the folding of other proteins. CCT is formed of a double ring of eight different subunits (hetero-octameric) and it differs from group I chaperonins like GroEL, which is found in Eubacteria and in eukaryotic organelles, as it does not require a co-chaperone to act as a lid over the central catalytic cavity. Substrates bind to CCT through specific domains. It was initially thought that it only bound with actin and tubulin, although recent immunoprecipitation studies have shown that it interacts with a large number of polypeptides, which possibly function as substrates. It acts through ATP-dependent conformational changes that on occasion require several rounds of liberation and catalysis in order to complete a reaction.
In order to successfully complete their folding, both actin and tubulin need to interact with another protein called prefoldin, which is a heterohexameric complex (formed by six distinct subunits), in an interaction that is so specific that the molecules have coevolved. Actin complexes with prefoldin while it is still being formed, when it is approximately 145 amino acids long, specifically through those at the N-terminus.
Different recognition sub-units are used for actin and tubulin, although there is some overlap. In actin the subunits that bind to prefoldin are probably PFD3 and PFD4, which bind in two places: one between residues 60–79 and the other between residues 170–198. The actin is recognized, loaded, and delivered to the cytosolic chaperonin (CCT) in an open conformation by the inner end of prefoldin's "tentacles". The contact when actin is delivered is so brief that a ternary complex is not formed, and the prefoldin is immediately freed.
The CCT then causes actin's sequential folding by forming bonds with its subunits rather than simply enclosing it in its cavity. This is why it possesses specific recognition areas in its apical β-domain. The first stage in the folding consists of the recognition of residues 245–249. Next, other determinants establish contact. Both actin and tubulin bind to CCT in open conformations in the absence of ATP. In actin's case, two subunits are bound during each conformational change, whereas for tubulin binding takes place with four subunits. Actin has specific binding sequences, which interact with the δ and β-CCT subunits or with δ-CCT and ε-CCT. After AMP-PNP is bound to CCT the substrates move within the chaperonin's cavity. It also seems that in the case of actin, the CAP protein is required as a possible cofactor in actin's final folding states.
The exact manner by which this process is regulated is still not fully understood, but it is known that the protein PhLP3 (a protein similar to phosducin) inhibits CCT's activity through the formation of a ternary complex.
ATPase's catalytic mechanism
Actin is an ATPase, which means that it is an enzyme that hydrolyzes ATP. This group of enzymes is characterised by slow reaction rates. This ATPase is "activated" by assembly: its rate increases some 40,000-fold when the actin forms part of a filament. A reference value for the rate of hydrolysis under ideal conditions is around 0.3 s−1. Following hydrolysis, the Pi remains bound to the actin next to the ADP for a long time, until it is cooperatively released from the interior of the filament.
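A quick back-of-envelope calculation makes the scale of this activation concrete. Only the 0.3 s−1 rate and the 40,000-fold factor come from the figures above; the first-order half-lives derived from them are illustrative only.

```python
# Back-of-envelope comparison of actin's two ATP hydrolysis regimes,
# using the figures quoted above (~0.3 s^-1 within a filament, and a
# ~40,000-fold slower rate for the free monomer).
import math

k_filament = 0.3                  # s^-1, hydrolysis rate in F-actin
k_monomer = k_filament / 40_000   # s^-1, hydrolysis rate in free G-actin

# First-order half-life: t_1/2 = ln(2) / k
half_life_filament = math.log(2) / k_filament   # a couple of seconds
half_life_monomer = math.log(2) / k_monomer     # on the order of a day
```

In other words, an ATP bound to a filament subunit is hydrolyzed within seconds, while the same nucleotide on a free monomer would persist for roughly a day.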
The exact molecular details of the catalytic mechanism are still not fully understood. Although there is much debate on the issue, it seems certain that a "closed" conformation is required for the hydrolysis of ATP, and it is thought that the residues involved in the process move to the appropriate distance. The glutamic acid Glu137, located in subdomain 1, is one of the key residues. Its function is to bind the water molecule that performs a nucleophilic attack on the ATP's γ-phosphate bond, while the nucleotide is strongly bound to subdomains 3 and 4. The slowness of the catalytic process is due to the large distance and skewed position of the water molecule in relation to the reactant. It is highly likely that the conformational change produced by the rotation of the domains between actin's G and F forms moves Glu137 closer, allowing hydrolysis. This model suggests that polymerization and ATPase function would be immediately decoupled. The "open" to "closed" transformation between G and F forms, its implications for the relative motion of several key residues, and the formation of water wires have been characterized in molecular dynamics and QM/MM simulations.
Assembly dynamics
Actin filaments are often rapidly assembled and disassembled, allowing them to generate force and support cell movement. Assembly classically occurs in three steps. First, the "nucleation phase", in which two to three G-actin molecules slowly join to form a small oligomer that will nucleate further growth. Second, the "elongation phase", when the actin filament rapidly grows by the addition of many actin molecules to both ends. As the filament grows, actin molecules are added to the (+) end of the filament around 10 times faster than to the (−) end, and so filaments tend to primarily grow at the (+) end. Third, the "steady-state phase", where an equilibrium is reached as actin molecules join and leave the filament at the same rate, maintaining the filament's length. While the filament's length remains constant in the steady-state phase, new molecules are constantly being added to the (+) end and falling off the (−) end, a phenomenon called "treadmilling" as a given actin molecule would appear to move along the strand. In isolation, whether a filament will grow or shrink, and how quickly, are determined by the concentration of G-actin around the filament; however, in cells, the dynamics of actin filaments are heavily influenced by various actin-binding proteins.
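The dependence of growth on G-actin concentration, and the treadmilling regime, can be sketched numerically. The rate constants below are approximate literature values for ATP-actin at the two filament ends; they are assumptions for illustration, not figures from this article.

```python
# Sketch of actin filament end kinetics and treadmilling.
# Rate constants are approximate literature values for ATP-actin
# (assumptions for illustration only).
K_ON_BARBED, K_OFF_BARBED = 11.6, 1.4    # uM^-1 s^-1 and s^-1, (+) end
K_ON_POINTED, K_OFF_POINTED = 1.3, 0.8   # uM^-1 s^-1 and s^-1, (-) end

def end_rate(c, k_on, k_off):
    """Net subunit addition rate (subunits/s) at one filament end,
    given the free G-actin concentration c in uM."""
    return k_on * c - k_off

def critical_concentration(k_on, k_off):
    """G-actin concentration at which addition exactly balances loss."""
    return k_off / k_on

cc_barbed = critical_concentration(K_ON_BARBED, K_OFF_BARBED)    # ~0.12 uM
cc_pointed = critical_concentration(K_ON_POINTED, K_OFF_POINTED) # ~0.62 uM

# Between the two critical concentrations the (+) end grows while the
# (-) end shrinks: the filament "treadmills" at constant length.
c = 0.3  # uM free G-actin, chosen between the two critical concentrations
barbed_growth = end_rate(c, K_ON_BARBED, K_OFF_BARBED)     # positive
pointed_growth = end_rate(c, K_ON_POINTED, K_OFF_POINTED)  # negative
```

Above both critical concentrations both ends grow (the elongation phase); the window between the two critical concentrations is where steady-state treadmilling occurs.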
Actin binding proteins
The actin cytoskeleton in vivo is not composed exclusively of actin; other proteins are required for its formation, maintenance, and function. These proteins are called actin-binding proteins and they are involved in actin's polymerization, depolymerization, stability, and organisation. The diversity of these proteins is such that actin is thought to be the protein that takes part in the greatest number of protein-protein interactions.
The nucleation of new actin filaments – the rate-limiting step in actin polymerization – is aided by actin-nucleating proteins such as formins (like formin-2) and the Arp2/3 complex. Formins help to nucleate long actin filaments. They bind two free actin-ATP molecules, bringing them together. Then as the filament begins to grow, formin moves along the (+) end of the growing filament, all the while recruiting actin-binding proteins that promote filament growth, and excluding capping proteins that would block filament extension. Branches in actin filaments are typically nucleated by the Arp2/3 complex in concert with nucleation promoting factors. Nucleation promoting factors bind two free G-actin molecules, then recruit and activate the Arp2/3 complex. The activated Arp2/3 complex attaches to an existing actin filament, and uses the two bound G-actin molecules to nucleate a new actin filament branching off of the old one at a 70° angle.
As filaments grow, the pool of available G-actin molecules is managed by G-actin-binding proteins such as profilin and thymosin β-4. Profilin ensures a supply of available actin-ATP by binding to ADP-bound G-actin and promoting the exchange of ADP for ATP. Profilin's binding to the actin molecule physically blocks its addition to a filament's (−) end, but permits it to join the (+) end. Once the actin-ATP has joined the filament, profilin releases it. As formins promote the nucleation and extension of new actin filaments, they recruit profilin to the area, increasing the local concentration of actin-ATP to boost filament growth. In contrast, thymosin β-4 binds and sequesters actin-ATP, preventing it from joining a microfilament.
Once an actin fiber is established, the dynamics of its growth or collapse are influenced by numerous proteins. Existing strands can be interrupted by filament cleaving proteins, such as cofilin and gelsolin. Cofilin binds along two actin-ADP molecules in a filament, forcing a movement that destabilizes the filament and causes it to break. Gelsolin inserts itself between actin molecules in a filament, disrupting the filament. After the filament breaks, gelsolin remains attached to the new (+) end, preventing it from growing, thus forcing its disassembly.
Other proteins bind to the ends of actin filaments, stabilizing them. These are called "capping proteins" and include CapZ and tropomodulin. CapZ binds the (+) end of a filament, preventing further addition or loss of actin from that end. Tropomodulin binds to a filament's (−) end, again preventing addition or loss of molecules at that end. Tropomodulin is typically found in cells that require extremely stable actin filaments, such as those in muscle and red blood cells.
These actin binding proteins are typically regulated by various cellular signals to control actin assembly dynamics in different cellular locations. Formins, for example, are typically folded in an inactive conformation until they're activated by the binding of the small GTPase Rho. Actin branching at the cell membrane is important for cell movement, and so the plasma membrane lipid PIP2 activates the nucleation promoting factor WASp and inhibits CapZ. WASp is also activated by the small GTPase Cdc42, while another nucleation promoting factor WAVE is activated by the GTPase Rac1.
Genetics
Although most yeasts have only a single actin gene, higher eukaryotes, in general, express several isoforms of actin encoded by a family of related genes. Mammals have at least six actin isoforms coded by separate genes, which are divided into three classes – alpha, beta, and gamma – according to their isoelectric points. In general, alpha actins are found in muscle (α-skeletal, α-aortic smooth, α-cardiac), whereas beta and gamma isoforms are prominent in non-muscle cells (β-cytoplasmic, γ1-cytoplasmic, γ2-enteric smooth). Although the amino acid sequences and in vitro properties of the isoforms are highly similar, these isoforms cannot completely substitute for one another in vivo. Plants contain more than 60 actin genes and pseudogenes.
The typical actin gene has an approximately 100-nucleotide 5' UTR, a 1200-nucleotide translated region, and a 200-nucleotide 3' UTR. The majority of actin genes are interrupted by introns, with up to six introns in any of 19 well-characterised locations. The high conservation of the family makes actin the favoured model for studies comparing the introns-early and introns-late models of intron evolution.
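As a rough consistency check on these figures, a ~1200-nucleotide translated region corresponds to about 400 codons, which fits an actin monomer of roughly 375 amino acids plus a stop codon (the monomer length is general background knowledge, not a figure from this section):

```python
# Rough consistency check on the gene-structure figures above.
CODING_NT = 1200                 # approximate length of the translated region
codons = CODING_NT // 3          # nucleotide triplets
max_residues = codons - 1        # one codon is the stop codon
# ~399 residues of coding capacity comfortably covers actin's ~375 amino acids.
```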
Evolution
Actin and closely related proteins are present in all organisms, suggesting the common ancestor of all life on Earth had actin. Actin is one of the most conserved proteins throughout the evolution of eukaryotes. The sequences of actin proteins from animals and amoebae are 80% identical despite being separated by approximately one billion years of evolution. Many unicellular eukaryotes have a single actin gene, while multicellular eukaryotes often have several closely related genes that serve specialized functions. Humans have six; plants have 10 or more. In addition to actin, eukaryotes have a large family of actin-related proteins, or "Arps", that share a common ancestor with actin and are called Arp1–Arp11, with Arp1 the most closely related to actin, and Arp11 the least.
Bacteria encode three types of actin: MreB influences cell shape, FtsA cell division, and ParM separation of large plasmids. Some archaea have a bacteria-like MreB gene, while others have an actin gene that more closely resembles eukaryote actin.
The eukaryotic cytoskeletons of organisms across all taxonomic groups contain similar components, such as actin and tubulin. For example, the protein coded by the ACTG2 gene in humans is completely equivalent to the homologues present in rats and mice, even though at the nucleotide level the similarity decreases to 92%. However, there are major differences from the equivalents in prokaryotes (FtsZ and MreB), where the similarity between nucleotide sequences is between 40 and 50% among different bacterial and archaeal species. Some authors suggest that the ancestral protein that gave rise to the model eukaryotic actin resembles the proteins present in modern bacterial cytoskeletons.
Some authors point out that the behaviour of actin, tubulin, and histone, a protein involved in the stabilization and regulation of DNA, are similar in their ability to bind nucleotides and to take advantage of Brownian motion. It has also been suggested that they all have a common ancestor. Therefore, evolutionary processes resulted in the diversification of ancestral proteins into the varieties present today, conserving, among others, actins as efficient molecules that were able to tackle essential ancestral biological processes, such as endocytosis.
The Arp2/3 complex is widely found in all eukaryotic organisms.
Equivalents in prokaryotes
The bacterial cytoskeleton contains proteins that are highly similar to actin monomers and polymers. The bacterial protein MreB polymerizes into thin non-helical filaments and occasionally into helical structures similar to F-actin. Furthermore, its crystal structure is very similar to that of G-actin (in terms of its three-dimensional conformation), and there are even similarities between MreB protofilaments and F-actin. The bacterial cytoskeleton also contains the FtsZ proteins, which are similar to tubulin.
Bacteria therefore possess a cytoskeleton with homologous elements to actin (for example, MreB, AlfA, ParM, FtsA, and MamK), even though the amino acid sequence of these proteins diverges from that present in animal cells. However, such proteins have a high degree of structural similarity to eukaryotic actin. The highly dynamic microfilaments formed by the aggregation of MreB and ParM are essential to cell viability and they are involved in cell morphogenesis, chromosome segregation, and cell polarity. ParM is an actin homologue that is coded in a plasmid and it is involved in the regulation of plasmid DNA. ParMs from different bacterial plasmids can form astonishingly diverse helical structures comprising two or four strands to maintain faithful plasmid inheritance.
In archaea the homologue Ta0583 is even more similar to the eukaryotic actins.
Molecular pathology
The majority of mammals possess six different actin genes. Of these, two code for the cytoskeleton (ACTB and ACTG1) while the other four are involved in skeletal striated muscle (ACTA1), smooth muscle tissue (ACTA2), intestinal muscles (ACTG2) and cardiac muscle (ACTC1). The actin in the cytoskeleton is involved in the pathogenic mechanisms of many infectious agents, including HIV. The vast majority of the mutations that affect actin are point mutations that have a dominant effect, with the exception of six mutations involved in nemaline myopathy. This is because in many cases the mutant actin monomer acts as a "cap", preventing the elongation of F-actin.
Pathology associated with ACTA1
ACTA1 is the gene that codes for the α-isoform of actin that is predominant in human skeletal striated muscles, although it is also expressed in heart muscle and in the thyroid gland. Its DNA sequence consists of seven exons that produce five known transcripts. The majority of the disease-causing mutations in this gene are point mutations causing substitution of amino acids. The mutations are in many cases associated with a phenotype that determines the severity and the course of the affliction.
The mutation alters the structure and function of skeletal muscles, producing one of three forms of myopathy: type 3 nemaline myopathy, congenital myopathy with an excess of thin myofilaments (CM) and congenital myopathy with fibre type disproportion (CMFTD). Mutations have also been found that produce core myopathies. Although their phenotypes are similar, in addition to typical nemaline myopathy some specialists distinguish another type of myopathy called actinic nemaline myopathy. In the latter, clumps of actin form instead of the typical rods. It is important to state that a patient can show more than one of these phenotypes in a biopsy. The most common symptoms consist of a typical facial morphology (myopathic facies), muscular weakness, a delay in motor development and respiratory difficulties. The course of the illness, its severity, and the age at which it appears are all variable, and overlapping forms of myopathy are also found. A symptom of nemaline myopathy is that "nemaline rods" appear in differing places in type 1 muscle fibres. These rods are non-pathognomonic structures that have a similar composition to the Z disks found in the sarcomere.
The pathogenesis of this myopathy is very varied. Many mutations occur in the region of actin's indentation near its nucleotide binding sites, while others occur in Domain 2, or in the areas where interaction occurs with associated proteins. This goes some way towards explaining the great variety of clumps that form in these cases, such as nemaline bodies, intranuclear bodies, or zebra bodies. Changes in actin's folding occur in nemaline myopathy, as well as changes in its aggregation, and there are also changes in the expression of other associated proteins. In some variants where intranuclear bodies are found, the changes in the folding mask the nucleus's protein export signal, so that the accumulation of actin's mutated form occurs in the cell nucleus. On the other hand, it appears that mutations to ACTA1 that give rise to CMFTD have a greater effect on sarcomeric function than on its structure. Recent investigations have tried to resolve this apparent paradox, and suggest that there is no clear correlation between the number of rods and muscular weakness. It appears that some mutations are able to induce a greater apoptosis rate in type II muscle fibres.
In smooth muscle
Two genes code for actin isoforms in smooth muscle tissue:
ACTG2 codes for the largest actin isoform, which has nine exons, one of which, the one located at the 5' end, is not translated. It is a γ-actin that is expressed in the enteric smooth muscle. No mutations to this gene have been found that correspond to pathologies, although microarrays have shown that this protein is more often expressed in cases that are resistant to chemotherapy using cisplatin.
ACTA2 codes for an α-actin located in smooth muscle, including vascular smooth muscle. It has been noted that mutations in this gene could be responsible for at least 14% of hereditary thoracic aortic aneurysms, particularly Type 6. This is because the mutated variant produces incorrect filament assembly and a reduced capacity for vascular smooth muscle contraction. Degradation of the aortic media has been recorded in these individuals, with areas of disorganization and hyperplasia as well as stenosis of the aorta's vasa vasorum. The number of afflictions that the gene is implicated in is increasing. It has been related to Moyamoya disease, and it seems likely that certain mutations in heterozygosis could confer a predisposition to many vascular pathologies, such as thoracic aortic aneurysm and ischaemic heart disease. The α-actin found in smooth muscle is also an interesting marker for evaluating the progress of liver cirrhosis.
In heart muscle
The ACTC1 gene codes for the α-actin isoform present in heart muscle. It was first sequenced by Hamada and co-workers in 1982, when it was found that it is interrupted by five introns. It was the first of the six genes where alleles were found that were implicated in pathological processes.
A number of structural disorders associated with point mutations of this gene have been described that cause malfunctioning of the heart, such as Type 1R dilated cardiomyopathy and Type 11 hypertrophic cardiomyopathy. Certain defects of the atrial septum have been described recently that could also be related to these mutations.
Two cases of dilated cardiomyopathy have been studied involving a substitution of highly conserved amino acids belonging to the protein domains that bind and intersperse with the Z discs. This has led to the theory that the dilation is produced by a defect in the transmission of contractile force in the myocytes.
The mutations in ACTC1 are responsible for at least 5% of hypertrophic cardiomyopathies. A number of point mutations have also been found:
Mutation E101K: change of net charge and formation of a weak electrostatic link in the actomyosin-binding site.
P166A: interaction zone between actin monomers.
A333P: actin-myosin interaction zone.
Pathogenesis appears to involve a compensatory mechanism: the mutated proteins act like toxins with a dominant effect, decreasing the heart's ability to contract and causing abnormal mechanical behaviour. The hypertrophy, which is usually delayed, is then a consequence of the cardiac muscle's normal response to stress.
Recent studies have discovered ACTC1 mutations that are implicated in two other pathological processes: Infantile idiopathic restrictive cardiomyopathy, and noncompaction of the left ventricular myocardium.
In cytoplasmatic actins
ACTB is a highly complex locus. A number of pseudogenes exist that are distributed throughout the genome, and its sequence contains six exons that can give rise to up to 21 different transcripts by alternative splicing, which are known as the β-actins. Consistent with this complexity, its products are also found in a number of locations and they form part of a wide variety of processes (cytoskeleton, NuA4 histone acetyltransferase complex, cell nucleus), and in addition they are associated with the mechanisms of a great number of pathological processes (carcinomas, juvenile dystonia, infection mechanisms, nervous system malformations and tumour invasion, among others). A new form of actin has been discovered, kappa actin, which appears to substitute for β-actin in processes relating to tumours.
Three pathological processes have so far been discovered that are caused by a direct alteration in gene sequence:
Hemangiopericytoma with t(7;12)(p22;q13) translocations is a rare affliction, in which a translocation causes the fusion of the ACTB gene with GLI1 on chromosome 12.
Juvenile onset dystonia is a rare degenerative disease that affects the central nervous system; in particular, it affects areas of the neocortex and thalamus, where rod-like eosinophilic inclusions are formed. The affected individuals present a phenotype with deformities of the midline, sensory hearing loss and dystonia. It is caused by a point mutation in which the amino acid tryptophan replaces arginine at position 183. This alters actin's interaction with the ADF/cofilin system, which regulates the dynamics of nerve cell cytoskeleton formation.
A dominant point mutation has also been discovered that causes neutrophil granulocyte dysfunction and recurring infections. It appears that the mutation modifies the domain responsible for binding between profilin and other regulatory proteins. Actin's affinity for profilin is greatly reduced in this allele.
The ACTG1 locus codes for the cytosolic γ-actin protein that is responsible for the formation of cytoskeletal microfilaments. It contains six exons, giving rise to 22 different mRNAs, which produce four complete isoforms whose form of expression is probably dependent on the type of tissue they are found in. It also has two different DNA promoters. It has been noted that the sequences translated from this locus and from that of β-actin are very similar to the predicted ones, suggesting a common ancestral sequence that underwent duplication and gene conversion.
In terms of pathology, it has been associated with processes such as amyloidosis, retinitis pigmentosa, infection mechanisms, kidney diseases, and various types of congenital hearing loss.
Six autosomal-dominant point mutations in the sequence have been found to cause various types of hearing loss, particularly sensorineural hearing loss linked to the DFNA 20/26 locus. It seems that they affect the stereocilia of the ciliated cells present in the inner ear's Organ of Corti. β-actin is the most abundant protein found in human tissue, but it is not very abundant in ciliated cells, which explains the location of the pathology. On the other hand, it appears that the majority of these mutations affect the areas involved in linking with other proteins, particularly actomyosin. Some experiments have suggested that the pathological mechanism for this type of hearing loss relates to the F-actin in the mutations being more sensitive to cofilin than normal.
However, although there is no record of any case, it is known that γ-actin is also expressed in skeletal muscles, and although it is present in small quantities, model organisms have shown that its absence can give rise to myopathies.
Other pathological mechanisms
Some infectious agents use actin, especially cytoplasmic actin, in their life cycle. Two basic forms are present in bacteria:
Listeria monocytogenes, some species of Rickettsia, Shigella flexneri and other intracellular germs escape from phagocytic vacuoles by coating themselves with a capsule of actin filaments. L. monocytogenes and S. flexneri both generate a "comet tail" that gives them mobility. Each species exhibits small differences in the molecular polymerization mechanism of its "comet tail". Different displacement velocities have been observed, with Listeria and Shigella found to be the fastest. Many experiments have demonstrated this mechanism in vitro. This indicates that the bacteria are not using a myosin-like protein motor; their propulsion appears to be acquired from the pressure exerted by the polymerization that takes place near the microorganism's cell wall. The bacteria have previously been surrounded by ABPs from the host, and as a minimum the covering contains the Arp2/3 complex, Ena/VASP proteins, cofilin, a buffering protein and nucleation promoters, such as the vinculin complex. Through these movements they form protrusions that reach neighbouring cells, infecting them as well, so that the immune system can only fight the infection through cell-mediated immunity. The movement could be caused by the modification of the curvature and debranching of the filaments. Other species, such as Mycobacterium marinum and Burkholderia pseudomallei, are also capable of localized polymerization of cellular actin to aid their movement through a mechanism centered on the Arp2/3 complex. In addition, the vaccinia virus also uses elements of the actin cytoskeleton for its dissemination.
Pseudomonas aeruginosa is able to form a protective biofilm in order to escape a host organism's defences, especially white blood cells and antibiotics. The biofilm is constructed using DNA and actin filaments from the host organism.
In addition to the previously cited example, actin polymerization is stimulated in the initial steps of the internalization of some viruses, notably HIV, by, for example, inactivating the cofilin complex.
The role that actin plays in the invasion process of cancer cells has still not been determined.
In conditions of high lipoperoxidation, actin has been shown to be post-translationally modified by the lipoperoxidation product 4-hydroxynonenal (4-HNE). This modification prevents the remodelling of the actin cytoskeleton, which is essential for cell motility. Additionally, another functional protein, coronin-1A, which stabilizes F-actin filaments, is also covalently modified by 4-HNE. These modifications may impair immune cell trans-endothelial migration or their phagocytic ability, potentially leading to a decreased immune response in diseases characterized by high oxidative stress, such as malaria, cancer, metabolic syndrome, atherosclerosis, Alzheimer’s disease, rheumatoid arthritis, neurodegenerative diseases, and preeclampsia.
Applications
Actin is used in scientific and technological laboratories as a track for molecular motors such as myosin (either in muscle tissue or outside it) and as a necessary component for cellular functioning. It can also be used as a diagnostic tool, as several of its anomalous variants are related to the appearance of specific pathologies.
Nanotechnology. Actin-myosin systems act as molecular motors that permit the transport of vesicles and organelles throughout the cytoplasm. It is possible that actin could be applied to nanotechnology, as its dynamic ability has been harnessed in a number of experiments, including those carried out in acellular systems. The underlying idea is to use the microfilaments as tracks to guide molecular motors that can transport a given load. That is, actin could be used to define a circuit along which a load can be transported in a more or less controlled and directed manner. In terms of general applications, it could be used for the directed transport of molecules for deposit in determined locations, which would permit the controlled assembly of nanostructures. These attributes could be applied to laboratory processes such as lab-on-a-chip systems, to nanocomponent mechanics and to nanotransformers that convert mechanical energy into electrical energy.
Actin is used as an internal control in western blots to ascertain that equal amounts of protein have been loaded in each lane of the gel. In the blot example shown, 75 μg of total protein was loaded in each well. The blot was reacted with anti-β-actin antibody (for other details of the blot, see the reference).
The use of actin as an internal control is based on the assumption that its expression is practically constant and independent of experimental conditions. By comparing the expression of the gene of interest to that of actin, it is possible to obtain a relative quantity that can be compared between different experiments, provided that the expression of actin is indeed constant. It is worth pointing out that actin does not always have the desired stability in its gene expression.
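In practice this normalization is a simple per-lane ratio. The following is a minimal sketch, assuming densitometry has already produced band intensities in arbitrary units; the function name and the numbers are hypothetical.

```python
# Minimal sketch of loading-control normalization against β-actin.
# Band intensities (arbitrary densitometry units) are hypothetical.
def normalize_to_actin(target_intensity, actin_intensity):
    """Express a target band relative to the actin band in the same lane."""
    return target_intensity / actin_intensity

# Two hypothetical lanes: lane B's target band is brighter, but so is its
# actin band, so the normalized values show equal relative expression.
lane_a = normalize_to_actin(target_intensity=1500, actin_intensity=3000)
lane_b = normalize_to_actin(target_intensity=2100, actin_intensity=4200)
```

The ratio cancels out differences in total protein loaded per lane, which is exactly why the approach fails if actin expression itself varies with the experimental conditions.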
Health. Some alleles of actin cause diseases; for this reason techniques for their detection have been developed. In addition, actin can be used as an indirect marker in surgical pathology: it is possible to use variations in the pattern of its distribution in tissue as a marker of invasion in neoplasia, vasculitis, and other conditions. Further, because actin is closely associated with the apparatus of muscular contraction, its levels in skeletal muscle diminish when these tissues atrophy; it can therefore be used as a marker of this physiological process.
Food technology. It is possible to determine the quality of certain processed foods, such as sausages, by quantifying the amount of actin present in the constituent meat. Traditionally, a method has been used that is based on the detection of 3-methylhistidine in hydrolyzed samples of these products, as this compound is present in actin and in the myosin heavy chain (both are major components of muscle). The generation of this compound in flesh derives from the methylation of histidine residues present in both proteins.
History
Actin was first observed experimentally in 1887 by W.D. Halliburton, who extracted a protein from muscle that 'coagulated' preparations of myosin that he called "myosin-ferment". However, Halliburton was unable to further refine his findings, and the discovery of actin is credited instead to Brunó Ferenc Straub, a young biochemist working in Albert Szent-Györgyi's laboratory at the Institute of Medical Chemistry at the University of Szeged, Hungary.
Following up on the discovery of Ilona Banga & Szent-Györgyi in 1941 that the coagulation only occurs in some myosin extractions and was reversed upon the addition of ATP, Straub identified and purified actin from those myosin preparations that did coagulate. Building on Banga's original extraction method, he developed a novel technique for extracting muscle protein that allowed him to isolate substantial amounts of relatively pure actin, published in 1942. Straub's method is essentially the same as that used in laboratories today. Since Straub's protein was necessary to activate the coagulation of myosin, it was dubbed actin. Realizing that Banga's coagulating myosin preparations contained actin as well, Szent-Györgyi called the mixture of both proteins actomyosin.
The hostilities of World War II meant Szent-Györgyi was unable to publish his lab's work in Western scientific journals. Actin therefore only became well known in the West in 1945, when the lab's paper was published as a supplement to the Acta Physiologica Scandinavica. Straub continued to work on actin, and in 1950 reported that actin contains bound ATP and that, during polymerization of the protein into microfilaments, the nucleotide is hydrolyzed to ADP and inorganic phosphate (which remain bound to the microfilament). Straub suggested that the transformation of ATP-bound actin to ADP-bound actin played a role in muscular contraction. In fact, this is true only in smooth muscle, and was not supported through experimentation until 2001.
The amino acid sequencing of actin was completed by M. Elzinga and co-workers in 1973. The crystal structure of G-actin was solved in 1990 by Kabsch and colleagues. In the same year, a model for F-actin was proposed by Holmes and colleagues following experiments using co-crystallization with different proteins. The procedure of co-crystallization with different proteins was used repeatedly during the following years, until in 2001 the isolated protein was crystallized along with ADP. However, there is still no high-resolution X-ray structure of F-actin. The crystallization of G-actin was possible due to the use of a rhodamine conjugate that impedes polymerization by blocking the amino acid Cys-374. Christine Oriol-Audit, the researcher who in 1977 had first crystallized actin in the absence of actin-binding proteins (ABPs), died in the same year that the isolated protein was crystallized; the crystals she had obtained, however, were too small for the available technology of the time.
Although no high-resolution model of actin's filamentous form currently exists, in 2008 Sawaya's team were able to produce a more exact model of its structure based on multiple crystals of actin dimers that bind in different places. This model has subsequently been further refined by Sawaya and Lorenz. Other approaches such as the use of cryo-electron microscopy and synchrotron radiation have recently allowed increasing resolution and better understanding of the nature of the interactions and conformational changes implicated in the formation of actin filaments.
Research
Chemical inhibitors
A number of natural toxins that interfere with actin's dynamics are widely used in research to study actin's role in biology. Latrunculin – a toxin produced by sponges – binds to G-actin, preventing it from joining microfilaments. Cytochalasin D – produced by certain fungi – serves as a capping factor, binding to the (+) end of a filament and preventing further addition of actin molecules. In contrast, the sponge toxin jasplakinolide promotes the nucleation of new actin filaments by binding and stabilizing pairs of actin molecules. Phalloidin – from the "death cap" mushroom Amanita phalloides – binds to adjacent actin molecules within the F-actin filament, stabilizing the filament and preventing its depolymerization.
Phalloidin is often labelled with fluorescent dyes to visualize actin filaments by fluorescence microscopy.
Dagger-axe

The dagger-axe is a type of polearm that was in use from the Longshan culture until the Han dynasty in China. It consists of a dagger-shaped blade, mounted by its tang to a perpendicular wooden shaft. The earliest dagger-axe blades were made of stone. Later versions used bronze. Jade versions were also made for ceremonial use. There is a variant type with a divided two-part head, consisting of the usual straight blade and a scythe-like blade.
History
The dagger-axe was the first weapon in Chinese history that was not also a dual-use tool for hunting (such as the bow and arrow) or agriculture. Lacking a point for thrusting, the dagger-axe was used in the open where there was enough room to swing its long shaft. Its appearance on the Chinese battlefield predated the use of chariots and the later dominance of tightly packed infantry formations.
During the Zhou dynasty, the ji or Chinese halberd gradually became more common on the battlefield. The ji was developed from the dagger-axe by adding a spear head to the top of the shaft, thereby enabling the weapon to be used with a thrusting motion as well as a swinging motion. Later versions of the ji, starting in the Spring and Autumn period, combined the dagger-axe blade and spear head into a single piece.
By the Han dynasty, the more versatile ji had completely replaced the dagger-axe as a standard infantry weapon. The ji was later replaced by the spear as the primary polearm of the Chinese military. By the Warring States period, large masses of infantry fighting in close ranks using the spear or ji had displaced the small groups of aristocrats on foot or mounted in chariots who had previously dominated the battlefield.
Archaeology
Many excavated dagger-axes are ceremonial jade weapons found in the tombs of aristocrats. These examples are often found within coffins, possibly meant to serve as emblems of authority and power, or in some other ritualistic capacity. Sometimes they are found in a pit dug beneath a coffin, with a victim who was sacrificed to guard the tomb, where they presumably are intended to keep the spirit-guard armed. Normally only the head of a dagger-axe is found, with the shaft absent because of either decomposition or mechanical removal. Although the jade examples do not appear to have been intended for use in actual combat, their morphology closely imitates that of the battle-ready bronze version, including a sharp central ridge which reinforces the blade. Some dagger-axe artifacts are small and curved and could have been intended for use as pendants.
Gallery
Prunus serotina

Prunus serotina, commonly called black cherry, wild black cherry, rum cherry, or mountain black cherry, is a deciduous tree or shrub in the rose family Rosaceae. Despite its common names, it is not very closely related to commonly cultivated cherries. It is found in the Americas.
Description
Prunus serotina is a medium-sized, fast-growing forest tree growing to a height of . The leaves are long, ovate-lanceolate in shape, with finely toothed margins. Fall leaf color is yellow to red. Flowers are small, white and 5-petalled, in racemes long which contain several dozen flowers. The flowers give rise to reddish-black "berries" (drupes) fed on by birds, in diameter.
For about its first decade the bark of a black cherry tree is thin, smooth, and banded, resembling a birch. A mature tree has very broken, dark gray to black bark. The leaves are long and shiny, resembling a sourwood's. An almond-like odour is released when a young twig is scratched and held close to the nose, revealing minute amounts of cyanide compounds produced and stored by the plant as a defense mechanism against herbivores.
Biochemistry
Like apricots and apples, the seeds of black cherries contain cyanogenic glycosides (compounds that can be converted into cyanide), such as amygdalin. When the seed is ground or minced, enzymes are released that break down these compounds and liberate hydrogen cyanide. These enzymes include amygdalin beta-glucosidase, prunasin beta-glucosidase and mandelonitrile lyase. In contrast, although the flesh of black cherries also contains these glycosides, it does not contain the enzymes needed to convert them to cyanide, so the flesh is safe to eat.
The foliage, particularly when wilted, also contains cyanogenic glycosides, which convert to hydrogen cyanide if eaten by animals. Farmers are recommended to remove any trees that fall in a field containing livestock, because the wilted leaves could poison the animals. Removal is not always practical, though, because these trees often grow in very large numbers on farms, taking advantage of the light brought about by mowing and grazing. Entire fencerows can be lined with this poisonous tree, making it difficult to monitor all the branches falling into the grazing area. Black cherry is a leading cause of livestock illness, and grazing animals' access to it should be limited.
Similar species
Black cherry is closely related to the chokecherry (P. virginiana), which tends to be shorter (a shrub or small tree) and has smaller, less glossy leaves.
Subdivisions
Prunus serotina belongs to Prunus subg. Padus and has the following subspecies and varieties:
Prunus serotina subsp. capuli (Cav. ex Spreng.) McVaugh – central + southern Mexico
Prunus serotina subsp. eximia (Small) McVaugh – Texas
Prunus serotina subsp. hirsuta (Elliott) McVaugh (syn. Prunus serotina var. alabamensis (C. Mohr) Little) – southeastern United States
Prunus serotina subsp. serotina – Canada, United States, Mexico, Guatemala
Prunus serotina subsp. virens (Wooton & Standl.) McVaugh – southwestern United States, northern + central Mexico
Prunus serotina var. virens (Wooton & Standl.) McVaugh
Prunus serotina var. rufula (Wooton & Standl.) McVaugh
Distribution and habitat
The species is widespread and common in North America and South America.
Ecology
Prunus serotina is a pioneer species. In the Midwest, it is seen growing mostly in old fields with other sunlight-loving species, such as black walnut, black locust, and hackberry. Gleason and Cronquist (1991) describe P. serotina as "[f]ormerly a forest tree, now abundant as a weed-tree of roadsides, waste land, and forest-margins". It is a moderately long-lived tree, with ages of up to 258 years known, though it is prone to storm damage, with branches breaking easily; any decay resulting, however, only progresses slowly. Fruit production begins around 10 years of age, but does not become heavy until 30 years and continues up to 100 years or more. Germination rates are high, and the seeds are widely dispersed by birds and bears who eat the fruit and then excrete them. Some seeds however may remain in the soil bank and not germinate for as long as three years. All Prunus species have hard seeds that benefit from scarification to germinate (which in nature is produced by passing through an animal's digestive tract). The tree is hardy and can tolerate poor soils and oceanic salt sprays.
P. serotina hosts the caterpillars of more than 450 species of butterflies and moths, including those of the eastern tiger swallowtail (Papilio glaucus), cherry gall azure (Celastrina serotina), viceroy (Limenitis archippus), and red-spotted purple/white admiral (Limenitis arthemis) butterflies and the cecropia (Hyalophora cecropia), promethea (Callosamia promethea), polyphemus (Antheraea polyphemus), small-eyed sphinx (Paonias myops), wild cherry sphinx (Sphinx drupiferarum), banded tussock (Halysidota tessellaris), spotted apatelodes (Apatelodes torrefacta), and band-edged prominent moths.
Deer browse the foliage.
Pests and diseases
Hyphantria cunea can tolerate the cyanide in the plant's leaves owing to the alkaline conditions of its gut. The eastern tent caterpillar defoliates entire groves some springs.
Uses
Prunus serotina subsp. capuli was cultivated in Central and South America well before European contact.
Known as capolcuahuitl in Nahuatl (the source of the capuli epithet), it was an important food in pre-Columbian Mexico. Native Americans ate the fruit. Edible raw, the fruit is also made into jelly, and the juice can be used as a drink mixer, hence the common name 'rum cherry'.
Prunus serotina timber is valuable, perhaps the premier cabinetry timber of the U.S., traded as "cherry". High-quality cherry timber is known for its strong orange hues, tight grain and high price. Low-quality wood, as well as the sapwood, can be more tan. Its density when dried is around .
Prunus serotina was widely introduced into Western and Central Europe as an ornamental tree in the mid-20th century, where it has become locally naturalized. It has acted as an invasive species there, negatively affecting forest community biodiversity and regeneration.
Furry lobster

Furry lobsters (sometimes called coral lobsters) are small decapod crustaceans, closely related to the slipper lobsters and spiny lobsters. The antennae are not as enlarged as in spiny and slipper lobsters, and the body is covered in short hairs, hence the name furry lobster. Although previously considered a family in their own right (Synaxidae Spence Bate, 1881), the furry lobsters were subsumed into the family Palinuridae in 1990. Subsequent molecular phylogenetic studies have confirmed that the furry lobster genera do not form a natural group, both being nested among the spiny lobster genera within the family Palinuridae. The family now includes the two furry lobster genera and ten spiny lobster genera.
Taxonomy
There are two genera, with three species between them:
Palinurellus gundlachi Von Martens, 1878 – Caribbean furry lobster, found in the Caribbean Sea and the Atlantic coast of South America; named for Juan Gundlach
Palinurellus wieneckii (De Man, 1881) – mole lobster, with an Indo-Pacific distribution
Palibythus magnificus P. J. F. Davie, 1990 – musical furry lobster, from the South Pacific (originally described from Samoa)
Vicinal (chemistry)

In chemistry the descriptor vicinal (from Latin vicinus, "neighbor"), abbreviated vic, identifies two functional groups bonded to two adjacent carbon atoms (i.e., in a 1,2-relationship). It may arise from vicinal difunctionalization.
Relation of atoms in a molecule
For example, the molecule 2,3-dibromobutane carries two vicinal bromine atoms and 1,3-dibromobutane does not. Mostly, the use of the term vicinal is restricted to two identical functional groups.
Likewise in a gem-dibromide the prefix gem, an abbreviation of geminal, signals that both bromine atoms are bonded to the same carbon atom (i.e., in a 1,1-relationship). For example, 1,1-dibromobutane is geminal. While comparatively less common, the term hominal has been suggested as a descriptor for groups in a 1,3-relationship.
Like other descriptors, such as syn, anti, exo or endo, the description vicinal helps explain how different parts of a molecule are related to each other either structurally or spatially. The vicinal adjective is sometimes restricted to those molecules with two identical functional groups. The use of the term can also be extended to substituents on aromatic rings.
1H-NMR spectroscopy
In 1H-NMR spectroscopy, the coupling of two hydrogen atoms on adjacent carbon atoms is called vicinal coupling. The coupling constant 3J represents coupling of vicinal hydrogen atoms because they couple through three bonds. Depending on the other substituents, the vicinal coupling constant is typically a value between 0 and +20 Hz. The dependence of the vicinal coupling constant on the dihedral angle is described by the Karplus relation.
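As a rough sketch of how the Karplus relation is applied in practice (the A, B, C constants below are generic textbook-style values, not ones fitted to any specific system, since the real values depend on the substituents):

```python
import math

def karplus_3j(phi_deg, a=7.0, b=-1.0, c=5.0):
    """Vicinal coupling constant 3J (Hz) from the Karplus relation.

    3J = A*cos^2(phi) + B*cos(phi) + C, where phi is the H-C-C-H
    dihedral angle. A, B and C are empirical constants; the defaults
    here are illustrative only.
    """
    phi = math.radians(phi_deg)
    return a * math.cos(phi) ** 2 + b * math.cos(phi) + c

# Coupling is largest near 0 and 180 degrees and smallest near 90:
for phi in (0, 60, 90, 120, 180):
    print(f"phi = {phi:3d} deg   3J = {karplus_3j(phi):5.2f} Hz")
```

With these constants the relation gives 11 Hz at 0°, 5 Hz at 90° and 13 Hz at 180°, reproducing the characteristic dihedral-angle dependence used to assign conformations.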
Wind turbine

A wind turbine is a device that converts the kinetic energy of wind into electrical energy. Hundreds of thousands of large turbines, in installations known as wind farms, were generating over 650 gigawatts of power, with 60 GW added each year. Wind turbines are an increasingly important source of intermittent renewable energy, and are used in many countries to lower energy costs and reduce reliance on fossil fuels. One study claimed that wind had the "lowest relative greenhouse gas emissions, the least water consumption demands and the most favorable social impacts" compared to photovoltaic, hydro, geothermal, coal and gas energy sources.
Smaller wind turbines are used for applications such as battery charging and remote devices such as traffic warning signs. Larger turbines can contribute to a domestic power supply while selling unused power back to the utility supplier via the electrical grid.
Wind turbines are manufactured in a wide range of sizes, with either horizontal or vertical axes, though horizontal is most common.
History
The windwheel of Hero of Alexandria (10–70 CE) marks one of the first recorded instances of wind powering a machine. However, the first known practical wind power plants were built in Sistan, an Eastern province of Persia (now Iran), from the 7th century. These "Panemone" were vertical axle windmills, which had long vertical drive shafts with rectangular blades. Made of six to twelve sails covered in reed matting or cloth material, these windmills were used to grind grain or draw up water, and were used in the gristmilling and sugarcane industries.
Wind power first appeared in Europe during the Middle Ages. The first historical records of their use in England date to the 11th and 12th centuries; there are reports of German crusaders taking their windmill-making skills to Syria around 1190. By the 14th century, Dutch windmills were in use to drain areas of the Rhine delta. Advanced wind turbines were described by Croatian inventor Fausto Veranzio in his book Machinae Novae (1595). He described vertical axis wind turbines with curved or V-shaped blades.
The first electricity-generating wind turbine was installed by the Austrian Josef Friedländer at the Vienna International Electrical Exhibition in 1883. It was a Halladay windmill for driving a dynamo. Friedländer's diameter Halladay "wind motor" was supplied by U.S. Wind Engine & Pump Co. of Batavia, Illinois. The windmill drove a dynamo at ground level that fed electricity into a series of batteries. The batteries powered various electrical tools and lamps, as well as a threshing machine. Friedländer's windmill and its accessories were prominently installed at the north entrance to the main exhibition hall ("Rotunde") in the Vienna Prater.
In July 1887, Scottish academic James Blyth installed a battery-charging machine to light his holiday home in Marykirk, Scotland. Some months later, American inventor Charles F. Brush was able to build the first automatically operated wind turbine after consulting local University professors and his colleagues Jacob S. Gibbs and Brinsley Coleberd and successfully getting the blueprints peer-reviewed for electricity production. Although Blyth's turbine was considered uneconomical in the United Kingdom, electricity generation by wind turbines was more cost effective in countries with widely scattered populations.
In Denmark by 1900, there were about 2500 windmills for mechanical loads such as pumps and mills, producing an estimated combined peak power of about 30 megawatts (MW). The largest machines were on towers with four-bladed diameter rotors. By 1908, there were 72 wind-driven electric generators operating in the United States from 5 kilowatts (kW) to 25 kW. Around the time of World War I, American windmill makers were producing 100,000 farm windmills each year, mostly for water-pumping.
By the 1930s, use of wind turbines in rural areas was declining as the distribution system extended to those areas.
A forerunner of modern horizontal-axis wind generators was in service at Yalta, USSR, in 1931. This was a 100 kW generator on a tower, connected to the local 6.3 kV distribution system. It was reported to have an annual capacity factor of 32 percent, not much different from current wind machines.
In the autumn of 1941, the first megawatt-class wind turbine was synchronized to a utility grid in Vermont. The Smith–Putnam wind turbine only ran for about five years before one of the blades snapped off. The unit was not repaired, because of a shortage of materials during the war.
The first utility grid-connected wind turbine to operate in the UK was built by John Brown & Company in 1951 in the Orkney Islands.
In the early 1970s, however, anti-nuclear protests in Denmark spurred artisan mechanics to develop microturbines of 22 kW despite declines in the industry. Organizing owners into associations and co-operatives led to the lobbying of the government and utilities and provided incentives for larger turbines throughout the 1980s and later. Local activists in Germany, nascent turbine manufacturers in Spain, and large investors in the United States in the early 1990s then lobbied for policies that stimulated the industry in those countries.
It has been argued that expanding the use of wind power will lead to increasing geopolitical competition over critical materials for wind turbines, such as rare earth elements neodymium, praseodymium, and dysprosium. However, this perspective has been critically dismissed for failing to relay how most wind turbines do not use permanent magnets and for underestimating the power of economic incentives for the expanded production of these minerals.
Wind power density
Wind Power Density (WPD) is a quantitative measure of wind energy available at any location. It is the mean annual power available per square meter of swept area of a turbine, and is calculated for different heights above ground. Calculation of wind power density includes the effect of wind velocity and air density.
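Since WPD combines wind velocity and air density, the calculation reduces to averaging ½ρv³ over wind-speed measurements. A minimal sketch, assuming standard sea-level air density (1.225 kg/m³) and invented sample speeds:

```python
def wind_power_density(speeds, rho=1.225):
    """Mean wind power density (W/m^2) from wind-speed samples (m/s).

    WPD = mean over samples of 0.5 * rho * v^3, i.e. the average power
    in the wind per square metre of swept area. rho = 1.225 kg/m^3 is
    standard sea-level air density; density falls with altitude.
    """
    return sum(0.5 * rho * v ** 3 for v in speeds) / len(speeds)

# Because of the v^3 term, gusty wind carries more power than steady
# wind with the same mean speed:
steady = [8.0, 8.0, 8.0, 8.0]
gusty = [4.0, 12.0, 4.0, 12.0]          # same mean speed, 8 m/s
print(wind_power_density(steady))        # ~313.6 W/m^2
print(wind_power_density(gusty))         # ~548.8 W/m^2
```

The v³ dependence is why WPD must be computed from the full distribution of measured speeds rather than from the annual mean speed alone.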
Wind turbines are classified by the wind speed they are designed for, from class I to class III, with A to C referring to the turbulence intensity of the wind.
Efficiency
Conservation of mass requires that the mass of air entering and exiting a turbine must be equal. Likewise, the conservation of energy requires the energy given to the turbine from incoming wind to be equal to that of the combination of the energy in the outgoing wind and the energy converted to electrical energy. Since outgoing wind will still possess some kinetic energy, there must be a maximum proportion of the input energy that is available to be converted to electrical energy. Accordingly, Betz's law gives the maximal achievable extraction of wind power by a wind turbine, known as Betz's coefficient, as 16/27 (59.3%) of the rate at which the kinetic energy of the air arrives at the turbine.
The maximum theoretical power output of a wind machine is thus 16/27 times the rate at which kinetic energy of the air arrives at the effective disk area of the machine. If the effective area of the disk is A, and the wind velocity v, the maximum theoretical power output P is:

P = (16/27) · ½ ρ A v³,

where ρ is the air density.
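A minimal sketch of this Betz-limited power calculation, assuming sea-level air density and an invented rotor size (neither figure refers to a specific turbine):

```python
import math

BETZ_LIMIT = 16 / 27  # ~0.593, max fraction of wind power extractable

def max_power(diameter_m, wind_speed_ms, rho=1.225):
    """Betz-limited power (W) for a horizontal-axis rotor.

    P = (16/27) * 0.5 * rho * A * v^3, where A is the swept disk area.
    rho = 1.225 kg/m^3 assumes sea-level air.
    """
    area = math.pi * (diameter_m / 2) ** 2
    return BETZ_LIMIT * 0.5 * rho * area * wind_speed_ms ** 3

# A 100 m rotor in a 12 m/s wind: Betz caps extraction at ~4.9 MW.
print(f"{max_power(100, 12) / 1e6:.2f} MW")
```

Note the cubic dependence on wind speed: doubling the wind speed multiplies the available power by eight, which is why siting matters so much.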
Wind-to-rotor efficiency (including rotor blade friction and drag) are among the factors affecting the final price of wind power.
Further inefficiencies, such as gearbox, generator, and converter losses, reduce the power delivered by a wind turbine. To protect components from undue wear, extracted power is held constant above the rated operating speed as theoretical power increases as the cube of wind speed, further reducing theoretical efficiency. In 2001, commercial utility-connected turbines delivered 75% to 80% of the Betz limit of power extractable from the wind, at rated operating speed.
Efficiency can decrease slightly over time, one of the main reasons being dust and insect carcasses on the blades, which alter the aerodynamic profile and essentially reduce the lift to drag ratio of the airfoil. Analysis of 3128 wind turbines older than 10 years in Denmark showed that half of the turbines had no decrease, while the other half saw a production decrease of 1.2% per year.
In general, more stable and constant weather conditions (most notably wind speed) result in an average of 15% greater efficiency than that of a wind turbine in unstable weather conditions, thus allowing up to a 7% increase in wind speed under stable conditions. This is due to a faster recovery wake and greater flow entrainment that occur in conditions of higher atmospheric stability. However, wind turbine wakes have been found to recover faster under unstable atmospheric conditions as opposed to a stable environment.
Different materials have varying effects on the efficiency of wind turbines. In an Ege University experiment, three wind turbines, each with three blades with a diameter of one meter, were constructed with blades made of different materials: A glass and glass/carbon epoxy, glass/carbon, and glass/polyester. When tested, the results showed that the materials with higher overall masses had a greater friction moment and thus a lower power coefficient.
The air velocity is the major contributor to the turbine efficiency. This is the reason for the importance of choosing the right location. The wind velocity will be high near the shore because of the temperature difference between the land and the ocean. Another option is to place turbines on mountain ridges. The higher the wind turbine will be, the higher the wind velocity on average. A windbreak can also increase the wind velocity near the turbine.
Types
Wind turbines can rotate about either a horizontal or a vertical axis, the former being both older and more common. They can also include blades or be bladeless. Household-size vertical designs produce less power and are less common.
Horizontal axis
Large three-bladed horizontal-axis wind turbines (HAWT) with the blades upwind of the tower (blades facing the incoming wind) produce the overwhelming majority of wind power in the world today. These turbines have the main rotor shaft and electrical generator at the top of a tower and must be pointed into the wind. Small turbines are pointed by a simple wind vane, while large turbines generally use a wind sensor coupled with a yaw system. Most have a gearbox, which turns the slow rotation of the blades into a quicker rotation that is more suitable to drive an electrical generator. Some turbines use a different type of generator suited to slower rotational speed input. These don't need a gearbox and are called direct-drive, meaning they couple the rotor directly to the generator with no gearbox in between. While permanent magnet direct-drive generators can be more costly due to the rare earth materials required, these gearless turbines are sometimes preferred over gearbox generators because they "eliminate the gear-speed increaser, which is susceptible to significant accumulated fatigue torque loading, related reliability issues, and maintenance costs". There is also the pseudo direct drive mechanism, which has some advantages over the permanent magnet direct drive mechanism.
Most horizontal axis turbines have their rotors upwind of the supporting tower. Downwind machines have been built, because they don't need an additional mechanism for keeping them in line with the wind. In high winds, downwind blades can also be designed to bend more than upwind ones, which reduces their swept area and thus their wind resistance, mitigating risk during gales. Despite these advantages, upwind designs are preferred, because the pulsing change in loading from the wind as each blade passes behind the supporting tower can cause damage to the turbine.
Turbines used in wind farms for commercial production of electric power are usually three-bladed. These have low torque ripple, which contributes to good reliability. The blades are usually colored white for daytime visibility by aircraft and range in length from . The size and height of turbines increase year by year. Offshore wind turbines are built up to 8 MW today and have a blade length up to . Designs with 10 to 12 MW were in preparation in 2018, and a "15 MW+" prototype with three blades is planned to be constructed in 2022. The average hub height of horizontal axis wind turbines is 90 meters.
Vertical axis
Vertical-axis wind turbines (or VAWTs) have the main rotor shaft arranged vertically. One advantage of this arrangement is that the turbine does not need to be pointed into the wind to be effective, which is an advantage on a site where the wind direction is highly variable. It is also an advantage when the turbine is integrated into a building because it is inherently less steerable. Also, the generator and gearbox can be placed near the ground, using a direct drive from the rotor assembly to the ground-based gearbox, improving accessibility for maintenance. However, these designs produce much less energy averaged over time, which is a major drawback.
Vertical turbine designs have much lower efficiency than standard horizontal designs. The key disadvantages include the relatively low rotational speed with the consequential higher torque and hence higher cost of the drive train, the inherently lower power coefficient, the 360-degree rotation of the aerofoil within the wind flow during each cycle and hence the highly dynamic loading on the blade, the pulsating torque generated by some rotor designs on the drive train, and the difficulty of modelling the wind flow accurately and hence the challenges of analysing and designing the rotor prior to fabricating a prototype.
When a turbine is mounted on a rooftop the building generally redirects wind over the roof and this can double the wind speed at the turbine. If the height of a rooftop mounted turbine tower is approximately 50% of the building height it is near the optimum for maximum wind energy and minimum wind turbulence. While wind speeds within the built environment are generally much lower than at exposed rural sites, noise may be a concern and an existing structure may not adequately resist the additional stress.
Subtypes of the vertical axis design include:
Darrieus wind turbine
"Eggbeater" turbines, or Darrieus turbines, were named after the French inventor, Georges Darrieus. They have good efficiency, but produce large torque ripple and cyclical stress on the tower, which contributes to poor reliability. They also generally require some external power source, or an additional Savonius rotor to start turning, because the starting torque is very low. The torque ripple is reduced by using three or more blades, which results in greater solidity of the rotor. Solidity is measured by the blade area divided by the rotor area.
Giromill
A subtype of Darrieus turbine with straight, as opposed to curved, blades. The cycloturbine variety has variable pitch to reduce the torque pulsation and is self-starting. The advantages of variable pitch are high starting torque; a wide, relatively flat torque curve; a higher coefficient of performance; more efficient operation in turbulent winds; and a lower blade speed ratio, which lowers blade bending stresses. Straight, V, or curved blades may be used.
Savonius wind turbine
These are drag-type devices with two (or more) scoops that are used in anemometers, Flettner vents (commonly seen on bus and van roofs), and in some high-reliability low-efficiency power turbines. They are always self-starting if there are at least three scoops.
Twisted Savonius is a modified savonius, with long helical scoops to provide smooth torque. This is often used as a rooftop wind turbine and has even been adapted for ships.
Airborne wind turbine
Airborne wind turbines consist of wings or a small aircraft tethered to the ground. They are useful for reaching the faster winds found at altitudes above those in which traditional tower-mounted turbines can operate. Prototypes are in operation in east Africa.
Floating wind turbine
These are offshore wind turbines supported by a floating platform. Floating allows them to be installed in deeper water, opening up many more sites. It also allows them to be sited farther from shore, out of sight from land, reducing public concern about their visual impact.
Unconventional types
Design and construction
Wind turbine design is a careful balance of cost, energy output, and fatigue life.
Components
Wind turbines convert wind energy to electrical energy for distribution. Conventional horizontal axis turbines can be divided into three components:
The rotor, which is approximately 20% of the wind turbine cost, includes the blades for converting wind energy to low-speed rotational energy.
The generator, which is approximately 34% of the wind turbine cost, includes the electrical generator, the control electronics, and most likely a gearbox (e.g., planetary gear box), adjustable-speed drive, or continuously variable transmission component for converting the low-speed incoming rotation to high-speed rotation suitable for generating electricity.
The surrounding structure, which is approximately 15% of the wind turbine cost, includes the tower and rotor yaw mechanism.
A 1.5 MW wind turbine of a type frequently seen in the United States has a tower high. The rotor assembly (blades and hub) measures about in diameter. The nacelle, which contains the generator, is and weighs around 300 tons.
Turbine monitoring and diagnostics
Due to data transmission problems, structural health monitoring of wind turbines is usually performed using several accelerometers and strain gages attached to the nacelle to monitor the gearbox and equipment. Currently, digital image correlation and stereophotogrammetry are used to measure dynamics of wind turbine blades. These methods usually measure displacement and strain to identify location of defects. Dynamic characteristics of non-rotating wind turbines have been measured using digital image correlation and photogrammetry. Three dimensional point tracking has also been used to measure rotating dynamics of wind turbines.
Technology
Generally, efficiency increases with blade length. The blades must be stiff, strong, durable, light and resistant to fatigue. Materials with these properties include composites such as polyester and epoxy, while glass fiber and carbon fiber have been used for reinforcement. Construction may involve manual layup or injection molding. Retrofitting existing turbines with larger blades reduces the effort and risk of a full redesign.
As of 2021, the longest blade was , producing 15 MW.
Blades usually last around 20 years, the typical lifespan of a wind turbine.
Blade materials
Materials commonly used in wind turbine blades are described below.
Glass and carbon fibers
The stiffness of composites is determined by the stiffness of the fibers and their volume content. Typically, E-glass fibers are used as the main reinforcement; glass/epoxy composites for wind turbine blades contain up to 75% glass by weight, which increases stiffness as well as tensile and compressive strength. Promising alternatives are glass fibers with modified compositions, such as S-glass and R-glass. Other glass fibers developed by Owens Corning are ECRGLAS, Advantex and WindStrand.
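The dependence of composite stiffness on fiber stiffness and volume content described above can be sketched with the standard rule-of-mixtures model. This is a generic first-order approximation, not a formula from this article, and the moduli below are typical handbook values rather than measured data:

```python
# Rule-of-mixtures estimate of longitudinal composite stiffness:
# E_c = V_f * E_f + (1 - V_f) * E_m, a standard first-order model
# relating fiber volume fraction to laminate stiffness.

def composite_modulus(fiber_modulus_gpa, matrix_modulus_gpa, fiber_volume_fraction):
    """Longitudinal stiffness of a unidirectional composite (rule of mixtures)."""
    vf = fiber_volume_fraction
    return vf * fiber_modulus_gpa + (1 - vf) * matrix_modulus_gpa

# E-glass fiber ~72 GPa, epoxy matrix ~3.5 GPa (typical handbook values);
# the 55% volume fraction is illustrative (the article quotes weight fraction).
e_glass_epoxy = composite_modulus(72.0, 3.5, 0.55)
# Carbon fiber ~230 GPa: the same volume fraction gives a much stiffer laminate
carbon_epoxy = composite_modulus(230.0, 3.5, 0.55)

print(f"glass/epoxy:  {e_glass_epoxy:.1f} GPa")
print(f"carbon/epoxy: {carbon_epoxy:.1f} GPa")
```

The model makes plain why substituting carbon fiber at the same fiber content roughly triples stiffness, at the cost of more expensive fiber.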
Carbon fiber has higher tensile strength, higher stiffness and lower density than glass fiber. These properties make it an ideal candidate for the spar cap, a structural element of a blade that experiences high tensile loading. A glass fiber blade could weigh up to , while using carbon fiber in the spar saves 20% to 30% of the weight, about .
Hybrid reinforcements
Instead of making wind turbine blade reinforcements from pure glass or pure carbon, hybrid designs trade weight against cost. For example, for an blade, a full replacement by carbon fiber would save 80% of the weight but increase costs by 150%, while a 30% replacement would save 50% of the weight and increase costs by 90%. Hybrid reinforcement materials include E-glass/carbon and E-glass/aramid. The current longest blade, by LM Wind Power, is made of carbon/glass hybrid composites. More research is needed into the optimal composition of materials.
Nano-engineered polymers and composites
Adding a small amount (0.5% by weight) of nanoreinforcement (carbon nanotubes or nanoclay) to the polymer matrix, the fiber sizing, or the inter-laminar layers can improve the fatigue resistance, shear or compressive strength, and fracture toughness of the composites by 30% to 80%. Research has also shown that incorporating small amounts of carbon nanotubes (CNTs) can increase the fatigue lifetime by up to 1,500%.
Costs
The capital cost of a wind turbine was around $1 million per megawatt of nameplate capacity, though this figure varies by location; reported figures ranged from half a million dollars per megawatt in South America to $1.7 million in Asia.
For wind turbine blades, while the material cost is much higher for hybrid glass/carbon fiber blades than for all-glass fiber blades, labor costs can be lower. Using carbon fiber allows simpler designs that use less raw material. The chief manufacturing process in blade fabrication is the layering of plies; thinner blades need fewer layers and thus less labor, which in some cases brings the overall cost level with that of glass fiber blades.
Offshore has significantly higher installation costs.
Non-blade materials
Wind turbine parts other than the rotor blades (including the rotor hub, gearbox, frame, and tower) are largely made of steel. Smaller turbines (as well as megawatt-scale Enercon turbines) have begun using aluminum alloys for these components to make turbines lighter and more efficient. This trend may grow if fatigue and strength properties can be improved.
Pre-stressed concrete is increasingly used for towers, but it still requires much reinforcing steel to meet the strength requirements of the turbine. Additionally, step-up gearboxes are increasingly being replaced with variable speed generators, which require magnetic materials.
Modern turbines use a couple of tons of copper for generators, cables, and other components. Globally, wind turbine production uses of copper per year.
Material supply
A 2015 study of the material consumption trends and requirements for wind energy in Europe found that bigger turbines have a higher consumption of precious metals but lower material input per kW generated. The material consumption and stock at that time was compared to input materials for various onshore system sizes. In all EU countries, the estimates for 2020 doubled the values consumed in 2009. These countries would need to expand their resources to meet the estimated demand for 2020. For example, the EU had 3% of world supply of fluorspar, and it would require 14% by 2020. Globally, the main exporting countries are South Africa, Mexico, and China. This is similar with other critical and valuable materials required for energy systems such as magnesium, silver and indium. The levels of recycling of these materials are very low, and focusing on that could alleviate supply. Because most of these valuable materials are also used in other emerging technologies, like light emitting diodes (LEDs), photo voltaics (PVs) and liquid crystal displays (LCDs), their demand is expected to grow.
A 2011 study by the United States Geological Survey estimated the resources required to fulfill the US commitment to supplying 20% of its electricity from wind power by 2030. It did not consider requirements for small or offshore turbines, because those were not common in 2008 when the study was done. Use of common materials such as cast iron, steel and concrete would increase by 2%–3% compared to 2008. Between 110,000 and 115,000 metric tons of fiberglass would be required per year, a 14% increase. Rare-earth metal use would not increase much relative to the available supply; however, rare-earth metals are also used in other technologies with growing global demand, such as batteries, and this needs to be taken into account. The land required would be 50,000 square kilometers onshore and 11,000 offshore. This would not be a problem in the US due to its vast area, and because the same land can be used for farming. A greater challenge would be the variability of the resource and transmission to areas of high demand.
Permanent magnets for wind turbine generators contain rare-earth metals such as neodymium (Nd), praseodymium (Pr), terbium (Tb), and dysprosium (Dy). Systems that use magnetic direct drive turbines require greater amounts of rare-earth metals. Therefore, an increase in wind turbine manufacture would increase the demand for these resources. By 2035, the demand for Nd is estimated to increase by 4,000 to 18,000 tons and for Dy by 200 to 1,200 tons. These values are a quarter to half of current production. However, these estimates are very uncertain because technologies are developing rapidly.
Reliance on rare earth minerals for components has brought expense and price volatility, as China has been the main producer of rare earth minerals (96% in 2009) and was reducing its export quotas. However, in recent years other producers have increased production and China has increased its export quotas, leading to higher supply, lower cost, and greater viability of large-scale use of variable-speed generators.
Glass fiber is the most common reinforcement material. Its demand has grown due to growth in construction, transportation and wind turbines. Its global market might reach US$17.4 billion by 2024, compared to US$8.5 billion in 2014. In 2014, Asia Pacific produced more than 45% of the market; now China is the largest producer. The industry receives subsidies from the Chinese government, allowing it to export glass fiber more cheaply to the US and Europe. However, price wars have led to anti-dumping measures such as tariffs on Chinese glass fiber.
Wind turbines on public display
A few localities have exploited the attention-getting nature of wind turbines by placing them on public display, either with visitor centers around their bases, or with viewing areas farther away. The wind turbines are generally of conventional horizontal-axis, three-bladed design and generate power to feed electrical grids, but they also serve the unconventional roles of technology demonstration, public relations, and education.
The Bahrain World Trade Center is an example of wind turbines displayed prominently for the public. It is the first skyscraper to integrate wind turbines into its design.
Small wind turbines
Small wind turbines may be used for a variety of applications including on- or off-grid residences, telecom towers, offshore platforms, rural schools and clinics, remote monitoring and other purposes that require energy where there is no electric grid, or where the grid is unstable. Small wind turbines may be as small as a fifty-watt generator for boat or caravan use. Hybrid solar- and wind-powered units are increasingly being used for traffic signage, particularly in rural locations, since they avoid the need to lay long cables from the nearest mains connection point. The U.S. Department of Energy's National Renewable Energy Laboratory (NREL) defines small wind turbines as those smaller than or equal to 100 kilowatts. Small units often have direct-drive generators, direct current output, aeroelastic blades, and lifetime bearings and use a vane to point into the wind.
Wind turbine spacing
On most horizontal wind turbine farms, a spacing of about 6–10 times the rotor diameter is often upheld. However, for large wind farms, distances of about 15 rotor diameters should be more economical, taking into account typical wind turbine and land costs. This conclusion has been reached by research conducted by Charles Meneveau of Johns Hopkins University and Johan Meyers of Leuven University in Belgium, based on computer simulations that take into account the detailed interactions among wind turbines (wakes) as well as with the entire turbulent atmospheric boundary layer.
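The land-use implication of these spacing rules can be illustrated with simple arithmetic. The square-grid layout and the 100 m rotor diameter below are illustrative assumptions, not figures from the cited research:

```python
# Back-of-the-envelope turbine density for spacings expressed in rotor
# diameters (D), assuming a uniform square grid.

def turbines_per_km2(rotor_diameter_m, spacing_in_diameters):
    """Turbine density for a square grid with the given spacing."""
    spacing_m = rotor_diameter_m * spacing_in_diameters
    return 1e6 / spacing_m ** 2   # 1 km^2 = 1e6 m^2

rotor = 100.0  # m, an assumed rotor diameter for a multi-megawatt turbine
for spacing in (7, 10, 15):
    print(f"{spacing:>2} rotor diameters: "
          f"{turbines_per_km2(rotor, spacing):.2f} turbines/km^2")
```

Under these assumptions, moving from 7 D spacing to the 15 D suggested for large farms more than quadruples the land area per turbine.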
Research by John Dabiri of Caltech suggests that vertical wind turbines may be placed much more closely together so long as an alternating pattern of rotation is created allowing blades of neighbouring turbines to move in the same direction as they approach one another.
Operability
Maintenance
Wind turbines need regular maintenance to stay reliable and available. In the best case turbines are available to generate energy 98% of the time. Ice accretion on turbine blades has also been found to greatly reduce the efficiency of wind turbines, which is a common challenge in cold climates where in-cloud icing and freezing rain events occur. Deicing is mainly performed by internal heating or in some cases, by helicopters spraying clean warm water on the blades.
Modern turbines usually have a small onboard crane for hoisting maintenance tools and minor components. However, large, heavy components like generators, gearboxes, blades, and so on are rarely replaced, and a heavy lift external crane is needed in those cases. If the turbine has a difficult access road, a containerized crane can be lifted up by the internal crane to provide heavier lifting.
Repowering
Installation of new wind turbines can be controversial. An alternative is repowering, where existing wind turbines are replaced with bigger, more powerful ones, sometimes in smaller numbers while keeping or increasing capacity.
Demolition and recycling
Some wind turbines which are out of use are recycled or repowered. 85% of turbine materials are easily reused or recycled, but the blades, made of a composite material, are more difficult to process.
Interest in recycling blades varies in different markets and depends on the waste legislation and local economics. A challenge in recycling blades is related to the composite material, which is made of fiberglass with carbon fibers in epoxy resin, which cannot be remolded to form new composites.
Wind farm waste is less toxic than other garbage. Wind turbine blades represent only a fraction of overall waste in the US, according to the wind-industry trade association, American Wind Energy Association.
Several utilities, startup companies, and researchers are developing methods for reusing or recycling blades. Manufacturer Vestas has developed technology that can separate the fibers from the resin, allowing for reuse. In Germany, wind turbine blades are commercially recycled as part of an alternative fuel mix for a cement factory. In the United Kingdom, a project will trial cutting blades into strips for use as rebar in concrete, with the aim of reducing emissions in the construction of High Speed 2. Used wind turbine blades have been recycled by incorporating them as part of the support structures within pedestrian bridges in Poland and Ireland.
Comparison with other power sources
Advantages
Wind turbines are among the lowest-cost sources of renewable energy, along with solar panels. As the technology needed for wind turbines has improved, prices have decreased as well. In addition, there is currently no competitive market for wind energy (though there may be in the future), because wind is a freely available natural resource, most of which is untapped. The main cost of small wind turbines is the purchase and installation process, which averages between $48,000 and $65,000 per installation. The value of the energy harvested usually amounts to more than the cost of the turbine.
Wind turbines provide a clean energy source, use little water, and emit no greenhouse gases or waste products during operation. Over of carbon dioxide per year can be eliminated by using a one-megawatt turbine instead of one megawatt of fossil fuel generation.
Disadvantages
Wind turbines can be very large, reaching over tall with blades long, and people have often complained about their visual impact.
The environmental impact of wind power includes effects on wildlife, but these can be mitigated if proper strategies are implemented. Thousands of birds, including rare species, have been killed by the blades of wind turbines, though wind turbines contribute relatively insignificantly to anthropogenic avian mortality (birds killed by humans). Wind farms and nuclear power plants are responsible for between 0.3 and 0.4 bird deaths per gigawatt-hour (GWh) of electricity, while fossil fuel power stations are responsible for about 5.2 fatalities per GWh; conventional coal-fired generators thus contribute significantly more to bird mortality. A study of recorded bird populations in the United States from 2000 to 2020 found that the presence of wind turbines had no significant effect on bird population numbers.
Energy harnessed by wind turbines is variable and is not a "dispatchable" source of power; its availability depends on whether the wind is blowing, not on whether electricity is needed. Turbines can be placed on ridges or bluffs to maximize their access to wind, but this also limits the locations where they can be placed. In this sense, wind energy is not a particularly reliable source of energy on its own. However, it can form part of an energy mix that also includes power from other sources. Technology is also being developed to store excess energy, which can then make up for any deficits in supply.
Wind turbines have blinking lights that warn aircraft to avoid collisions. Residents living near wind farms, especially in rural areas, have complained that the blinking lights are a bothersome form of light pollution. One mitigation approach is the Aircraft Detection Lighting System (ADLS), which turns the lights on only when its radar detects aircraft within altitude and distance thresholds.
Records
Nameplate capacity
Nameplate capacity, also known as the rated capacity, nominal capacity, installed capacity, maximum effect or gross capacity, is the intended full-load sustained output of a facility such as a power station, electric generator, a chemical plant, fuel plant, mine, metal refinery, and many others. Nameplate capacity is the theoretical output registered with authorities for classifying the unit. For intermittent power sources, such as wind and solar, nameplate power is the source's output under ideal conditions, such as maximum usable wind or high sun on a clear summer day.
Capacity factor measures the ratio of actual output over an extended period to nameplate capacity. Power plants with an output consistently near their nameplate capacity have a high capacity factor.
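The definition above reduces to a one-line calculation. The plant figures in this sketch are illustrative, not taken from any real installation:

```python
# Capacity factor: actual energy delivered over a period, divided by
# the energy the plant would have produced running at nameplate
# capacity for the whole period.

def capacity_factor(energy_mwh, nameplate_mw, hours):
    """Ratio of actual output to the nameplate-capacity maximum."""
    max_possible_mwh = nameplate_mw * hours
    return energy_mwh / max_possible_mwh

# A 2 MW wind turbine delivering 6,100 MWh over a year (8,760 hours):
cf = capacity_factor(6_100, 2.0, 8_760)
print(f"capacity factor: {cf:.1%}")  # about 35%
```

A plant running consistently near nameplate capacity would score close to 100% here; intermittent sources such as wind typically score far lower.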
For electric power stations, the power output is expressed in Megawatt electrical (MWe). For fuel plants, it is the refinery capacity in barrels per day.
Power stations
Dispatchable power
For dispatchable power, this capacity depends on the plant's internal technical capability to maintain output for a reasonable amount of time (for example, a day), neither merely momentarily nor permanently, and without considering external events such as a lack of fuel or internal events such as maintenance. Actual output can differ from nameplate capacity for a number of reasons, depending on equipment and circumstances.
Non-dispatchable power
For non-dispatchable power, particularly renewable energy, nameplate capacity refers to generation under ideal conditions. Output is generally limited by weather conditions, hydroelectric dam water levels, tidal variations and other outside forces. Equipment failures and maintenance usually contribute less to capacity factor reduction than the innate variation of the power source.
In photovoltaics, capacity is rated under Standard Test Conditions usually expressed as watt-peak (Wp). In addition, a PV system's nameplate capacity is sometimes denoted by a subindex, for example, MWDC or MWAC, to identify the raw DC power or converted AC power output.
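The MWDC/MWAC distinction noted above can be sketched numerically. The DC array rating and the DC:AC (inverter loading) ratio used here are made-up illustrative values, not from this article:

```python
# A PV plant's raw DC rating and its converted AC rating differ because
# the array is usually oversized relative to the inverter.

def ac_nameplate(dc_nameplate_mw, inverter_loading_ratio):
    """AC rating implied by a DC array rating and a DC:AC ratio."""
    return dc_nameplate_mw / inverter_loading_ratio

mw_dc = 130.0   # assumed raw DC rating of the PV array
ilr = 1.3       # assumed DC:AC ratio (inverter loading ratio)
print(f"{mw_dc:.0f} MWdc at a DC:AC ratio of {ilr} -> "
      f"{ac_nameplate(mw_dc, ilr):.0f} MWac")
```

This is why the same plant can legitimately be described with two different nameplate numbers depending on which subindex is used.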
Generator capacity
The term comes from the nameplates fixed to electrical generators: these plates, which give the model name and manufacturer, usually also state the rated output. However, the rated output of a power station to the electrical grid is invariably less than the generator nameplate capacity, because the components connecting the generator to the grid also consume power. There is thus a distinction between component capacity and facility capacity.
Moons of Haumea
The dwarf planet Haumea has two known moons, Hiʻiaka and Namaka, named after Hawaiian goddesses. These small moons were discovered in 2005, from observations of Haumea made at the large telescopes of the W. M. Keck Observatory in Hawaii.
Haumea's moons are unusual in a number of ways. They are thought to be part of its extended collisional family, which formed billions of years ago from icy debris after a large impact disrupted Haumea's ice mantle. Hiʻiaka, the larger, outermost moon, has large amounts of pure water ice on its surface, which is rare among Kuiper belt objects. Namaka, about one tenth the mass, has an orbit with surprising dynamics: it is unusually eccentric and appears to be greatly influenced by the larger satellite.
History
Two small satellites were discovered around Haumea (which was at that time still designated 2003 EL61) through observations using the W.M. Keck Observatory by a Caltech team in 2005.
The outer and larger of the two satellites was discovered 26 January 2005, and formally designated S/2005 (2003 EL61) 1, though nicknamed "Rudolph" by the Caltech team. The smaller, inner satellite of Haumea was discovered on 30 June 2005, formally termed S/2005 (2003 EL61) 2, and nicknamed "Blitzen". On 7 September 2006, both satellites were numbered and admitted into the official minor planet catalogue as (136108) 2003 EL61 I and II, respectively.
The permanent names of these moons were announced, together with that of 2003 EL61, by the International Astronomical Union on 17 September 2008: (136108) Haumea I Hiʻiaka and (136108) Haumea II Namaka. Each moon was named after a daughter of Haumea, the Hawaiian goddess of fertility and childbirth. Hiʻiaka is the goddess of dance and patroness of the Big Island of Hawaii, where the Mauna Kea Observatory is located. Nāmaka is the goddess of water and the sea; she cooled her sister Pele's lava as it flowed into the sea, turning it into new land.
In her legend, Haumea's many children came from different parts of her body. The dwarf planet Haumea appears to be almost entirely made of rock, with only a superficial layer of ice; most of the original icy mantle is thought to have been blasted off by the impact that spun Haumea up to its current rapid rotation, and this ejected material formed the small Kuiper belt objects of Haumea's collisional family. There could therefore be additional outer moons, smaller than Namaka, that have not yet been detected. However, HST observations have confirmed that no other moons brighter than 0.25% of the brightness of Haumea exist within the closest tenth of the distance (0.1% of the volume) over which they could be held by Haumea's gravitational influence (its Hill sphere). This makes it unlikely that any more exist.
Surface properties
Hiʻiaka is the outer and, at roughly 310 km in diameter, the larger and brighter of the two moons. Strong absorption features observed at 1.5, 1.65 and 2 μm in its infrared spectrum are consistent with nearly pure crystalline water ice covering much of its surface. The unusual spectrum, and its similarity to absorption lines in the spectrum of Haumea, led Brown and colleagues to conclude that it was unlikely that the system of moons was formed by the gravitational capture of passing Kuiper belt objects into orbit around the dwarf planet: instead, the Haumean moons must be fragments of Haumea itself.
The sizes of both moons are calculated with the assumption that they have the same infrared albedo as Haumea, which is reasonable as their spectra show them to have the same surface composition. Haumea's albedo has been measured by the Spitzer Space Telescope: from ground-based telescopes, the moons are too small and close to Haumea to be seen independently. Based on this common albedo, the inner moon, Namaka, which is a tenth the mass of Hiʻiaka, would be about 170 km in diameter.
The Hubble Space Telescope (HST) has adequate angular resolution to separate the light from the moons from that of Haumea. Photometry of the Haumea triple system with HST's NICMOS camera has confirmed that the spectral line at 1.6 μm that indicates the presence of water ice is at least as strong in the moons' spectra as in Haumea's spectrum.
The moons of Haumea are too faint to detect with telescopes smaller than about 2 metres in aperture, though Haumea itself has a visual magnitude of 17.5, making it the third-brightest object in the Kuiper belt after Pluto and Makemake, and easily observable with a large amateur telescope.
Orbital characteristics
Hiʻiaka orbits Haumea nearly circularly every 49 days. Namaka orbits Haumea in 18 days in a moderately elliptical, non-Keplerian orbit, and as of 2008 was inclined 13° with respect to Hiʻiaka, which perturbs its orbit. Because the impact that created the moons of Haumea is thought to have occurred early in the history of the Solar System, over the following billions of years Namaka's orbit should have been tidally damped into a more circular one. Its orbit has likely been disturbed by orbital resonances with the more massive Hiʻiaka, as the two orbits converged while migrating outward from Haumea under tidal dissipation. The moons may have been caught in, and then escaped from, orbital resonance several times; they are currently in, or at least close to, an 8:3 resonance. This resonance strongly perturbs Namaka's orbit, whose argument of periapsis currently precesses by about −6.5° per year, a precession period of 55 years.
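The 8:3 resonance can be checked numerically against the approximate orbital periods given in this article (Hiʻiaka about 49 days, Namaka about 18 days):

```python
# Compare the observed period ratio of the two moons with an exact
# 8:3 commensurability.
from fractions import Fraction

p_hiiaka = 49.0   # days, approximate
p_namaka = 18.0   # days, approximate

ratio = p_hiiaka / p_namaka   # observed period ratio, ~2.72
resonance = Fraction(8, 3)    # exact 8:3 ratio, ~2.67

mismatch = abs(ratio - float(resonance)) / float(resonance)
print(f"period ratio {ratio:.3f} vs 8:3 = {float(resonance):.3f} "
      f"({mismatch:.1%} apart)")
```

The rough periods land within a few percent of the exact 8:3 value, consistent with the moons being in or near that resonance.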
From around 2008 to 2011, the orbits of the Haumean moons appeared almost exactly edge-on from Earth, with Namaka periodically occulting Haumea. Observation of such transits would have provided precise information on the size and shape of Haumea and its moons, as happened in the late 1980s with Pluto and Charon. The tiny change in brightness of the system during these occultations would have required at least a medium-aperture professional telescope for detection. Hiʻiaka last occulted Haumea in 1999, a few years before discovery, and will not do so again for some 130 years. However, in a situation unique among regular satellites, Namaka's orbit was being greatly torqued by Hiʻiaka, which preserved the viewing angle of Namaka–Haumea transits for several more years. One occultation event was observed on 19 June 2009, from the Pico dos Dias Observatory in Brazil.
Blue hour
The blue hour (from French ; ) is the period of twilight (in the morning or evening, around the nautical stage) when the Sun is at a significant depth below the horizon. During this time, the remaining sunlight takes on a mostly blue shade. This shade differs from the colour of the sky on a clear day, which is caused by Rayleigh scattering.
The blue hour occurs when the Sun is far enough below the horizon that the sunlight's blue wavelengths dominate, due to the Chappuis absorption caused by ozone. Since the term is colloquial, it lacks an official definition, unlike dawn, dusk, and the three stages of twilight. Rather, blue hour refers to the state of natural lighting that usually occurs around the nautical stage of twilight (at dawn or dusk).
The blue hour is shorter in regions near the equator due to the sun rising and setting at steep angles. In places closer to the poles, the illumination and twilight periods are longer as the sun rises and sets at shallower angles.
Explanation and times of occurrence
A still commonly presented but incorrect explanation claims that the post-sunset and pre-sunrise atmosphere transmits only the Sun's shorter blue wavelengths while scattering away the longer, reddish ones. In fact, the blue hour occurs when the Sun is far enough below the horizon that the sunlight's blue wavelengths dominate, due to the Chappuis absorption caused by ozone.
When the sky is clear, the blue hour can be a colourful spectacle, with the indirect sunlight tinting the sky yellow, orange, red, and blue. This effect is caused by the relative diffusibility of the shorter (bluer) wavelengths of visible light versus the longer (redder) wavelengths. During the blue "hour", red light passes straight through into space, while blue light is scattered in the atmosphere and thus reaches Earth's surface.
Blue hour usually lasts about 20–96 minutes, right after sunset and right before sunrise. Time of year, location, and air quality all influence the exact timing. For instance, in Egypt on 21 June, with sunset at 7:59 PM, the evening blue hour occurs from 7:59 PM to 9:35 PM; with sunrise at 5:54 AM, the morning blue hour occurs from 4:17 AM to 5:54 AM. Golden hour occurs from 5:54 AM to 6:28 AM and from 7:25 PM to 7:59 PM.
Art
Photography
Many artists value this period for the quality of the soft light. Although the blue hour does not have an official definition, the blue color spectrum is most prominent when the Sun is between 4° and 8° below the horizon.
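The 4°–8° band quoted above can be combined with the conventional 6°/12°/18° twilight definitions in a small classifier. The band edges are standard definitions; the sample angles are arbitrary:

```python
# Classify natural light by the Sun's depression angle (degrees below
# the horizon), and flag the 4°-8° blue-hour band.

def lighting_stage(depression_deg):
    """Return the twilight stage for a given solar depression angle."""
    if depression_deg < 0:
        return "day"
    if depression_deg <= 6:
        return "civil twilight"
    if depression_deg <= 12:
        return "nautical twilight"
    if depression_deg <= 18:
        return "astronomical twilight"
    return "night"

def in_blue_hour(depression_deg):
    """Blue color is most prominent with the Sun 4°-8° below the horizon."""
    return 4 <= depression_deg <= 8

for angle in (2, 5, 7, 10, 20):
    print(f"{angle:>2} deg below horizon: {lighting_stage(angle):<22}"
          f" blue hour: {in_blue_hour(angle)}")
```

Note that the 4°–8° band straddles the civil/nautical boundary at 6°, matching the description of the blue hour as occurring around the nautical stage.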
Photographers use blue hour for the tranquil mood it sets. When photographing during blue hour it can be favourable to capture subjects that have artificial light sources, such as buildings, monuments, cityscapes, or bridges.
Temnodontosaurus
Temnodontosaurus (meaning "cutting-tooth lizard") is an extinct genus of large ichthyosaurs that lived during the Lower Jurassic in what is now Europe and possibly Chile. The first known fossil is a specimen consisting of a complete skull and partial skeleton discovered on a cliff by Joseph and Mary Anning around the early 1810s in the Dorset county, England. The anatomy of this specimen was subsequently analyzed in a series of articles written by Everard Home between 1814 and 1819, making it the very first ichthyosaur to have been scientifically described. In 1822, the specimen was assigned to the genus Ichthyosaurus by William Conybeare, and more precisely to the species I. platyodon. Noting the large dental differences with other species of Ichthyosaurus, Richard Lydekker suggested in 1889 moving this species into a separate genus, which he named Temnodontosaurus. While many species have been assigned to the genus, only five are currently recognized as valid, the others being considered as synonymous, doubtful or possibly belonging to other taxa.
Generally estimated at long, Temnodontosaurus is one of the largest known ichthyosaurs, although not as imposing as some Triassic forms. Specimens assigned to the genus may nevertheless have reached larger measurements. As an ichthyosaur, Temnodontosaurus had flippers for limbs and a fin on the tail. Boasting eye sockets measuring more than wide, Temnodontosaurus quite possibly had the largest eyes known in the entire animal kingdom, rivaling in size those of the colossal squid. The snout appears to be longer than the mandible and is equipped with several sharp teeth (hence the genus name). On the basis of numerous very complete skeletons, it is estimated that the animal had more than 40 presacral vertebrae. Temnodontosaurus is a basal representative of the parvipelvian subgroup of ichthyosaurs, as well as its largest. A monotypic family, Temnodontosauridae, was even established in 1974 to include the genus. Various phylogenetic analyses, as well as diagnostic problems concerning the genus, make it for the moment a polyphyletic taxon (an unnatural grouping), and therefore in need of revision.
Research history
Discovery and identification
Temnodontosaurus is historically the very first ichthyosaur to have been scientifically described. Around 1810, Joseph Anning discovered the first skull of the taxon on the cliffs of Black Ven, between the town of Lyme Regis and the village of Charmouth, two localities in the county of Dorset, in the south of England. The rest of the skeleton was later discovered by his sister, the now famous Mary Anning, in 1812. Although other ichthyosaur skeletons had been discovered locally and elsewhere, this particular specimen was the first to attract the attention of the scientific community. After the discovery was announced in the press, the specimen was purchased by the lord of a local manor, Henry Hoste Henley, for a price of £23. Henley subsequently passed the fossils on to the naturalist William Bullock, who put them on display in the collections of his museum in London. In 1819, Bullock's collection was sold to the Natural History Museum in London for a price of around £47. The specimen, now cataloged as NHMUK PV R1158, is still housed at this museum, although the postcranial remains have since been lost.
Beginning in 1814, Everard Home wrote a series of six papers for the Royal Society describing the specimen, initially identifying it as a crocodile. Perplexed as to the real nature of the fossil, Home kept changing his mind about its classification, thinking it might be a fish, then an animal sharing affinities with the platypus, which had itself only recently been described. Finally, in 1819, he concluded that the fossil represented an animal intermediate between salamanders and lizards, which led him to erect the genus name Proteosaurus (originally written as Proteo-Saurus). In 1821, Henry De la Beche and his colleague William Daniel Conybeare made the very first scientific description of Ichthyosaurus, but did not name any species. This generic name had already been proposed in 1818 by Charles Konig, initially as a nomen nudum, but was nevertheless chosen as the definitive scientific name of the genus, Proteosaurus having since become a nomen oblitum. In their article, De la Beche and Conybeare referred several additional fossils discovered at Black Ven to this genus, including the specimen originally described by Home, and finally identified it as a marine reptile. In 1822, De la Beche named three species of Ichthyosaurus on the basis of several anatomical differences distinguishing the specimens, one of them being I. platyodon. He announced that future descriptions would be done with the help of Conybeare; however, it was Conybeare himself who described the fossils the same year, attributing the largest specimens to I. platyodon. The specific name platyodon comes from the Ancient Greek words for "flat" or "broad" and "tooth", together meaning "flat teeth", in reference to the rather distinctive dentition of this species.
In 1889, Henry Alleyne Nicholson and Richard Lydekker published a two-volume work that served as an introduction to the rules of paleontology for students. It is in the second volume that the two paleontologists gave a very detailed description of numerous prehistoric vertebrates, and in which the taxonomy of I. platyodon took another direction. In his correction notes, Lydekker observed that the teeth of I. platyodon differed greatly from those of other previously recognized species of Ichthyosaurus, and suggested that the species could be the type species of a completely new genus of ichthyosaurs, which he named Temnodontosaurus. This generic name is formed from the Ancient Greek words for "to cut" and "tooth", together with saûros ("lizard"), to give "cutting-tooth lizard". In a broad review of fossil vertebrates published in 1902, Oliver Perry Hay argued that because the name Proteosaurus technically took precedence over Ichthyosaurus, I. platyodon should be transferred to that genus as its type species, renamed Proteosaurus platyodon. In 1972, Christopher McGowan again used the combination proposed by Hay (though without citing him), but revised his judgment two years later, in 1974, moving the species to Temnodontosaurus, as originally proposed by Lydekker. The holotype of Temnodontosaurus platyodon consisted of a single tooth preserved by the Geological Society of London. As this tooth was reported lost in 1960, McGowan designated specimen NHMUK PV R2003 as the neotype of the taxon. This specimen, already mentioned as a representative of the species by Richard Owen in 1881, was originally discovered and partly collected by Mary Anning in July 1832 in Lyme Regis. After the discovery, she sold the find to Thomas Hawkins, who himself sold the specimen to the Natural History Museum in London in 1834 for a price of £210.
Other species
Recognized species
In 1843, von Theodori described a new species of Ichthyosaurus, I. trigonodon, which he described as "colossal", based on an imposing specimen comprising a complete skull and a partial postcranial skeleton discovered in the town of Holzmaden in the state of Baden-Württemberg, Germany. The specific name comes from the Ancient Greek words for "triangle" and "tooth", in reference to the dental crown, which is visibly triangular in this species. In 1854, von Theodori made a much more in-depth description of the holotype specimen. In 1889, shortly before he established the genus Temnodontosaurus, Lydekker noted that the dentition of I. trigonodon was quite similar to that of I. platyodon. Based on these dental characteristics, he moved this species to the genus Temnodontosaurus the following year, renaming it T. trigonodon. In 1931, Friedrich von Huene transferred this species to the genus Leptopterygius. In 1998, Michael W. Maisch moved the species back to the genus Temnodontosaurus, and attributed to this taxon other, mostly very complete, specimens discovered in Germany and France. This classification has since been retained in subsequent works, to the point that a large specimen discovered in England in 2021, nicknamed the 'Rutland Sea Dragon', was considered the first probable representative of the species in that country. However, a 2023 morphological study conducted by Rebecca F. Bennion and colleagues showed that the holotype specimen differs in cranial and dental traits from other specimens since assigned to the species. The authors therefore suggest that a future re-evaluation is necessary for a better diagnosis of T. trigonodon.
In 1857, an almost complete skeleton of a large ichthyosaur was discovered north of the English town of Whitby, in the county of Yorkshire. It was found near another skeleton, that of a pliosaur, which is today recognized as the holotype specimen of Rhomaleosaurus cramptoni. Shortly after its discovery, the ichthyosaurian skeleton was sent to the Yorkshire Museum, where it was cataloged as YORYM 497. The following year, 1858, Owen examined the specimen and classified it as distinct from I. platyodon, attributing it to a completely new species which he named I. crassimanus. However, Owen never scientifically described this species, although it is briefly mentioned in a work by John Phillips and Robert Etheridge published in 1875. It was in 1876 that John Frederick Blake made the first scientific description of the animal, although only very briefly. In 1889, Lydekker considered this species a potential junior synonym of I. trigonodon, an opinion followed by numerous authors until around the beginning of the 20th century. In 1930, Sidney Melmore made the first in-depth description of I. crassimanus based on the holotype specimen, restoring the distinct status of the species. In his revision published in 1974, McGowan synonymized I. crassimanus with the proposed taxon Stenopterygius acutirostris, also attributing to it other specimens discovered in the original locality. In 2003, McGowan and Ryosuke Motani suggested that all specimens historically attributed to I. crassimanus appeared sufficiently different from T. platyodon and T. trigonodon to belong to a distinct species of the genus Temnodontosaurus, renamed T. crassimanus. However, they noted that further research could question its validity. The taxon long remained under-analyzed until 2020, when Emily J. Swaby wrote a thesis considerably revising it.
Contrary to the suggestion previously made by McGowan and Motani, Swaby maintained the attribution of this species to Temnodontosaurus. The descriptions from this thesis were published the following year in a study co-authored with Dean R. Lomax.
In 1880, Harry Govier Seeley described the species I. zetlandicus on the basis of a well-preserved skull loaned by an Earl of Zetland (hence its name), at an unspecified date, to the Sedgwick Museum of Earth Sciences in Cambridge, Cambridgeshire. This skull, cataloged as CAMSM J35176, was discovered on the coast near Whitby, close to the locality where T. crassimanus had already been discovered. In 1922, von Huene moved the species to Stenopterygius. In 1974, McGowan considered S. zetlandicus a synonym of S. acutirostris, before this species was itself synonymized with T. acutirostris from 1997. In 2022, Antoine Laboury and colleagues reestablished the validity of the species by redescribing CAMSM J35176, but moved it to the genus Temnodontosaurus, renaming it T. zetlandicus. In their description, they attributed another specimen to the taxon, cataloged as MNHNL TU885, a partial skull originally discovered in Schouweiler, southern Luxembourg.
In 1931, von Huene described a new species of the genus Leptopterygius, L. nürtingensis, based on a skull and some postcranial remains of a single specimen discovered in a quarry in the town of Nürtingen (hence its name), Baden-Württemberg, Germany. This specimen, cataloged as SMNS 13488, was first mentioned in a work by Eberhard Fraas published posthumously in 1919, in which the author considered it to represent an undetermined species of Ichthyosaurus. In another work, also published posthumously in 1926, Fraas attributed the specimen to a proposed new species which he named I. bellicosus. Fraas had been expected to carry out the first scientific description of this taxon, but his premature death in 1915 prevented the project from being completed. Thus, in the absence of a scientific description, the name I. bellicosus is regarded as a nomen nudum, and therefore does not have priority over L. nürtingensis. Although L. nürtingensis was only officially described in 1931 by von Huene, the taxon had already been mentioned a year earlier by the same author in an article concerning the ribs of the holotype specimen, which have since been reported lost. In 1939, Oskar Kuhn assigned an incomplete specimen discovered in Aue-Fallstein, Lower Saxony, to this species. However, Kuhn did not present sufficient evidence to support this claim, and the specimen has since been viewed as indeterminate. In 1979, McGowan carried out a large revision of the ichthyosaurs known from Germany, in which he classified L. nürtingensis as a nomen dubium. In 1997, Maisch and Axel Hungerbühler formally criticized McGowan's view, given that the holotype specimen is preserved in an excellent state of conservation and is easily diagnosable. They then redescribed this specimen and considered it attributable to Temnodontosaurus.
In their analysis, the authors changed the spelling of the species name from nürtingensis to nuertingensis, as required by rule 32.C of the ICZN. The species was again considered a nomen dubium by McGowan and Motani in 2003, but its validity and its assignment to this genus have been maintained in subsequent studies.
Dubious species
In 1881, Owen attributed a large isolated skull discovered at Lyme Regis, cataloged as NHMUK PV R1157, to a newly erected species of the genus Ichthyosaurus, I. breviceps. In 1922, von Huene moved this species to the genus Eurypterygius, a taxon which is itself recognized as a junior synonym of Ichthyosaurus. Although I. breviceps is still recognized as belonging to Ichthyosaurus, the large skull historically attributed to the species differs greatly from the holotype specimen. Noting this, McGowan redescribed the skull in more detail and made it the holotype of an entirely new species of Temnodontosaurus, T. eurycephalus. The specific name comes from the Ancient Greek words for "broad" and "head", together meaning "broad head", in reference to the cranial morphology of the taxon.
In 1984, an almost complete skeleton of a large ichthyosaur was discovered in the Lafarge quarries in the French commune of Belmont-d'Azergues, located near Lyon. Although the specimen was mentioned in a detailed biostratigraphic analysis of the Lafarge quarries published in 1991, it was not until 2012 that the fossil, uncatalogued but stored at Saint-Pierre-la-Palud, was officially designated as the holotype of the new species T. azerguensis by Jeremy E. Martin and his colleagues. The specific name comes from the Azergues, a river located near the site of the discovery.
In 2014, British paleontologist Darren Naish expressed doubts, in a blog post for Scientific American, about the attribution of these two species to Temnodontosaurus, noting large anatomical differences that highlight the need for a taxonomic revision of the genus. A similar observation was shared in the 2022 study describing T. zetlandicus, whose authors regarded these two species as too phylogenetically unstable to be included in a monophyletic grouping of Temnodontosaurus.
Formerly assigned species
In 1840, Owen named the species I. acutirostris on the basis of a partial skeleton discovered near Whitby, now numbered NHMUK PV OR 14553. The holotype specimen was long noted as lost, and was only officially relocated in 2002, although some anatomical parts such as the snout are missing. Even before the specimen was relocated, some studies classified this species within the genus Temnodontosaurus, as Maisch and Hungerbühler did in 1997. However, Maisch reversed this decision in 2010, citing numerous cranial features showing it is not part of the genus. The author therefore withdrew his attribution of this specimen to Temnodontosaurus and instead suggested that it may represent a completely new genus. In 2022, Laboury and colleagues shared the same conclusions and considered "I." acutirostris a species inquirenda.
In 1892, Albert Gaudry officially described a new species of Ichthyosaurus, I. burgundiae, on the basis of a specimen discovered in the quarries of the town of Sainte-Colombe, in Yonne, France. Even before the taxon was described by Gaudry, the specimen, one of the largest ichthyosaurs known at the time, was presented at the 1889 Paris Exposition, the same exhibition for which the Eiffel Tower was built. After the end of the exhibition, the specimen was donated to the National Museum of Natural History in Paris, joining its collection on November 12, 1889, where it is still exhibited to this day. Gaudry had already proposed the name I. burgundiae at the French Academy of Sciences in 1891, but it was not until the following year that he published the first formal description of the taxon. In 1996, McGowan attributed a number of specimens discovered in Germany to this species, but moved it to the genus Temnodontosaurus. In 1998, Maisch compared these specimens to the holotype of T. trigonodon, and suggested synonymizing T. burgundiae with the latter. Maisch's opinion was followed by McGowan and Motani in 2003, who considered T. burgundiae a junior synonym of T. trigonodon, despite slight osteological differences. However, the synonymy is based only on the German specimens; the holotype discovered in Sainte-Colombe has never been re-examined owing to its questionable state of preservation.
In 1974, McGowan described an additional species of Temnodontosaurus, T. risor, based on three skulls discovered at Lyme Regis, designating NHMUK PV R43971 as the holotype specimen. The specific name of this taxon comes from the Latin risor, meaning "mocker". In his description, McGowan justified the distinction of this species by its larger orbits, smaller maxillae and curved snout. In 1995, the same author carried out a more in-depth revision of the three specimens attributed to this taxon. He found that the characteristics he had previously judged distinctive were in fact stages of growth, the three specimens representing juveniles of T. platyodon.
Early depictions
One of the earliest representations of Temnodontosaurus in paleoart is a life-size concrete sculpture created by Benjamin Waterhouse Hawkins between 1852 and 1854, as part of the collection of sculptures of prehistoric animals on display at Crystal Palace Park in London. It is one of the three ichthyosaur sculptures exhibited in the park, the other two representing Ichthyosaurus and Leptonectes. Although the park is known for its obsolete or even false reconstructions of extinct animals, the sculptures depicting ichthyosaurs have the most features still recognized as valid. These include smooth, scaleless skin, a fin at the end of the tail, and large eyes. Hawkins also reconstructed the facial features of these three ichthyosaurs based on those of whales and dolphins, which is still recognized as reasonable given their strong morphological similarities. Discoveries and reconstructions subsequent to those at Crystal Palace added to this the presence of a dorsal fin, a caudal fin with two crescent-shaped lobes, and a reconstruction of the skin based on the best-preserved fossils.
Many elements of these reconstructions nevertheless remain obsolete, such as the eyes and the flippers, which were reconstructed from the shape of their bones, namely the sclerotic rings and the phalanges. Although Owen suggested the still viable hypothesis that the sclerotic rings served to protect the eye, it is highly unlikely that the eyes of ichthyosaurs looked as shown in the sculptures, given that the sclerotic rings are located under the eyelids. The flippers were faithfully modeled by Hawkins on Owen's interpretation, which misread the phalanges as scales. The park's ichthyosaurs are depicted as crawling in shallow water, reflecting the old hypothesis that they came to the shores to sleep or to breed. Additionally, their tails are shown as eel-like and highly flexible. In reality, the three ichthyosaurs had fairly variable degrees of tail flexibility: two of the three taxa shown, i.e. Temnodontosaurus and Leptonectes, had much more flexible tails than Ichthyosaurus, which had a tuna-like morphology. Reconstructing the tails of ichthyosaurs as eel-like was not an error specific to Hawkins, being the norm in 19th-century reconstructions.
Description
Temnodontosaurus, like other ichthyosaurs, had a long, thin snout, large eye sockets, and a tail fluke that was supported by vertebrae in the lower half. Ichthyosaurs were superficially similar to dolphins and had flippers rather than legs, and most (except for early species) had dorsal fins. Although the colour of Temnodontosaurus is unknown, at least some ichthyosaurs may have been uniformly dark-coloured in life, which is evidenced by the discovery of high concentrations of eumelanin pigments in the preserved skin of an early ichthyosaur fossil.
Size
Temnodontosaurus is one of the largest ichthyosaurs identified to date, although its species are not as imposing as Triassic forms like Shonisaurus, Himalayasaurus, Cymbospondylus or Ichthyotitan. It nevertheless represents the largest known ichthyosaur of the parvipelvian group. Based on different specimens, the species T. platyodon, T. trigonodon and T. crassimanus have a body size estimated at around long. The 'Rutland Sea Dragon', a possible specimen of T. trigonodon discovered in January 2021 in Rutland Water, near Oakham, is estimated to be slightly over long. Skull size varies between these three species. Although incomplete, the holotype specimen of T. crassimanus would have had a skull estimated at around long. The largest known skulls of T. trigonodon and T. platyodon are to long, respectively. No body length estimates for T. zetlandicus and T. nuertingensis have yet been given. However, their skull lengths, reaching respectively in length, suggest that they were smaller than the three species mentioned above.
Individual bones suggest that Temnodontosaurus may have grown to a larger size. In his extensive revision published in 1922, von Huene described a series of very imposing vertebrae from the collections of the Banz Abbey Museum, Germany, the largest of them measuring high. In 1996, McGowan assigned the specimen to Temnodontosaurus, though without assigning it to a species. Based on SMNS 50000, a nearly complete skeleton of T. trigonodon, he estimated the size of the Banz specimen at long, as von Huene had initially suggested. However, this estimate turned out to be exaggerated, as it relied on an incorrect measurement of SMNS 50000, which is actually shorter.
Morphology
The forefins and hindfins of Temnodontosaurus were of roughly the same length and were rather narrow and elongated. This is unlike other post-Triassic ichthyosaurs such as the thunnosaurians, which had forefins at least twice the length of their hindfins. It also differed from other post-Triassic ichthyosaurs like Ichthyosaurus in possessing an unreduced, tripartite pelvic girdle and having only three primary digits with one postaxial accessory digit. Like those of other ichthyosaurs, the fins exhibited strong hyperphalangy, but they were not involved in propulsion; only the tail provided the main propulsive force for movement, although it had a weak tail bend at an angle of less than 35°. Its caudal fin has variously been described as either lunate or semi-lunate; it was made of two lobes, of which the lower was skeletally supported whereas the upper was unsupported. The proximal elements of the fin formed a mosaic pattern, while the more distal elements were relatively round. It also had a triangular dorsal fin with two notches on its anterior margin; the paired fins were used to steer and stabilize the animal while swimming rather than for paddling or propulsion. It had fewer than about 90 vertebrae, and the atlas and axis vertebrae were fused together, providing stabilization during swimming. T. trigonodon possessed unicipital ribs near the sacral region and bicipital ribs more anteriorly, which helped to increase flexibility while swimming.
Like other ichthyosaurs, Temnodontosaurus likely had high visual capacity and used vision as its primary sense while hunting. Temnodontosaurus had the largest eyes of any ichthyosaur and of any animal measured. The largest eyes measured belonged to the species T. platyodon. Despite the enormous size of its eyes, Temnodontosaurus had blind spots directly above its head due to the angle at which its eyes were pointed. The eyes of Temnodontosaurus had sclerotic rings, hypothesized to have provided the eyes with rigidity. The sclerotic rings of T. platyodon were at least 25 cm in diameter.
The head of Temnodontosaurus had a long robust snout with an antorbital constriction. It also had an elongated maxilla, a long cheek region, and a long postorbital segment. The carotid foramen in the basisphenoid of the skull was paired and separated by the parasphenoid, which bore a processus cultriformis. The skull of T. platyodon measured about long, while T. eurycephalus had a shorter rostrum and a deeper skull compared to other species, perhaps serving to help crush prey. T. platyodon and T. trigonodon had very long snouts that were slightly curved dorsally and ventrally, respectively. It also had many pointed conical teeth set in continuous grooves rather than in individual sockets, a form of tooth implantation known as aulacodonty. Its teeth typically had two or three carinae; notably, T. eurycephalus possessed bulbous roots, while T. nuertingensis had no carinae or bulbous roots.
Classification
The majority of the currently recognized species of Temnodontosaurus were originally described as species of Ichthyosaurus, before the type species T. platyodon was moved to a separate genus in 1889 by Lydekker. In 1974, McGowan established the family Temnodontosauridae, of which it is still the only recognized genus. Temnodontosaurus is one of the most basal post-Triassic ichthyosaurs. In the first major phylogenetic revision of ichthyosaurs, carried out in 1999 by Motani, Temnodontosaurus was placed in the clade Parvipelvia. It is this group of ichthyosaurs that includes all of the "fish-shaped" representatives, the more basal ichthyosaurs having more elongated body plans. In 2000, a new clade was erected within this subgroup, named Neoichthyosauria. This clade notably brings together Temnodontosaurus, Suevoleviathan, Leptonectidae and Thunnosauria, the latter including all of the ichthyosaurs that survived into the Cretaceous. For reasons of classification convenience, McGowan and Motani established the superfamily Temnodontosauroidea in 2003. In their phylogenetic revision published in 2016, Ji and colleagues classified Leptonectidae within this proposed superfamily, recovering Temnodontosaurus as its sister taxon. However, other classifications do not follow this model, preferring to stick to the definition of Neoichthyosauria mentioned above.
For several decades, Temnodontosaurus was a taxon whose monophyly was rarely questioned. The current diagnosis of the genus was first established in the revision made by McGowan in 1974, based on certain cranial and postcranial characteristics. However, as the cranial features of aquatic tetrapods are strongly influenced by convergent evolution, these do not seem ideal for establishing a stable taxonomy. Thus, since the late 1990s, many authors, including McGowan himself, have argued that Temnodontosaurus needs to be revised. Additionally, numerous recent phylogenetic analyses show that the genus as currently defined is polyphyletic, with some historically assigned species unrelated to each other. Pending future studies, Temnodontosaurus is therefore currently seen as a wastebasket taxon containing several large, more or less related neoichthyosaurians dating from the Lower Jurassic. In the most recent major study of the genus's taxonomy, carried out by Laboury et al. (2022), only four species appear to form a monophyletic grouping, namely T. platyodon, T. trigonodon, T. zetlandicus and T. nuertingensis.
Below, a simplified cladogram based on a Bayesian analysis conducted by Laboury et al. (2022):
Paleobiology
With their dolphin-like bodies, ichthyosaurs were better adapted to their aquatic environment than any other group of marine reptiles. They were viviparous, giving birth to live young, and were likely incapable of leaving the water. As homeotherms ("warm-blooded" animals) with high metabolic rates, ichthyosaurs would have been active swimmers. Jurassic and Cretaceous ichthyosaurs, including Temnodontosaurus, had evolved a thunniform method of swimming rather than the anguilliform (undulating or eel-like) method of earlier species. Temnodontosaurus, particularly the species T. trigonodon, was quite flexible for a parvipelvian, using its imposing flippers to maneuver underwater.
Ichthyosaurs have the largest eyes of any known vertebrates, with Temnodontosaurus having the largest identified. The sclerotic rings in their eyes would have served to resist aquatic pressure. The eyes of ichthyosaurs like Temnodontosaurus would have had great visual capacity owing to a high number of photoreceptor cells. In addition to good eyesight, the enlarged olfactory region of the brain indicates that ichthyosaurs had a keen sense of smell.
Diet and feeding
Paleontologists generally agree that Temnodontosaurus was likely an active predator of a variety of other marine animals. Prey hunted by the genus included bony fish, cephalopods, and aquatic reptiles, even including other ichthyosaurs. The skeletal anatomy of Temnodontosaurus suggests that it may have been an ambush predator. One skeleton of T. trigonodon (SMNS 50000) preserves in its stomach the remains of three juvenile Stenopterygius accompanied by a large number of cephalopod hooks. This indicates that the animal was indeed an apex predator, though its diet consisted mainly of molluscs, the large number of undigested hooks being compressed into a large gastric mass.
Paleoecology
Western Europe
In Europe, Temnodontosaurus is mainly known from fossils dating from the various stages of the Lower Jurassic of England, Germany, France and Luxembourg, though some more or less fragmentary specimens have also been reported from Belgium, Italy, and Switzerland.
Chile
While Temnodontosaurus was historically known only from Europe, a fragmentary specimen was discovered in 1988 in the Atacama Desert, Chile. This specimen, consisting of fragmentary remains of the jaws and since catalogued as SGO.PV.324, was rediscovered in 2016 in the collections of the National Museum of Natural History of Chile in Santiago, and first described in 2020. It comes from the volcanic beds of the La Negra Formation, dating from the Early Jurassic. The presence of ammonites potentially belonging to the genera Arnioceras and Paracoronicera indicates that the formation probably dates from the Sinemurian. The fossil record of vertebrates from what is now northern Chile is currently very thin, but remains quite similar to that of Europe. At that time, northern Chile was submerged by the southeastern part of the ancient superocean Panthalassa. Among the vertebrates identified are leptolepids, actinopterygians already present in Europe. Apart from Temnodontosaurus itself, the marine reptiles identified include thalattosuchians and undetermined plesiosaurs. The presence of these taxa in northern Chile could be explained by faunal exchange between the Tethys and Panthalassa, although other evidence for this remains thin.